* [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
@ 2011-07-27 16:35 Sylwester Nawrocki
  2011-07-28  2:55 ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-07-27 16:35 UTC (permalink / raw)
  To: linux-media, Mauro Carvalho Chehab

Hi Mauro,

The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:

  Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)

are available in the git repository at:

  git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro

Sylwester Nawrocki (28):
      s5p-fimc: Add support for runtime PM in the mem-to-mem driver
      s5p-fimc: Add media entity initialization
      s5p-fimc: Remove registration of video nodes from probe()
      s5p-fimc: Remove sclk_cam clock handling
      s5p-fimc: Limit number of available inputs to one
      s5p-fimc: Remove sensor management code from FIMC capture driver
      s5p-fimc: Remove v4l2_device from video capture and m2m driver
      s5p-fimc: Add the media device driver
      s5p-fimc: Conversion to use struct v4l2_fh
      s5p-fimc: Conversion to the control framework
      s5p-fimc: Add media operations in the capture entity driver
      s5p-fimc: Add PM helper function for streaming control
      s5p-fimc: Correct color format enumeration
      s5p-fimc: Convert to use media pipeline operations
      s5p-fimc: Add subdev for the FIMC processing block
      s5p-fimc: Add support for camera capture in JPEG format
      s5p-fimc: Add v4l2_device notification support for single frame capture
      s5p-fimc: Use consistent names for the buffer list functions
      s5p-fimc: Add runtime PM support in the camera capture driver
      s5p-fimc: Correct crop offset alignment on exynos4
      s5p-fimc: Remove single-planar capability flags
      noon010pc30: Do not ignore errors in initial controls setup
      noon010pc30: Convert to the pad level ops
      noon010pc30: Clean up the s_power callback
      noon010pc30: Remove g_chip_ident operation handler
      s5p-csis: Handle all available power supplies
      s5p-csis: Rework of the system suspend/resume helpers
      s5p-csis: Enable v4l subdev device node

 drivers/media/video/Kconfig                 |    4 +-
 drivers/media/video/noon010pc30.c           |  173 ++--
 drivers/media/video/s5p-fimc/Makefile       |    2 +-
 drivers/media/video/s5p-fimc/fimc-capture.c | 1424 +++++++++++++++++++--------
 drivers/media/video/s5p-fimc/fimc-core.c    | 1119 +++++++++++----------
 drivers/media/video/s5p-fimc/fimc-core.h    |  222 +++--
 drivers/media/video/s5p-fimc/fimc-mdevice.c |  859 ++++++++++++++++
 drivers/media/video/s5p-fimc/fimc-mdevice.h |  118 +++
 drivers/media/video/s5p-fimc/fimc-reg.c     |   76 +-
 drivers/media/video/s5p-fimc/mipi-csis.c    |   84 +-
 drivers/media/video/s5p-fimc/regs-fimc.h    |    8 +-
 include/media/s5p_fimc.h                    |   11 +
 include/media/v4l2-chip-ident.h             |    3 -
 13 files changed, 2921 insertions(+), 1182 deletions(-)
 create mode 100644 drivers/media/video/s5p-fimc/fimc-mdevice.c
 create mode 100644 drivers/media/video/s5p-fimc/fimc-mdevice.h

-- 
Regards,

Sylwester Nawrocki
Samsung Poland R&D Center


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-27 16:35 [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates Sylwester Nawrocki
@ 2011-07-28  2:55 ` Mauro Carvalho Chehab
  2011-07-28 10:09   ` Sylwester Nawrocki
  2011-07-29  8:17   ` Sakari Ailus
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-07-28  2:55 UTC (permalink / raw)
  To: Sylwester Nawrocki; +Cc: linux-media

Hi Sylwester,

Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
> Hi Mauro,
> 
> The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:
> 
>   Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)
> 
> are available in the git repository at:
> 
>   git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro
> 
> Sylwester Nawrocki (28):
>       s5p-fimc: Add support for runtime PM in the mem-to-mem driver
>       s5p-fimc: Add media entity initialization
>       s5p-fimc: Remove registration of video nodes from probe()
>       s5p-fimc: Remove sclk_cam clock handling
>       s5p-fimc: Limit number of available inputs to one
>       s5p-fimc: Remove sensor management code from FIMC capture driver
>       s5p-fimc: Remove v4l2_device from video capture and m2m driver
>       s5p-fimc: Add the media device driver
>       s5p-fimc: Conversion to use struct v4l2_fh
>       s5p-fimc: Conversion to the control framework
>       s5p-fimc: Add media operations in the capture entity driver
>       s5p-fimc: Add PM helper function for streaming control
>       s5p-fimc: Correct color format enumeration
>       s5p-fimc: Convert to use media pipeline operations
>       s5p-fimc: Add subdev for the FIMC processing block
>       s5p-fimc: Add support for camera capture in JPEG format
>       s5p-fimc: Add v4l2_device notification support for single frame capture
>       s5p-fimc: Use consistent names for the buffer list functions
>       s5p-fimc: Add runtime PM support in the camera capture driver
>       s5p-fimc: Correct crop offset alignment on exynos4
>       s5p-fimc: Remove single-planar capability flags
>       noon010pc30: Do not ignore errors in initial controls setup
>       noon010pc30: Convert to the pad level ops
>       noon010pc30: Clean up the s_power callback
>       noon010pc30: Remove g_chip_ident operation handler
>       s5p-csis: Handle all available power supplies
>       s5p-csis: Rework of the system suspend/resume helpers
>       s5p-csis: Enable v4l subdev device node

From the last time you've submitted a similar set of patches:

>> Why? The proper way to select an input is via S_INPUT. The driver may also
>> optionally allow changing it via the media device, but it should not be
>> a mandatory requirement, as the media device API is optional.
> 
> The problem I'm trying to solve here is sharing the sensors and mipi-csi receivers between multiple FIMC H/W instances. Previously the driver supported attaching a sensor to only one selected FIMC at compile time. You could, for instance, specify all sensors as the selected FIMC's platform data and then use S_INPUT to choose between them. The sensor could not be used together with any other FIMC. But this is desired due to different capabilities of the FIMC IP instances. And now, instead of hardcoding a sensor assigment to particular video node, the sensors are bound to the media device. The media device driver takes the list of sensors and attaches them one by one to subsequent FIMC instances when it is initializing. Each sensor has a link to each FIMC but only one of them is active by default. That said an user application can use selected camera by opening corresponding video node. Which camera is at which node can be queried with G_INPUT.
> 
> I could try to implement the previous S_INPUT behaviour, but IMHO this would lead to considerable and unnecessary driver code complication due to supporting overlapping APIs

From this current pull request:

From c6fb462c38be60a45d16a29a9e56c886ee0aa08c Mon Sep 17 00:00:00 2001
From: Sylwester Nawrocki <s.nawrocki@samsung.com>
Date: Fri, 10 Jun 2011 20:36:51 +0200
Subject: s5p-fimc: Conversion to the control framework
Cc: Linux Media Mailing List <linux-media@vger.kernel.org>

Make the driver inherit sensor controls when video node only API
compatibility mode is enabled. The control framework allows to
properly handle sensor controls through controls inheritance when
pipeline is re-configured at media device level.

...
-       .vidioc_queryctrl               = fimc_vidioc_queryctrl,
-       .vidioc_g_ctrl                  = fimc_vidioc_g_ctrl,
-       .vidioc_s_ctrl                  = fimc_cap_s_ctrl,
...

I'll need to take some time to review this patchset. So, it will likely
miss the bus for 3.1.

While the code inside this patch looked ok, your comments scared me ;)

In summary: The V4L2 API is not a legacy API that needs a "compatibility
mode". Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc. in
favor of the media controller API is wrong. This specific patch itself seems
ok, but it is easy to lose the big picture on a series of 28 patches
with about 4000 lines changed.

The media controller API is meant to be used only by specific applications
that might add some extra features to the driver. So, it is an optional
API. In all cases where both APIs can do the same thing, the proper way
is to use the V4L2 API only, and not the media controller API.

So, my current plan is to merge the patches into an experimental tree, after
reviewing the changeset, and test against a V4L2 application, in order to
confirm that everything is ok.

I may need a couple weeks for doing that, as it will take some time for me
to have an available window for hacking with it.

Regards,
Mauro


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-28  2:55 ` Mauro Carvalho Chehab
@ 2011-07-28 10:09   ` Sylwester Nawrocki
  2011-07-28 13:20     ` Mauro Carvalho Chehab
  2011-07-29  8:17   ` Sakari Ailus
  1 sibling, 1 reply; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-07-28 10:09 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

Hi Mauro,

On 07/28/2011 04:55 AM, Mauro Carvalho Chehab wrote:
> Hi Sylwester,
> 
> Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
>> Hi Mauro,
>>
>> The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:
>>
>>   Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)
>>
>> are available in the git repository at:
>>
>>   git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro
>>
>> Sylwester Nawrocki (28):
>>       s5p-fimc: Add support for runtime PM in the mem-to-mem driver
>>       s5p-fimc: Add media entity initialization
>>       s5p-fimc: Remove registration of video nodes from probe()
>>       s5p-fimc: Remove sclk_cam clock handling
>>       s5p-fimc: Limit number of available inputs to one
>>       s5p-fimc: Remove sensor management code from FIMC capture driver
>>       s5p-fimc: Remove v4l2_device from video capture and m2m driver
>>       s5p-fimc: Add the media device driver
>>       s5p-fimc: Conversion to use struct v4l2_fh
>>       s5p-fimc: Conversion to the control framework
>>       s5p-fimc: Add media operations in the capture entity driver
>>       s5p-fimc: Add PM helper function for streaming control
>>       s5p-fimc: Correct color format enumeration
>>       s5p-fimc: Convert to use media pipeline operations
>>       s5p-fimc: Add subdev for the FIMC processing block
>>       s5p-fimc: Add support for camera capture in JPEG format
>>       s5p-fimc: Add v4l2_device notification support for single frame capture
>>       s5p-fimc: Use consistent names for the buffer list functions
>>       s5p-fimc: Add runtime PM support in the camera capture driver
>>       s5p-fimc: Correct crop offset alignment on exynos4
>>       s5p-fimc: Remove single-planar capability flags
>>       noon010pc30: Do not ignore errors in initial controls setup
>>       noon010pc30: Convert to the pad level ops
>>       noon010pc30: Clean up the s_power callback
>>       noon010pc30: Remove g_chip_ident operation handler
>>       s5p-csis: Handle all available power supplies
>>       s5p-csis: Rework of the system suspend/resume helpers
>>       s5p-csis: Enable v4l subdev device node
> 
> From the last time you've submitted a similar set of patches:
> 
>>> Why? The proper way to select an input is via S_INPUT. The driver may also
>>> optionally allow changing it via the media device, but it should not be
>>> a mandatory requirement, as the media device API is optional.
>>
>> The problem I'm trying to solve here is sharing the sensors and
>> mipi-csi receivers between multiple FIMC H/W instances. Previously
>> the driver supported attaching a sensor to only one selected FIMC
>> at compile time. You could, for instance, specify all sensors as
>> the selected FIMC's platform data and then use S_INPUT to choose
>> between them. The sensor could not be used together with any other
>> FIMC. But this is desired due to different capabilities of the FIMC
>> IP instances. And now, instead of hardcoding a sensor assigment to
>> particular video node, the sensors are bound to the media device.
>> The media device driver takes the list of sensors and attaches them
>> one by one to subsequent FIMC instances when it is initializing. Each
>> sensor has a link to each FIMC but only one of them is active by 
>> default. That said an user application can use selected camera by
>> opening corresponding video node. Which camera is at which node
>> can be queried with G_INPUT.
>>
>> I could try to implement the previous S_INPUT behaviour, but IMHO this
>> would lead to considerable and unnecessary driver code complication due
>> to supporting overlapping APIs
> 
> From this current pull request:
> 
> From c6fb462c38be60a45d16a29a9e56c886ee0aa08c Mon Sep 17 00:00:00 2001
> From: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Date: Fri, 10 Jun 2011 20:36:51 +0200
> Subject: s5p-fimc: Conversion to the control framework
> Cc: Linux Media Mailing List <linux-media@vger.kernel.org>
> 
> Make the driver inherit sensor controls when video node only API
> compatibility mode is enabled. The control framework allows to
> properly handle sensor controls through controls inheritance when
> pipeline is re-configured at media device level.
> 
> ...
> -       .vidioc_queryctrl               = fimc_vidioc_queryctrl,
> -       .vidioc_g_ctrl                  = fimc_vidioc_g_ctrl,
> -       .vidioc_s_ctrl                  = fimc_cap_s_ctrl,
> ...
> 
> I'll need to take some time to review this patchset. So, it will likely
> miss the bus for 3.1.

OK, thanks for your time!

> 
> While the code inside this patch looked ok, your comments scared me ;)

I didn't mean to scare you, sorry ;)

> 
> In summary: The V4L2 API is not a legacy API that needs a "compatibility
> mode". Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
> favor of the media controller API is wrong. This specific patch itself seems

Yes, it's the second time you say the MC API is only optional ;) I should
have formulated the summary from the other point of view. I wrote it in
the context of the two compatibility modes: V4L2 and MC API. Perhaps not
the most fortunate wording.

> ok, but it is easy to loose the big picture on a series of 28 patches
> with about 4000 lines changed.

Yes, I agree. You really have to look carefully at the final result too.

When it comes to controls, as you might know, I didn't remove any
functionality. Although the ioctl handlers were removed from the driver,
the controls are still handled, now by the control framework.
Here is the complete patch:
http://git.infradead.org/users/kmpark/linux-2.6-samsung/commitdiff/c6fb462c38be60a45d16a29a9e56c886ee0aa08c

For the full picture it should be noted that the control ops are created
and the control handler is assigned to the video device node:

...
+
+static const struct v4l2_ctrl_ops fimc_ctrl_ops = {
+       .s_ctrl = fimc_s_ctrl,
+};
...
        .vidioc_cropcap                 = fimc_cap_cropcap,
@@ -727,6 +724,7 @@ int fimc_register_capture_device(struct fimc_dev *fimc,
        if (ret)
                goto err_ent;

+       vfd->ctrl_handler = &ctx->ctrl_handler;
...

and then in v4l2-ioctl.c all the control ioctls are still handled:

...
	/* --- controls ---------------------------------------------- */
	case VIDIOC_QUERYCTRL:
	{
		struct v4l2_queryctrl *p = arg;

		if (vfh && vfh->ctrl_handler)
			ret = v4l2_queryctrl(vfh->ctrl_handler, p);
		else if (vfd->ctrl_handler)
			ret = v4l2_queryctrl(vfd->ctrl_handler, p);
		else if (ops->vidioc_queryctrl)
			ret = ops->vidioc_queryctrl(file, fh, p);
...

So, as far as controls are concerned, there is no functionality removal,
especially not in favour of the MC API. This is just (hopefully) proper
integration with the control framework ;]
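
For reference, the generic pattern looks more or less like this (a simplified,
hypothetical sketch with a made-up hflip control and my_* names, not the actual
fimc code):

#include <media/v4l2-ctrls.h>
#include <media/v4l2-dev.h>

static int my_s_ctrl(struct v4l2_ctrl *ctrl)
{
	switch (ctrl->id) {
	case V4L2_CID_HFLIP:
		/* program the flip bit into the hardware here */
		return 0;
	}
	return -EINVAL;
}

static const struct v4l2_ctrl_ops my_ctrl_ops = {
	.s_ctrl = my_s_ctrl,
};

static int my_init_controls(struct v4l2_ctrl_handler *hdl,
			    struct video_device *vfd)
{
	/* register the control with the framework ... */
	v4l2_ctrl_handler_init(hdl, 1);
	v4l2_ctrl_new_std(hdl, &my_ctrl_ops, V4L2_CID_HFLIP, 0, 1, 1, 0);
	if (hdl->error)
		return hdl->error;

	/* ... and let v4l2-ioctl.c serve VIDIOC_*CTRL through it */
	vfd->ctrl_handler = hdl;
	return 0;
}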

> 
> The media controller API is meant to be used only by specific applications
> that might add some extra features to the driver. So, it is an optional
> API. In all cases where both API's can do the same thing, the proper way 
> is to use the V4L2 API only, and not the media controller API.

Yes, I continually keep that in mind. But there are some corner cases where
things are not so obvious, e.g. normally a video node inherits controls
from a sensor subdev, so all the controls are available at the video node.

But when using the subdev API, the right thing to do at the video node is not
to inherit the sensor's controls. You could of course accumulate all controls
at the video node at all times, so that the same (sensor subdev's) controls are
available at /dev/video* and /dev/v4l-subdev*. But this is confusing to an MC
API aware application, which could wrongly assume that there are more controls
at the host device than there really are.

Thus it's a bit hard to imagine that we could do something like "optionally
not to inherit controls" as the subdev/MC API is optional. :)

Also, the sensor subdev can be configured in the video node driver as well as
through the subdev device node. Both APIs can do the same thing, but in order
to let the subdev API work as expected the video node driver must be forbidden
to configure the subdev. There is a conflict there: in order to use the
'optional' API, the 'main' API behaviour must be affected...
And I really can't use the V4L2 API alone as it is, because it's too limited.
Maybe that's why I see migration to OpenMAX more and more often recently.

> 
> So, my current plan is to merge the patches into an experimental tree, after
> reviewing the changeset, and test against a V4L2 application, in order to
> confirm that everything is ok.

Sure, thanks. Basically I have been testing with the same application before
and after the patchset. Tomasz also tried his libv4l2 mplane patches with
this reworked driver. So in general I do not expect any surprises, but
the testing can only help us :)

> 
> I may need a couple weeks for doing that, as it will take some time for me
> to have an available window for hacking with it.

First I need to provide you with the board setup patches for smdkv310, in order
to make the m5mols camera work on this board. I depend on others to do this job
and it is taking a painfully long time. Maybe after the upcoming Linaro Connect
I'll get things sped up.

---
Best regards,
Sylwester
-- 
Sylwester Nawrocki
Samsung Poland R&D Center


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-28 10:09   ` Sylwester Nawrocki
@ 2011-07-28 13:20     ` Mauro Carvalho Chehab
  2011-07-28 22:57       ` Sylwester Nawrocki
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-07-28 13:20 UTC (permalink / raw)
  To: Sylwester Nawrocki; +Cc: linux-media

Em 28-07-2011 07:09, Sylwester Nawrocki escreveu:
> Hi Mauro,
> 
> On 07/28/2011 04:55 AM, Mauro Carvalho Chehab wrote:
>> Hi Sylwester,
>>
>> Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
>>> Hi Mauro,
>> In summary: The V4L2 API is not a legacy API that needs a "compatibility
>> mode". Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
>> favor of the media controller API is wrong. This specific patch itself seems
> 
> Yes, it's the second time you say MC API is only optional;) I should
> had formulated the summary from the other point of view. I wrote this
> in context of two: V4L2 and MC API compatibility modes. Perhaps not too
> fortunate wording.

A clear patch description helps reviewers understand what the patch is doing,
why, and how. Sometimes I write a one-line patch together with 30+
lines of patch description. Especially on tricky patches, please be verbose
in the descriptions.

>> ok, but it is easy to loose the big picture on a series of 28 patches
>> with about 4000 lines changed.
> 
> Yes, I agree. You really have to look carefully at the final result too.
> 
> When it comes to controls, as you might know, I didn't remove any
> functionality. Although the ioctl handlers are gotten rid of in the driver,
> they are handled in the control framework.

Yes, I noticed, but, on a complex driver with several subdevs, it is not that
simple to check where the controls are actually created, or whether they require
an MC API call to create them or not. Especially on patch series made by the
manufacturer, I generally don't spend much time fully understanding the driver
logic, as I assume that the developers are doing their best to make the driver
work. My main concern is to be sure that the driver will be doing the right
thing, in the light of the V4L2 API. The MC API made my work harder, as now I
also need to check whether, for the device to work, it needs some MC API calls.

So, I have now 2 alternatives left:
  a) to test with a device; 
  b) to fully understand the driver's logic.

Both are very time-consuming, but (a) is quicker, and safer, of course provided
that I don't need to dig into several trees to get patches because not
everything is upstream yet.

This also means that I'll need the (complex) patches for devices with MC several weeks 
before the next merge window, to give me some time to open a window for testing.

>> The media controller API is meant to be used only by specific applications
>> that might add some extra features to the driver. So, it is an optional
>> API. In all cases where both API's can do the same thing, the proper way 
>> is to use the V4L2 API only, and not the media controller API.
> 
> Yes I continually keep that in mind. But there are some corner cases when
> things are not so obvious, e.g. normally a video node inherits controls
> from a sensor subdev. So all the controls are available at the video node.
> 
> But when using the subdev API, the right thing to do at the video node is not
> to inherit sensor's controls. You could of course accumulate all controls at
> video node at all times, such as same (sensor subdev's) controls are available
> at /dev/video* and /dev/v4l-subdev*. But this is confusing to MC API aware
> application which could wrongly assume that there is more controls at the host
> device than there really is.

Accumulating sub-dev controls at the video node is the right thing to do.

An MC-aware application will need to handle that, but that doesn't sound
hard. All such an application would need to do is first probe the subdev
controls and, when parsing the videodev controls, not register controls with
duplicated IDs, or mark them with some special attribute.
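
On the application side that would be roughly (a hypothetical sketch; error
handling is omitted and the device paths and fixed-size table are just
examples):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void enum_ctrls(int fd, __u32 *seen, unsigned int *nseen)
{
	struct v4l2_queryctrl qc;

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
	while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
		unsigned int i, dup = 0;

		for (i = 0; i < *nseen; i++)
			if (seen[i] == qc.id)
				dup = 1;
		if (!dup && *nseen < 64)
			seen[(*nseen)++] = qc.id; /* register it only once */
		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
	}
}

int main(void)
{
	__u32 seen[64];
	unsigned int nseen = 0;
	int sd = open("/dev/v4l-subdev0", O_RDWR);
	int vd = open("/dev/video0", O_RDWR);

	enum_ctrls(sd, seen, &nseen); /* probe the subdev controls first */
	enum_ctrls(vd, seen, &nseen); /* then the video node; duplicates skipped */
	return 0;
}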

> Thus it's a bit hard to imagine that we could do something like "optionally
> not to inherit controls" as the subdev/MC API is optional. :)

This was actually implemented. There are some cases in the ivtv/cx18 drivers
where both the bridge and a subdev provide the same control (audio volume, for
example). The idea is to allow the bridge driver to touch the subdev control
without exposing it to userspace, since the desire was that the bridge driver
itself would expose such a control, using logic that combines changing the
subdev and the bridge registers for volume.

> Also, the sensor subdev can be configured in the video node driver as well as
> through the subdev device node. Both APIs can do the same thing but in order
> to let the subdev API work as expected the video node driver must be forbidden
> to configure the subdev.

Why? For the sensor, a V4L2 API call will look just like a bridge driver call.
The subdev will need a mutex anyway, as two MC applications may be opening it
simultaneously. I can't see why changing the control from a bridge driver call
should be forbidden.

> There is a conflict there that in order to use 
> 'optional' API the 'main' API behaviour must be affected....

It is optional from the userspace perspective. A V4L2-only application should be
able to work with all drivers. However, an MC-aware application will likely be
specific to some hardware, as it will need to know some device-specific stuff.

Both kinds of applications are welcome, but dropping support for V4L2-only applications
is the wrong thing to do.

> And I really cant use V4L2 API only as is because it's too limited.

Why?

> Might be that's why I see more and more often migration to OpenMAX recently.

I don't think so. People may be adopting OpenMAX just because of some marketing strategy
from the OpenMAX forum. We don't spend money to announce V4L2 ;)

I think that writing a pure OpenMAX driver is the wrong thing to do, as, in the
long term, it will cost _a_lot_ for the vendors to maintain something that will
never be merged upstream.

On the other hand, a V4L2/MC <==> OpenMAX abstraction layer/library in userspace
makes sense, as it will open support for OpenMAX-aware userspace applications.
Using a standard there would allow someone to write an application that would
work on more than one operating system.

Yet, if I were asked to write a multi-OS application, I would probably opt to
write an OS-specific driver for each OS, as it would allow the application
to be optimized for that OS. The OS-specific layer is a small part of such an
application. Btw, xawtv successfully does that, supporting both V4L2 and a
BSD-specific API. The size of the driver corresponds to about 2.5% of the total
size of the application. So, writing two or three drivers won't have any
significant impact on the TCO of such an application, especially because
maintaining a generic multi-OS driver requires much more time for
debugging/testing/maintenance than a per-OS driver. But, of course, this
is a matter of developer's taste.

>> So, my current plan is to merge the patches into an experimental tree, after
>> reviewing the changeset, and test against a V4L2 application, in order to
>> confirm that everything is ok.
> 
> Sure, thanks. Basically I have been testing with same application before
> and after the patchset. Tomasz also tried his libv4l2 mplane patches with
> this reworked driver. So in general I do not expect any surprises, 

Good to know.

> but the testing can only help us:)
> 
>>
>> I may need a couple weeks for doing that, as it will take some time for me
>> to have an available window for hacking with it.
> 
> At first I need to provide you with the board setup patches for smdkv310, in order
> to make m5mols camera work on this board. I depend on others to make this job
> and this is taking painfully long time. Maybe after the upcomning Linaro Connect
> I'll get things speed up.

Ok. I'll be waiting for you.

Thanks!
Mauro


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-28 13:20     ` Mauro Carvalho Chehab
@ 2011-07-28 22:57       ` Sylwester Nawrocki
  2011-07-29  4:02         ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-07-28 22:57 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Sylwester Nawrocki, linux-media

On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
>>> In summary: The V4L2 API is not a legacy API that needs a "compatibility
>>> mode". Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
>>> favor of the media controller API is wrong. This specific patch itself seems
>>
>> Yes, it's the second time you say MC API is only optional;) I should
>> had formulated the summary from the other point of view. I wrote this
>> in context of two: V4L2 and MC API compatibility modes. Perhaps not too
>> fortunate wording.
> 
> A clear patch description helps for reviewers to understand what, why and how
> the patch is doing. Sometimes, I write a one-line patch, together with a 30+
> lines of patch description. Especially on tricky patches, please be verbose
> at the descriptions.
> 
>>> ok, but it is easy to loose the big picture on a series of 28 patches
>>> with about 4000 lines changed.
>>
>> Yes, I agree. You really have to look carefully at the final result too.
>>
>> When it comes to controls, as you might know, I didn't remove any
>> functionality. Although the ioctl handlers are gotten rid of in the driver,
>> they are handled in the control framework.
> 
> Yes, I noticed, but, on a complex driver with several subdevs, it is not that
> simple to check where the controls are actually created, or if they require a
> MC API call to create them or not. Especially on patch series made by the

Sure, I fully understand.

> manufacturer, I generally don't spend much time to fully understand the driver logic,
> as I assume that the developers are doing the better to make the driver to work.
> My main concern is to be sure that the driver will be doing the right thing,
> in the light of the V4L2 API. The MC API made my work harder, as now, I need to
> check also if, for the device to work, it needs some MC API calls.
> 
> So, I have now 2 alternatives left:
>    a) to test with a device;
>    b) to fully understand the driver's logic.
> 
> Both are very time-consuming, but (a) is quicker, and safer, of course provided that
> I don't need to dig into several trees to get patches because not everything is not
> upstream yet.
> 
> This also means that I'll need the (complex) patches for devices with MC several weeks
> before the next merge window, to give me some time to open a window for testing.
> 
>>> The media controller API is meant to be used only by specific applications
>>> that might add some extra features to the driver. So, it is an optional
>>> API. In all cases where both API's can do the same thing, the proper way
>>> is to use the V4L2 API only, and not the media controller API.
>>
>> Yes I continually keep that in mind. But there are some corner cases when
>> things are not so obvious, e.g. normally a video node inherits controls
>> from a sensor subdev. So all the controls are available at the video node.
>>
>> But when using the subdev API, the right thing to do at the video node is not
>> to inherit sensor's controls. You could of course accumulate all controls at
>> video node at all times, such as same (sensor subdev's) controls are available
>> at /dev/video* and /dev/v4l-subdev*. But this is confusing to MC API aware
>> application which could wrongly assume that there is more controls at the host
>> device than there really is.
> 
> Accumulating sub-dev controls at the video node is the right thing to do.
> 
> An MC-aware application will need to handle with that, but that doesn't sound to
> be hard. All such application would need to do is to first probe the subdev controls,
> and, when parsing the videodev controls, not register controls with duplicated ID's,
> or to mark them with some special attribute.

IMHO it's not a big issue in general. Still, both the subdev and the host device
may support the same control ID. And then even though the control IDs are the
same on the subdev and the host, they could mean physically different controls
(since when registering a subdev at the host driver the host's controls take
precedence and duplicate subdev controls are skipped).

Also, there might be some preference at user space as to which stage of the
pipeline some controls should be applied at. This is where the subdev API helps,
and the plain video node API does not.
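
E.g. with the subdev device node the application can apply a control at exactly
the stage it wants (a hypothetical sketch; the node path and the gain control
are just examples):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_control ctrl = { .id = V4L2_CID_GAIN, .value = 4 };
	/* set the gain on the sensor itself, not on /dev/video* */
	int fd = open("/dev/v4l-subdev0", O_RDWR);

	if (fd < 0)
		return 1;
	return ioctl(fd, VIDIOC_S_CTRL, &ctrl) ? 1 : 0;
}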

> 
>> Thus it's a bit hard to imagine that we could do something like "optionally
>> not to inherit controls" as the subdev/MC API is optional. :)
> 
> This was actually implemented. There are some cases at ivtv/cx18 driver where both
> the bridge and a subdev provides the same control (audio volume, for example). The
> idea is to allow the bridge driver to touch at the subdev control without exposing
> it to userspace, since the desire was that the bridge driver itself would expose
> such control, using a logic that combines changing the subdev and the bridge registers
> for volume.

This seems like hard-coding a policy in the driver ;) Then there is no way (it
might not be worth the effort though) to play with the volume level at both
devices, e.g. to obtain an optimal S/N ratio. This is a hack... sorry, just
joking ;-) Seriously, I think the situation with the userspace subdevs is a bit
different. Because with one API we directly expose some functionality to
applications, with the other we code it in the kernel, to make the devices
appear uniform at user space.

> 
>> Also, the sensor subdev can be configured in the video node driver as well as
>> through the subdev device node. Both APIs can do the same thing but in order
>> to let the subdev API work as expected the video node driver must be forbidden
>> to configure the subdev.
> 
> Why? For the sensor, a V4L2 API call will look just like a bridge driver call.
> The subdev will need a mutex anyway, as two MC applications may be opening it
> simultaneously. I can't see why it should forbid changing the control from the
> bridge driver call.

Please do not forget there might be more than one subdev to configure, and that
the bridge itself is also a subdev (which exposes a scaler interface, for
instance). The situation is pretty much like in Figure 4.4 [1] (after the scaler
there is also a video node to configure, but we may assume that the pixel
resolution at scaler pad 1 is the same as at the video node). Assuming the
format and crop configuration flow is in the sensor-to-host-scaler direction, if
we tried to configure _all_ subdevs when the last stage of the pipeline (i.e.
the video node) is configured, the whole scaler and crop/composition
configuration would have been destroyed by that time. And there is more to
configure than VIDIOC_S_FMT can do.

Allowing the bridge driver to configure subdevs at all times would prevent
the subdev/MC API from working.
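
To give an idea what pad-level configuration looks like from user space (a
rough, hypothetical sketch; the subdev node, pad number, resolution and media
bus code are only examples):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

int main(void)
{
	struct v4l2_subdev_format fmt;
	/* e.g. the scaler subdev exposed by the bridge */
	int fd = open("/dev/v4l-subdev1", O_RDWR);

	memset(&fmt, 0, sizeof(fmt));
	fmt.pad = 1;                           /* source pad */
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.format.width = 1280;
	fmt.format.height = 720;
	fmt.format.code = V4L2_MBUS_FMT_YUYV8_2X8;
	fmt.format.field = V4L2_FIELD_NONE;

	/* something VIDIOC_S_FMT on the video node alone cannot express */
	return ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt) ? 1 : 0;
}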

> 
>> There is a conflict there that in order to use
>> 'optional' API the 'main' API behaviour must be affected....
> 
> It is optional from userspace perspective. A V4L2-only application should be able
> to work with all drivers. However, a MC-aware application will likely be specific
> for some hardware, as it will need to know some device-specific stuff.
> 
> Both kinds of applications are welcome, but dropping support for V4L2-only applications
> is the wrong thing to do.
> 
>> And I really cant use V4L2 API only as is because it's too limited.
> 
> Why?

For instance, there is really no support yet for scaling and composition onto
a target buffer in the Video Capture Interface (we also use sensors with
built-in scalers). It's difficult to efficiently manage capture/preview
pipelines. It is impossible to discover the system topology.
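
Just to illustrate the topology point, with the MC API discovering the entities
is a simple loop (a rough, hypothetical sketch; /dev/media0 is an example path):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_entity_desc entity;
	__u32 id = 0;
	int fd = open("/dev/media0", O_RDWR);

	if (fd < 0)
		return 1;
	for (;;) {
		memset(&entity, 0, sizeof(entity));
		entity.id = id | MEDIA_ENT_ID_FLAG_NEXT;
		if (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) < 0)
			break; /* no more entities */
		printf("entity %u: %s (%u pads, %u links)\n",
		       entity.id, entity.name, entity.pads, entity.links);
		id = entity.id;
	}
	return 0;
}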

> 
>> Might be that's why I see more and more often migration to OpenMAX recently.
> 
> I don't think so. People may be adopting OpenMAX just because of some marketing strategy
> from the OpenMAX forum. We don't spend money to announce V4L2 ;)

:) 

> 
> I think that writing a pure OpenMAX driver is the wrong thing to do, as, at the long
> term, it will cost _a_lot_ for the vendors to maintain something that will never be
> merged upstream.

In general it depends on priorities. For some chip vendors it might be more
important to provide a solution in a short time frame. Getting something into
mainline isn't effortless, and spending lots of time on this is unacceptable
for some parties.

> 
> On the other hand, a V4L2/MC<==>  OpenMAX abstraction layer/library at userspace makes
> sense, as it will open support for OpenMAX-aware userspace applications. Using an
> standard there would allow someone to write an application that would work on more
> than one operational system.

I think this is called OpenMAX IL (integration layer). And it seems it indeed
makes sense, given that the video4linux API isn't a bottleneck. I think with the
Media Controller API it is not. The MC API seems to integrate quite nicely with
OMX as the IL.

> 
> Yet, if I would be requested to write a multi-OS application, I would probably be
> opting to write an OS-specific driver for each OS, as it would allow the application
> to be optimized for that OS. The OS-specific layer is the small part of such application.
> Btw, successfully xawtv does that, supporting both V4L2 and a BSD-specific API.
> The size of the driver corresponds to about 2.5% of the total size of the application.
> So, writing two or tree drivers won't have any significant impact at the TCO of such
> application, especially because maintaining a generic multi-OS driver requires much more
> time for debugging/testing/maintaining than a per-OS driver. But, of course, this
> is a matter of developer's taste.
> 
>>> So, my current plan is to merge the patches into an experimental tree, after
>>> reviewing the changeset, and test against a V4L2 application, in order to
>>> confirm that everything is ok.
>>
>> Sure, thanks. Basically I have been testing with same application before
>> and after the patchset. Tomasz also tried his libv4l2 mplane patches with
>> this reworked driver. So in general I do not expect any surprises,
> 
> Good to know.
> 
>> but the testing can only help us:)
>>
>>>
>>> I may need a couple weeks for doing that, as it will take some time for me
>>> to have an available window for hacking with it.
>>
>> At first I need to provide you with the board setup patches for smdkv310, in order
>> to make m5mols camera work on this board. I depend on others to make this job
>> and this is taking painfully long time. Maybe after the upcoming Linaro Connect
>> I'll get things speed up.
> 
> Ok. I'll be waiting for you.
> 
> Thanks!
> Mauro

[1] http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html

---
Regards,
Sylwester


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-28 22:57       ` Sylwester Nawrocki
@ 2011-07-29  4:02         ` Mauro Carvalho Chehab
  2011-07-29  8:36           ` Laurent Pinchart
  2011-08-03 14:28           ` Sylwester Nawrocki
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-07-29  4:02 UTC (permalink / raw)
  To: Sylwester Nawrocki; +Cc: Sylwester Nawrocki, linux-media

Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:

>> Accumulating sub-dev controls at the video node is the right thing to do.
>>
>> An MC-aware application will need to handle with that, but that doesn't sound to
>> be hard. All such application would need to do is to first probe the subdev controls,
>> and, when parsing the videodev controls, not register controls with duplicated ID's,
>> or to mark them with some special attribute.
> 
> IMHO it's not a big issue in general. Still, both subdev and the host device may 
> support same control id. And then even though the control ids are same on the subdev
> and the host they could mean physically different controls (since when registering 
> a subdev at the host driver the host's controls take precedence and doubling subdev
> controls are skipped).

True, but, except for specific usecases, the host control is enough.

> Also there might be some preference at user space, at which stage of the pipeline
> to apply some controls. This is where the subdev API helps, and plain video node
> API does not.

Again, this is for specific use cases. In such cases, what is expected is that
the more generic control will be exported via the V4L2 API.

>>
>>> Thus it's a bit hard to imagine that we could do something like "optionally
>>> not to inherit controls" as the subdev/MC API is optional. :)
>>
>> This was actually implemented. There are some cases at ivtv/cx18 driver where both
>> the bridge and a subdev provides the same control (audio volume, for example). The
>> idea is to allow the bridge driver to touch at the subdev control without exposing
>> it to userspace, since the desire was that the bridge driver itself would expose
>> such control, using a logic that combines changing the subdev and the bridge registers
>> for volume.
> 
> This seem like hard coding a policy in the driver;) Then there is no way (it might not
> be worth the effort though) to play with volume level at both devices, e.g. to obtain
> optimal S/N ratio.

In general, playing with just one control is enough. Andy had a different
opinion when this issue was discussed, and he thinks that playing with both is
better. In the end, this is a developer's decision, depending on how much
information (and how many bug reports) he has.

> This is a hack...sorry, just joking ;-) Seriously, I think the
> situation with the userspace subdevs is a bit different. Because with one API we
> directly expose some functionality for applications, with other we code it in the
> kernel, to make the devices appear uniform at user space.

Not sure if I understood you. V4L2 exports driver functionality to userspace in
a uniform way. The MC API is for special applications that might need to access
some internal functions on embedded devices.

Of course, there are some cases where it doesn't make sense to export a subdev control
via V4L2 API.

>>> Also, the sensor subdev can be configured in the video node driver as well as
>>> through the subdev device node. Both APIs can do the same thing but in order
>>> to let the subdev API work as expected the video node driver must be forbidden
>>> to configure the subdev.
>>
>> Why? For the sensor, a V4L2 API call will look just like a bridge driver call.
>> The subdev will need a mutex anyway, as two MC applications may be opening it
>> simultaneously. I can't see why it should forbid changing the control from the
>> bridge driver call.
> 
> Please do not forget there might be more than one subdev to configure and that
> the bridge itself is also a subdev (which exposes a scaler interface, for instance).
> A situation pretty much like in Figure 4.4 [1] (after the scaler there is also
> a video node to configure, but we may assume that pixel resolution at the scaler
> pad 1 is same as at the video node). Assuming the format and crop configuration 
> flow is from sensor to host scaler direction, if we have tried to configure _all_
> subdevs when the last stage of the pipeline is configured (i.e. video node) 
> the whole scaler and crop/composition configuration we have been destroyed at
> that time. And there is more to configure than VIDIOC_S_FMT can do.

Think from the user's perspective: all the user wants is to see video at a given
resolution. S_FMT (and a few other VIDIOC_* calls) have everything that the user
wants: the desired resolution, framerate and format.
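
Something like this is really all a generic application needs (a hypothetical
sketch; the device path, resolution and frame rate are just examples):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_format fmt;
	struct v4l2_streamparm parm;
	int fd = open("/dev/video0", O_RDWR);

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 640;
	fmt.fmt.pix.height = 480;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	ioctl(fd, VIDIOC_S_FMT, &fmt);           /* resolution and format */

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	parm.parm.capture.timeperframe.numerator = 1;
	parm.parm.capture.timeperframe.denominator = 30;
	ioctl(fd, VIDIOC_S_PARM, &parm);         /* frame rate */
	return 0;
}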

Specialized applications indeed need more, in order to get the best images for
certain types of usages. So, MC is there.

Such applications will probably need to know exactly what the sensor is, what
its bugs are, how it is connected, what the DSP blocks in the path are, how the
DSP algorithms are implemented, etc., in order to obtain the perfect image.

Even on embedded devices like smartphones and tablets, I predict that both
types of applications will be developed and used: people may use a generic
application like a Flash player, and a specialized application provided by
the manufacturer. Users can even develop their own generic apps using V4L2
directly, on the devices that allow that.

As I said before: both application types are welcome. We just need to guarantee
that a pure V4L application will work reasonably well.

> Allowing the bridge driver to configure subdevs at all times would prevent
> the subdev/MC API to work. 

Well, then we need to think of an alternative for that. It seems like an
interesting topic for the media workshop at the 2011 Kernel Summit.

>>> There is a conflict there that in order to use
>>> 'optional' API the 'main' API behaviour must be affected....
>>
>> It is optional from userspace perspective. A V4L2-only application should be able
>> to work with all drivers. However, a MC-aware application will likely be specific
>> for some hardware, as it will need to know some device-specific stuff.
>>
>> Both kinds of applications are welcome, but dropping support for V4L2-only applications
>> is the wrong thing to do.
>>
>>> And I really cant use V4L2 API only as is because it's too limited.
>>
>> Why?
> 
> For instance there is really yet no support for scaler and composition onto
> a target buffer in the Video Capture Interface (we also use sensors with 
> built in scalers). It's difficult to efficiently manage capture/preview pipelines.
> It is impossible to discover the system topology.

Scaling has always been supported by V4L2: if the resolution specified by S_FMT
is not what the sensor provides, then scale. All non-embedded drivers with a
sensor or bridge that supports scaling do that.

Composition is not properly supported yet. It could make sense to add it to V4L.
How do you think the MC API would help with composition?

Managing capture/preview pipelines will require some support at V4L2 level. This is
a problem that needs to be addressed there anyway, as buffers for preview/capture
need to be allocated. There's an RFC about that, but I don't think it covers the
pipelines for it.

Discovering the system topology is indeed not part of the V4L2 API and never
will be. This is MC API business. There's no overlap with V4L2.

>>> Might be that's why I see more and more often migration to OpenMAX recently.
>>
>> I don't think so. People may be adopting OpenMAX just because of some marketing strategy
>> from the OpenMAX forum. We don't spend money to announce V4L2 ;)
> 
> :) 
> 
>>
>> I think that writing a pure OpenMAX driver is the wrong thing to do, as, at the long
>> term, it will cost _a_lot_ for the vendors to maintain something that will never be
>> merged upstream.
> 
> In general it depends on priorities. For some chip vendors it might be more important
> to provide a solution in short time frame. Getting something in mainline isn't effortless
> and spending lot's of time on this for some parties is unacceptable.

Adding something to a forum like OpenMAX is probably not an easy task ;) It
generally takes a long time to change something in open forum specifications.
Also, participating in those forums requires lots of money, for membership and
travel expenses.

Adding a new driver that follows the APIs is not a long-term effort. The delay
is basically one kernel cycle, e.g. about 3-4 months.

Most of the delays in merging drivers for embedded systems, however, take way
longer than that, unfortunately. From what I see, driver submission is delayed
due to:
	- lack of knowledge of how to work with the Linux community; some developers
take a long time to get the 'modus operandi';
	- the need for API changes; it is still generally faster to add a new API
to the kernel than to an open forum;
	- the discussions inside the various teams (generally from the same company,
or the company and their customers) about the best way to implement some feature.

All of the above also happens when developing an OpenMAX driver: companies need
to learn how to work with the OpenMAX forums, API changes may be required, so
they need to participate in the forums, and the several internal teams and
customers will discuss the requirements.

I bet that there's also one additional step: to submit the driver to some
company that will check the driver's compliance with the official API. Such a
certification process is generally expensive and takes a long time.

At the end of the day, they'll also spend a lot of time getting the driver done,
or they'll end up releasing a "not-quite-OpenMAX" driver and then need to rework
it, due to customer complaints or certification compliance, losing time and
money.

Regards,
Mauro.


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-28  2:55 ` Mauro Carvalho Chehab
  2011-07-28 10:09   ` Sylwester Nawrocki
@ 2011-07-29  8:17   ` Sakari Ailus
  1 sibling, 0 replies; 44+ messages in thread
From: Sakari Ailus @ 2011-07-29  8:17 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Sylwester Nawrocki, linux-media

On Wed, Jul 27, 2011 at 11:55:55PM -0300, Mauro Carvalho Chehab wrote:
> Hi Sylwester,
> 
> Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
> > Hi Mauro,
> > 
> > The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:
> > 
> >   Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)
> > 
> > are available in the git repository at:
> > 
> >   git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro
> > 
> > Sylwester Nawrocki (28):
> >       s5p-fimc: Add support for runtime PM in the mem-to-mem driver
> >       s5p-fimc: Add media entity initialization
> >       s5p-fimc: Remove registration of video nodes from probe()
> >       s5p-fimc: Remove sclk_cam clock handling
> >       s5p-fimc: Limit number of available inputs to one
> >       s5p-fimc: Remove sensor management code from FIMC capture driver
> >       s5p-fimc: Remove v4l2_device from video capture and m2m driver
> >       s5p-fimc: Add the media device driver
> >       s5p-fimc: Conversion to use struct v4l2_fh
> >       s5p-fimc: Conversion to the control framework
> >       s5p-fimc: Add media operations in the capture entity driver
> >       s5p-fimc: Add PM helper function for streaming control
> >       s5p-fimc: Correct color format enumeration
> >       s5p-fimc: Convert to use media pipeline operations
> >       s5p-fimc: Add subdev for the FIMC processing block
> >       s5p-fimc: Add support for camera capture in JPEG format
> >       s5p-fimc: Add v4l2_device notification support for single frame capture
> >       s5p-fimc: Use consistent names for the buffer list functions
> >       s5p-fimc: Add runtime PM support in the camera capture driver
> >       s5p-fimc: Correct crop offset alignment on exynos4
> >       s5p-fimc: Remove single-planar capability flags
> >       noon010pc30: Do not ignore errors in initial controls setup
> >       noon010pc30: Convert to the pad level ops
> >       noon010pc30: Clean up the s_power callback
> >       noon010pc30: Remove g_chip_ident operation handler
> >       s5p-csis: Handle all available power supplies
> >       s5p-csis: Rework of the system suspend/resume helpers
> >       s5p-csis: Enable v4l subdev device node
> 
> From the last time you've submitted a similar set of patches:
> 
> >> Why? The proper way to select an input is via S_INPUT. The driver may also
> >> optionally allow changing it via the media device, but it should not be
> >> a mandatory requirement, as the media device API is optional.
> > 
> > The problem I'm trying to solve here is sharing the sensors and mipi-csi receivers between multiple FIMC H/W instances. Previously the driver supported attaching a sensor to only one selected FIMC at compile time. You could, for instance, specify all sensors as the selected FIMC's platform data and then use S_INPUT to choose between them. The sensor could not be used together with any other FIMC. But this is desired due to different capabilities of the FIMC IP instances. And now, instead of hardcoding a sensor assigment to particular video node, the sensors are bound to the media device. The media device driver takes the list of sensors and attaches them one by one to subsequent FIMC instances when it is initializing. Each sensor has a link to each FIMC but only one of them is active by default. That said an user application can use selected camera by opening corresponding video node. Which camera is at which node can be queried with G_INPUT.
> > 
> > I could try to implement the previous S_INPUT behaviour, but IMHO this would lead to considerable and unnecessary driver code complication due to supporting overlapping APIs
> 
> From this current pull request:
> 
> From c6fb462c38be60a45d16a29a9e56c886ee0aa08c Mon Sep 17 00:00:00 2001
> From: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Date: Fri, 10 Jun 2011 20:36:51 +0200
> Subject: s5p-fimc: Conversion to the control framework
> Cc: Linux Media Mailing List <linux-media@vger.kernel.org>
> 
> Make the driver inherit sensor controls when video node only API
> compatibility mode is enabled. The control framework allows to
> properly handle sensor controls through controls inheritance when
> pipeline is re-configured at media device level.
> 
> ...
> -       .vidioc_queryctrl               = fimc_vidioc_queryctrl,
> -       .vidioc_g_ctrl                  = fimc_vidioc_g_ctrl,
> -       .vidioc_s_ctrl                  = fimc_cap_s_ctrl,
> ...
> 
> I'll need to take some time to review this patchset. So, it will likely
> miss the bus for 3.1.
> 
> While the code inside this patch looked ok, your comments scared me ;)
> 
> In summary: The V4L2 API is not a legacy API that needs a "compatibility
> mode". Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
> favor of the media controller API is wrong. This specific patch itself seems
> ok, but it is easy to loose the big picture on a series of 28 patches
> with about 4000 lines changed.

I remember this was discussed along with the last pull request for these
patches. I don't remember seeing your reply in the thread after much
discussion. Have you had time to read it?

<URL:http://www.spinics.net/lists/linux-media/msg35504.html>

In short, as far as I understand, the reason why this driver doesn't support
S_INPUT is exactly the same as for the OMAP 3 ISP driver. There is not
enough information for the driver to perform what should be the effect of
the user calling S_INPUT.
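
For reference, this is roughly what a V4L2-only application does to pick a
camera (a hypothetical sketch; the device path and input index are examples).
Emulating it for this kind of hardware means translating that single index into
a full link and format configuration of the pipeline:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_input input;
	int index = 0; /* pick the first camera */
	int fd = open("/dev/video0", O_RDWR);

	memset(&input, 0, sizeof(input));
	for (input.index = 0; ioctl(fd, VIDIOC_ENUMINPUT, &input) == 0; input.index++)
		printf("input %u: %s\n", input.index, input.name);

	return ioctl(fd, VIDIOC_S_INPUT, &index) ? 1 : 0;
}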

If S_INPUT support is to be added (which I think is the right thing to do),
it should be implemented in libv4l, not in the kernel. It involves
essentially full pipeline configuration (as do things like S_FMT), which is
typically very board dependent, and doing that inside the driver would be
quite cumbersome if not unfeasible. Someone could also claim this kind of
functionality wouldn't belong in the kernel at all, since most of it is
policy decisions.

Sincerely,

-- 
Sakari Ailus
sakari.ailus@iki.fi


* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-29  4:02         ` Mauro Carvalho Chehab
@ 2011-07-29  8:36           ` Laurent Pinchart
  2011-08-09 20:05             ` Mauro Carvalho Chehab
  2011-08-03 14:28           ` Sylwester Nawrocki
  1 sibling, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-07-29  8:36 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Hi Mauro,

On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
> > On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
> >> Accumulating sub-dev controls at the video node is the right thing to
> >> do.
> >> 
> >> An MC-aware application will need to handle with that, but that doesn't
> >> sound to be hard. All such application would need to do is to first
> >> probe the subdev controls, and, when parsing the videodev controls, not
> >> register controls with duplicated ID's, or to mark them with some
> >> special attribute.
> > 
> > IMHO it's not a big issue in general. Still, both subdev and the host
> > device may support same control id. And then even though the control ids
> > are same on the subdev and the host they could mean physically different
> > controls (since when registering a subdev at the host driver the host's
> > controls take precedence and doubling subdev controls are skipped).
> 
> True, but, except for specific usecases, the host control is enough.

Not for embedded devices. In many cases the control won't even be implemented
by the host. If your system has two sensors connected to the same host, they
will both expose an exposure time control. Which one do you configure through
the video node? The two sensors will likely have different bounds for the
same control; how do you report that?

Those controls are also quite useless for generic V4L2 applications, which 
will want auto-exposure anyway. This needs to be implemented in userspace in 
libv4l.

> > Also there might be some preference at user space, at which stage of the
> > pipeline to apply some controls. This is where the subdev API helps, and
> > plain video node API does not.
> 
> Again, this is for specific usecases. On such cases, what is expected is
> that the more generic control will be exported via V4L2 API.
> 
> >>> Thus it's a bit hard to imagine that we could do something like
> >>> "optionally not to inherit controls" as the subdev/MC API is optional.
> >>> :)
> >> 
> >> This was actually implemented. There are some cases at ivtv/cx18 driver
> >> where both the bridge and a subdev provides the same control (audio
> >> volume, for example). The idea is to allow the bridge driver to touch
> >> at the subdev control without exposing it to userspace, since the
> >> desire was that the bridge driver itself would expose such control,
> >> using a logic that combines changing the subdev and the bridge
> >> registers for volume.
> > 
> > This seem like hard coding a policy in the driver;) Then there is no way
> > (it might not be worth the effort though) to play with volume level at
> > both devices, e.g. to obtain optimal S/N ratio.
> 
> In general, playing with just one control is enough. Andy had a different
> opinion when this issue were discussed, and he thinks that playing with
> both is better. At the end, this is a developers decision, depending on
> how much information (and bug reports) he had.

ivtv/cx18 is a completely different use case, where hardware configurations 
are known, and experiments possible to find out which control(s) to use and 
how. In this case you can't know in advance what the sensor and host drivers 
will be used for. Even if you did, fine image quality tuning requires 
accessing pretty much all controls individually anyway.

> > This is a hack...sorry, just joking ;-) Seriously, I think the
> > situation with the userspace subdevs is a bit different. Because with one
> > API we directly expose some functionality for applications, with other
> > we code it in the kernel, to make the devices appear uniform at user
> > space.
> 
> Not sure if I understood you. V4L2 export drivers functionality to
> userspace in an uniform way. MC api is for special applications that might
> need to access some internal functions on embedded devices.
> 
> Of course, there are some cases where it doesn't make sense to export a
> subdev control via V4L2 API.
> 
> >>> Also, the sensor subdev can be configured in the video node driver as
> >>> well as through the subdev device node. Both APIs can do the same
> >>> thing but in order to let the subdev API work as expected the video
> >>> node driver must be forbidden to configure the subdev.
> >> 
> >> Why? For the sensor, a V4L2 API call will look just like a bridge driver
> >> call. The subdev will need a mutex anyway, as two MC applications may
> >> be opening it simultaneously. I can't see why it should forbid changing
> >> the control from the bridge driver call.
> > 
> > Please do not forget there might be more than one subdev to configure and
> > that the bridge itself is also a subdev (which exposes a scaler
> > interface, for instance). A situation pretty much like in Figure 4.4 [1]
> > (after the scaler there is also a video node to configure, but we may
> > assume that pixel resolution at the scaler pad 1 is same as at the video
> > node). Assuming the format and crop configuration flow is from sensor to
> > host scaler direction, if we have tried to configure _all_ subdevs when
> > the last stage of the pipeline is configured (i.e. video node) the whole
> > scaler and crop/composition configuration we have been destroyed at that
> > time. And there is more to configure than VIDIOC_S_FMT can do.
> 
> Think from users perspective: all user wants is to see a video of a given
> resolution. S_FMT (and a few other VIDIOC_* calls) have everything that
> the user wants: the desired resolution, framerate and format.
> 
> Specialized applications indeed need more, in order to get the best images
> for certain types of usages. So, MC is there.
> 
> Such applications will probably need to know exactly what's the sensor,
> what are their bugs, how it is connected, what are the DSP blocks in the
> patch, how the DSP algorithms are implemented, etc, in order to obtain the
> the perfect image.
> 
> Even on embedded devices like smartphones and tablets, I predict that both
> types of applications will be developed and used: people may use a generic
> application like flash player, and an specialized application provided by
> the manufacturer. Users can even develop their own applications generic
> apps using V4L2 directly, at the devices that allow that.
> 
> As I said before: both application types are welcome. We just need to
> warrant that a pure V4L application will work reasonably well.

That's why we have libv4l. The driver simply doesn't receive enough 
information to configure the hardware correctly from the VIDIOC_* calls. And 
as mentioned above, 3A algorithms, required by "simple" V4L2 applications, 
need to be implemented in userspace anyway.

> > Allowing the bridge driver to configure subdevs at all times would
> > prevent the subdev/MC API to work.
> 
> Well, then we need to think on an alternative for that. It seems an
> interesting theme for the media workshop at the Kernel Summit/2011.
>
> >>> There is a conflict there that in order to use
> >>> 'optional' API the 'main' API behaviour must be affected....
> >> 
> >> It is optional from userspace perspective. A V4L2-only application
> >> should be able to work with all drivers. However, a MC-aware
> >> application will likely be specific for some hardware, as it will need
> >> to know some device-specific stuff.
> >> 
> >> Both kinds of applications are welcome, but dropping support for
> >> V4L2-only applications is the wrong thing to do.
> >> 
> >>> And I really cant use V4L2 API only as is because it's too limited.
> >> 
> >> Why?
> > 
> > For instance there is really yet no support for scaler and composition
> > onto a target buffer in the Video Capture Interface (we also use sensors
> > with built in scalers). It's difficult to efficiently manage
> > capture/preview pipelines. It is impossible to discover the system
> > topology.
> 
> Scaler were always supported by V4L2: if the resolution specified by S_FMT
> is not what the sensor provides, then scale. All non-embedded drivers with
> sensor or bridge supports scale does that.
> 
> Composition is not properly supported yet. It could make sense to add it to
> V4L. How do you think MC API would help with composite?
> 
> Managing capture/preview pipelines will require some support at V4L2 level.
> This is a problem that needs to be addressed there anyway, as buffers for
> preview/capture need to be allocated. There's an RFC about that, but I
> don't think it covers the pipelines for it.

Managing pipelines is a policy decision, and needs to be implemented in 
userspace. Once again, the solution here is libv4l.

> Discovering the system topology indeed is not part of V4L2 API and will
> never be. This is MC API business. There's no overlap with V4L2.
> 
> >>> Might be that's why I see more and more often migration to OpenMAX
> >>> recently.
> >> 
> >> I don't think so. People may be adopting OpenMAX just because of some
> >> marketing strategy from the OpenMAX forum. We don't spend money to
> >> announce V4L2 ;)
> >> 
> > :)
> > :
> >> I think that writing a pure OpenMAX driver is the wrong thing to do, as,
> >> at the long term, it will cost _a_lot_ for the vendors to maintain
> >> something that will never be merged upstream.
> > 
> > In general it depends on priorities. For some chip vendors it might be
> > more important to provide a solution in short time frame. Getting
> > something in mainline isn't effortless and spending lot's of time on
> > this for some parties is unacceptable.
> 
> Adding something at a forum like OpenMAX is probably not an easy task ;) It
> generally takes a long time to change something on open forum
> specifications. Also, participating on those forums require lots of money,
> with membership and travel expenses.

Correct me if I'm wrong (I'm not an OpenMAX specialist), but I think it 
doesn't. OpenMAX isn't really open, and vendors implement proprietary 
extensions as needed. My understanding is that it's easier for them to use 
OpenMAX for that, because they can more or less claim OpenMAX support even 
with the proprietary extensions, something they couldn't do with V4L2.

If we want to compete there (and I certainly do), we need to provide vendors 
with clear, easy and well documented APIs that fulfill their needs out of the 
box. That's of course an evermoving target.

> Adding a new driver that follows the API's is not a long-term effort. The
> delay is basically one kernel cycle, e. g. about 3-4 months.
> 
> Most of the delays on merging drivers for embedded systems, however, take a
> way longer than that, unfortunately. From what I see, what is delaying
> driver submission is due to:
> 	- the lack of how to work with the Linux community. Some developers take a
> long time to get the 'modus operandi';
> 	- the need of API changes. It is still generally faster to add a new API
> addition at the Kernel than on an open forum;
> 	- the discussions inside the various teams (generally from the same
> company, or the company and their customers) about the better way to
> implement some feature.
> 
> All the above also happens when developing an OpenMAX driver: companies
> need to learn how to work with the OpenMax forums, API changes may be
> required, so they need to participate at the forums, the several internal
> teams and customers will be discussing the requirements.
> 
> I bet that there's also one additional step: to submit the driver to some
> company that will check the driver compliance with the official API. Such
> certification process is generally expensive and takes a long time.

I could be wrong, but I very much doubt this ever happens.

> At the end of the day, they'll also spend a lot of time to have the driver
> done, or they'll end by releasing a "not-quite-openmax" driver, and then
> needing to rework on that, due to customers complains, or to
> certification-compliance, loosing time and money.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-29  4:02         ` Mauro Carvalho Chehab
  2011-07-29  8:36           ` Laurent Pinchart
@ 2011-08-03 14:28           ` Sylwester Nawrocki
  1 sibling, 0 replies; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-08-03 14:28 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Sylwester Nawrocki, linux-media

Hi Mauro,

On 07/29/2011 06:02 AM, Mauro Carvalho Chehab wrote:
> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
> 
>>> Accumulating sub-dev controls at the video node is the right thing to do.
>>>
>>> An MC-aware application will need to handle with that, but that doesn't sound to
>>> be hard. All such application would need to do is to first probe the subdev controls,
>>> and, when parsing the videodev controls, not register controls with duplicated ID's,
>>> or to mark them with some special attribute.
>>
>> IMHO it's not a big issue in general. Still, both subdev and the host device may
>> support same control id. And then even though the control ids are same on the subdev
>> and the host they could mean physically different controls (since when registering
>> a subdev at the host driver the host's controls take precedence and doubling subdev
>> controls are skipped).
> 
> True, but, except for specific usecases, the host control is enough.

For some, those "specific use cases" could be the most important (or the
only) use cases. :-)

> 
>> Also there might be some preference at user space, at which stage of the pipeline
>> to apply some controls. This is where the subdev API helps, and plain video node
>> API does not.
> 
> Again, this is for specific usecases. On such cases, what is expected is that the more
> generic control will be exported via V4L2 API.

IMHO it's not about use cases, but about which hardware can be covered by the
API and which cannot. As signal processing systems become distributed and
more interactively reconfigurable, it gets impossible to reasonably control
them through a centralized interface.

> 
>>>
>>>> Thus it's a bit hard to imagine that we could do something like "optionally
>>>> not to inherit controls" as the subdev/MC API is optional. :)
>>>
>>> This was actually implemented. There are some cases at ivtv/cx18 driver where both
>>> the bridge and a subdev provides the same control (audio volume, for example). The
>>> idea is to allow the bridge driver to touch at the subdev control without exposing
>>> it to userspace, since the desire was that the bridge driver itself would expose
>>> such control, using a logic that combines changing the subdev and the bridge registers
>>> for volume.
>>
>> This seem like hard coding a policy in the driver;) Then there is no way (it might not
>> be worth the effort though) to play with volume level at both devices, e.g. to obtain
>> optimal S/N ratio.
> 
> In general, playing with just one control is enough. Andy had a different opinion
> when this issue were discussed, and he thinks that playing with both is better.
> At the end, this is a developers decision, depending on how much information
> (and bug reports) he had.
> 
>> This is a hack...sorry, just joking ;-) Seriously, I think the
>> situation with the userspace subdevs is a bit different. Because with one API we
>> directly expose some functionality for applications, with other we code it in the
>> kernel, to make the devices appear uniform at user space.
> 
> Not sure if I understood you. V4L2 export drivers functionality to userspace in an
> uniform way. MC api is for special applications that might need to access some
> internal functions on embedded devices.

Yes, that's what I meant.

> 
> Of course, there are some cases where it doesn't make sense to export a subdev control
> via V4L2 API.
> 
>>>> Also, the sensor subdev can be configured in the video node driver as well as
>>>> through the subdev device node. Both APIs can do the same thing but in order
>>>> to let the subdev API work as expected the video node driver must be forbidden
>>>> to configure the subdev.
>>>
>>> Why? For the sensor, a V4L2 API call will look just like a bridge driver call.
>>> The subdev will need a mutex anyway, as two MC applications may be opening it
>>> simultaneously. I can't see why it should forbid changing the control from the
>>> bridge driver call.
>>
>> Please do not forget there might be more than one subdev to configure and that
>> the bridge itself is also a subdev (which exposes a scaler interface, for instance).
>> A situation pretty much like in Figure 4.4 [1] (after the scaler there is also
>> a video node to configure, but we may assume that pixel resolution at the scaler
>> pad 1 is same as at the video node). Assuming the format and crop configuration
>> flow is from sensor to host scaler direction, if we have tried to configure _all_
>> subdevs when the last stage of the pipeline is configured (i.e. video node)
>> the whole scaler and crop/composition configuration we have been destroyed at
>> that time. And there is more to configure than VIDIOC_S_FMT can do.
> 
> Think from users perspective: all user wants is to see a video of a given resolution.
> S_FMT (and a few other VIDIOC_* calls) have everything that the user wants: the
> desired resolution, framerate and format.
> 
> Specialized applications indeed need more, in order to get the best images for
> certain types of usages. So, MC is there.
> 
> Such applications will probably need to know exactly what's the sensor, what are
> their bugs, how it is connected, what are the DSP blocks in the patch, how the
> DSP algorithms are implemented, etc, in order to obtain the the perfect image.

I suspect we might need specialized, H/W-aware libraries rather than 
specialized applications.

It's also about efficient hardware utilisation. It's often important to get
the most out of the device, i.e. to be able to precisely configure all the
bits and pieces depending on the use case. This could be done by having as
thin a kernel driver as possible and a H/W-dedicated library.
 
> Even on embedded devices like smartphones and tablets, I predict that both
> types of applications will be developed and used: people may use a generic
> application like flash player, and an specialized application provided by
> the manufacturer. Users can even develop their own applications generic
> apps using V4L2 directly, at the devices that allow that.
> 
> As I said before: both application types are welcome. We just need to warrant
> that a pure V4L application will work reasonably well.

As Laurent pointed out, more complex devices could be supported through libv4l2.

> 
>> Allowing the bridge driver to configure subdevs at all times would prevent
>> the subdev/MC API to work.
> 
> Well, then we need to think on an alternative for that. It seems an interesting
> theme for the media workshop at the Kernel Summit/2011.

Sure. But isn't it as simple as letting the driver implement the MC API only
and abstracting the hardware details in a dedicated library/libv4l2 plugin?
I just don't see a point in moving the adaptation layer into the kernel.

> 
>>>> There is a conflict there that in order to use
>>>> 'optional' API the 'main' API behaviour must be affected....
>>>
>>> It is optional from userspace perspective. A V4L2-only application should be able
>>> to work with all drivers. However, a MC-aware application will likely be specific
>>> for some hardware, as it will need to know some device-specific stuff.
>>>
>>> Both kinds of applications are welcome, but dropping support for V4L2-only applications
>>> is the wrong thing to do.
>>>
>>>> And I really cant use V4L2 API only as is because it's too limited.
>>>
>>> Why?
>>
>> For instance there is really yet no support for scaler and composition onto
>> a target buffer in the Video Capture Interface (we also use sensors with
>> built in scalers). It's difficult to efficiently manage capture/preview pipelines.
>> It is impossible to discover the system topology.
> 
> Scaler were always supported by V4L2: if the resolution specified by S_FMT is not
> what the sensor provides, then scale. All non-embedded drivers with sensor or bridge
> supports scale does that.

This is not really the level of support I have been thinking about. I meant
user space knowing what exactly is scaled and where.

> 
> Composition is not properly supported yet. It could make sense to add it to V4L. How do you
> think MC API would help with composite?

Yes, the new S_SELECTION ioctl replacing S_CROP is meant to add composition
support. With MC you can support composition through cropping on a source
pad; however, the ioctl naming might yet need to be revisited.
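
To give an idea, composing the captured image into a sub-rectangle of the
target buffer would look roughly as below. The target and ioctl names follow
the current proposal and may still change:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int compose_into(int video_fd, int left, int top,
                        unsigned int width, unsigned int height)
{
        struct v4l2_selection sel;

        memset(&sel, 0, sizeof(sel));
        sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        sel.target = V4L2_SEL_TGT_COMPOSE;
        sel.r.left = left;
        sel.r.top = top;
        sel.r.width = width;
        sel.r.height = height;

        return ioctl(video_fd, VIDIOC_S_SELECTION, &sel);
}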

> 
> Managing capture/preview pipelines will require some support at V4L2 level. This is
> a problem that needs to be addressed there anyway, as buffers for preview/capture
> need to be allocated. There's an RFC about that, but I don't think it covers the
> pipelines for it.

It requires fairly complex hardware configuration in hardware-assisted cases,
so it's better done at the MC API level. Nevertheless there is much that can
be improved at the V4L2 level too.

> 
> Discovering the system topology indeed is not part of V4L2 API and will never be.
> This is MC API business. There's no overlap with V4L2.

True, however I was referring to the V4L2 API (video node) _only_.

> 
>>>> Might be that's why I see more and more often migration to OpenMAX recently.
>>>
>>> I don't think so. People may be adopting OpenMAX just because of some marketing strategy
>>> from the OpenMAX forum. We don't spend money to announce V4L2 ;)
>>
>> :)
>>
>>>
>>> I think that writing a pure OpenMAX driver is the wrong thing to do, as, at the long
>>> term, it will cost _a_lot_ for the vendors to maintain something that will never be
>>> merged upstream.
>>
>> In general it depends on priorities. For some chip vendors it might be more important
>> to provide a solution in short time frame. Getting something in mainline isn't effortless
>> and spending lot's of time on this for some parties is unacceptable.
> 
> Adding something at a forum like OpenMAX is probably not an easy task ;) It generally
> takes a long time to change something on open forum specifications. Also, participating
> on those forums require lots of money, with membership and travel expenses.
> 
> Adding a new driver that follows the API's is not a long-term effort. The delay is
> basically one kernel cycle, e. g. about 3-4 months.
> 
> Most of the delays on merging drivers for embedded systems, however, take a
> way longer than that, unfortunately. From what I see, what is delaying driver
> submission is due to:
> 	- the lack of how to work with the Linux community. Some developers take a
> long time to get the 'modus operandi';
> 	- the need of API changes. It is still generally faster to add a new API
> addition at the Kernel than on an open forum;
> 	- the discussions inside the various teams (generally from the same company,
> or the company and their customers) about the better way to implement some feature.
> 
> All the above also happens when developing an OpenMAX driver: companies need to
> learn how to work with the OpenMax forums, API changes may be required, so they
> need to participate at the forums, the several internal teams and customers
> will be discussing the requirements.
> 
> I bet that there's also one additional step: to submit the driver to some company
> that will check the driver compliance with the official API. Such certification
> process is generally expensive and takes a long time.
> 
> At the end of the day, they'll also spend a lot of time to have the driver done,
> or they'll end by releasing a "not-quite-openmax" driver, and then needing to
> rework on that, due to customers complains, or to certification-compliance,
> loosing time and money.

From what I can see many of the big players in the graphics devices world are
already _contributor_ members of the Khronos Group. And the membership costs
are probably a piece of cake for such big companies.

OMX is different from V4L2 in that there is no need to discuss every trivial
detail of the API that is specific to particular H/W. Adopters just add
proprietary extensions when needed; not every detail is conservatively fixed
in the API. This can't really be done in video4linux. But we also shouldn't
be forcing users to use an API that limits H/W capabilities, _if_ we are
interested in supporting modern complex devices in V4L2.

--
Regards,
Sylwester

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-07-29  8:36           ` Laurent Pinchart
@ 2011-08-09 20:05             ` Mauro Carvalho Chehab
  2011-08-09 23:18               ` Sakari Ailus
  2011-08-15 12:30               ` Laurent Pinchart
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-09 20:05 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

On 29-07-2011 05:36, Laurent Pinchart wrote:
> Hi Mauro,
> 
> On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
>> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
>>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
>>>> Accumulating sub-dev controls at the video node is the right thing to
>>>> do.
>>>>
>>>> An MC-aware application will need to handle with that, but that doesn't
>>>> sound to be hard. All such application would need to do is to first
>>>> probe the subdev controls, and, when parsing the videodev controls, not
>>>> register controls with duplicated ID's, or to mark them with some
>>>> special attribute.
>>>
>>> IMHO it's not a big issue in general. Still, both subdev and the host
>>> device may support same control id. And then even though the control ids
>>> are same on the subdev and the host they could mean physically different
>>> controls (since when registering a subdev at the host driver the host's
>>> controls take precedence and doubling subdev controls are skipped).
>>
>> True, but, except for specific usecases, the host control is enough.
> 
> Not for embedded devices. In many case the control won't even be implemented 
> by the host. If your system has two sensors connected to the same host, they 
> will both expose an exposure time control. Which one do you configure through 
> the video node ? The two sensors will likely have different bounds for the 
> same control, how do you report that ?

If the device has two sensors that are mutually exclusive, they should be
mapped as two different inputs. The control shown should be the one used by
the currently active input.

If the sensors aren't mutually exclusive, then two different video nodes will
be shown in userspace.

> Those controls are also quite useless for generic V4L2 applications, which 
> will want auto-exposure anyway. This needs to be implemented in userspace in 
> libv4l.

Several webcams export exposure controls. Why shouldn't those controls be exposed
to userspace anymore?

Ok, if the hardware won't support 3A algorithm, libv4l will implement it, 
eventually using an extra hardware-aware code to get the best performance 
for that specific device, but this doesn't mean that the user should always 
use it.

Btw, the 3A algorithm is one of the things I don't like on my cell phone: while
it works most of the time, sometimes I want to disable it and manually adjust,
as it produces dark images, when there's a very bright light somewhere on the
image background. Manually adjusting the exposure time and aperture is
something relevant for some users.

>>> Also there might be some preference at user space, at which stage of the
>>> pipeline to apply some controls. This is where the subdev API helps, and
>>> plain video node API does not.
>>
>> Again, this is for specific usecases. On such cases, what is expected is
>> that the more generic control will be exported via V4L2 API.
>>
>>>>> Thus it's a bit hard to imagine that we could do something like
>>>>> "optionally not to inherit controls" as the subdev/MC API is optional.
>>>>> :)
>>>>
>>>> This was actually implemented. There are some cases at ivtv/cx18 driver
>>>> where both the bridge and a subdev provides the same control (audio
>>>> volume, for example). The idea is to allow the bridge driver to touch
>>>> at the subdev control without exposing it to userspace, since the
>>>> desire was that the bridge driver itself would expose such control,
>>>> using a logic that combines changing the subdev and the bridge
>>>> registers for volume.
>>>
>>> This seem like hard coding a policy in the driver;) Then there is no way
>>> (it might not be worth the effort though) to play with volume level at
>>> both devices, e.g. to obtain optimal S/N ratio.
>>
>> In general, playing with just one control is enough. Andy had a different
>> opinion when this issue were discussed, and he thinks that playing with
>> both is better. At the end, this is a developers decision, depending on
>> how much information (and bug reports) he had.
> 
> ivtv/cx18 is a completely different use case, where hardware configurations 
> are known, and experiments possible to find out which control(s) to use and 
> how. In this case you can't know in advance what the sensor and host drivers 
> will be used for.

Why not? I never saw embedded hardware that allows physically changing the
sensor.

> Even if you did, fine image quality tuning requires 
> accessing pretty much all controls individually anyway.

The same is also true for non-embedded hardware. The only situation where V4L2
API is not enough is when there are two controls of the same type active. For
example, 2 active volume controls, one at the audio demod, and another at the
bridge. There may be some cases where you can do the same thing at the sensor
or at a DSP block. This is where the MC API gives an improvement, by allowing
changing both, instead of just one of the controls.

>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>> situation with the userspace subdevs is a bit different. Because with one
>>> API we directly expose some functionality for applications, with other
>>> we code it in the kernel, to make the devices appear uniform at user
>>> space.
>>
>> Not sure if I understood you. V4L2 export drivers functionality to
>> userspace in an uniform way. MC api is for special applications that might
>> need to access some internal functions on embedded devices.
>>
>> Of course, there are some cases where it doesn't make sense to export a
>> subdev control via V4L2 API.
>>
>>>>> Also, the sensor subdev can be configured in the video node driver as
>>>>> well as through the subdev device node. Both APIs can do the same
>>>>> thing but in order to let the subdev API work as expected the video
>>>>> node driver must be forbidden to configure the subdev.
>>>>
>>>> Why? For the sensor, a V4L2 API call will look just like a bridge driver
>>>> call. The subdev will need a mutex anyway, as two MC applications may
>>>> be opening it simultaneously. I can't see why it should forbid changing
>>>> the control from the bridge driver call.
>>>
>>> Please do not forget there might be more than one subdev to configure and
>>> that the bridge itself is also a subdev (which exposes a scaler
>>> interface, for instance). A situation pretty much like in Figure 4.4 [1]
>>> (after the scaler there is also a video node to configure, but we may
>>> assume that pixel resolution at the scaler pad 1 is same as at the video
>>> node). Assuming the format and crop configuration flow is from sensor to
>>> host scaler direction, if we have tried to configure _all_ subdevs when
>>> the last stage of the pipeline is configured (i.e. video node) the whole
>>> scaler and crop/composition configuration we have been destroyed at that
>>> time. And there is more to configure than VIDIOC_S_FMT can do.
>>
>> Think from users perspective: all user wants is to see a video of a given
>> resolution. S_FMT (and a few other VIDIOC_* calls) have everything that
>> the user wants: the desired resolution, framerate and format.
>>
>> Specialized applications indeed need more, in order to get the best images
>> for certain types of usages. So, MC is there.
>>
>> Such applications will probably need to know exactly what's the sensor,
>> what are their bugs, how it is connected, what are the DSP blocks in the
>> patch, how the DSP algorithms are implemented, etc, in order to obtain the
>> the perfect image.
>>
>> Even on embedded devices like smartphones and tablets, I predict that both
>> types of applications will be developed and used: people may use a generic
>> application like flash player, and an specialized application provided by
>> the manufacturer. Users can even develop their own applications generic
>> apps using V4L2 directly, at the devices that allow that.
>>
>> As I said before: both application types are welcome. We just need to
>> warrant that a pure V4L application will work reasonably well.
> 
> That's why we have libv4l. The driver simply doesn't receive enough 
> information to configure the hardware correctly from the VIDIOC_* calls. And 
> as mentioned above, 3A algorithms, required by "simple" V4L2 applications, 
> need to be implemented in userspace anyway.

It is OK to improve the user experience via libv4l. What I'm saying is that it
is NOT OK to remove V4L2 API support from the driver, forcing users to use
some hardware plugin in libv4l.

>>> Allowing the bridge driver to configure subdevs at all times would
>>> prevent the subdev/MC API to work.
>>
>> Well, then we need to think on an alternative for that. It seems an
>> interesting theme for the media workshop at the Kernel Summit/2011.
>>
>>>>> There is a conflict there that in order to use
>>>>> 'optional' API the 'main' API behaviour must be affected....
>>>>
>>>> It is optional from userspace perspective. A V4L2-only application
>>>> should be able to work with all drivers. However, a MC-aware
>>>> application will likely be specific for some hardware, as it will need
>>>> to know some device-specific stuff.
>>>>
>>>> Both kinds of applications are welcome, but dropping support for
>>>> V4L2-only applications is the wrong thing to do.
>>>>
>>>>> And I really cant use V4L2 API only as is because it's too limited.
>>>>
>>>> Why?
>>>
>>> For instance there is really yet no support for scaler and composition
>>> onto a target buffer in the Video Capture Interface (we also use sensors
>>> with built in scalers). It's difficult to efficiently manage
>>> capture/preview pipelines. It is impossible to discover the system
>>> topology.
>>
>> Scaler were always supported by V4L2: if the resolution specified by S_FMT
>> is not what the sensor provides, then scale. All non-embedded drivers with
>> sensor or bridge supports scale does that.
>>
>> Composition is not properly supported yet. It could make sense to add it to
>> V4L. How do you think MC API would help with composite?
>>
>> Managing capture/preview pipelines will require some support at V4L2 level.
>> This is a problem that needs to be addressed there anyway, as buffers for
>> preview/capture need to be allocated. There's an RFC about that, but I
>> don't think it covers the pipelines for it.
> 
> Managing pipelines is a policy decision, and needs to be implemented in 
> userspace. Once again, the solution here is libv4l.

If the V4L2 API is not enough, implementing it in libv4l won't solve the
problem, as userspace apps will use the V4L2 API for requesting it.

>> Discovering the system topology indeed is not part of V4L2 API and will
>> never be. This is MC API business. There's no overlap with V4L2.
>>
>>>>> Might be that's why I see more and more often migration to OpenMAX
>>>>> recently.
>>>>
>>>> I don't think so. People may be adopting OpenMAX just because of some
>>>> marketing strategy from the OpenMAX forum. We don't spend money to
>>>> announce V4L2 ;)
>>>>
>>> :)
>>> :
>>>> I think that writing a pure OpenMAX driver is the wrong thing to do, as,
>>>> at the long term, it will cost _a_lot_ for the vendors to maintain
>>>> something that will never be merged upstream.
>>>
>>> In general it depends on priorities. For some chip vendors it might be
>>> more important to provide a solution in short time frame. Getting
>>> something in mainline isn't effortless and spending lot's of time on
>>> this for some parties is unacceptable.
>>
>> Adding something at a forum like OpenMAX is probably not an easy task ;) It
>> generally takes a long time to change something on open forum
>> specifications. Also, participating on those forums require lots of money,
>> with membership and travel expenses.
> 
> Correct me if I'm wrong (I'm not an OpenMAX specialist), but I think it 
> doesn't. OpenMAX isn't really open, and vendors implement proprietary 
> extensions as needed. My understanding is that tt's easier for them to use 
> OpenMAX for that, because they can more or less claim OpenMAX support even 
> with the proprietary extensions, something they couldn't do with V4L2.
> 
> If we want to compete there (and I certainly do), we need to provide vendors 
> with clear, easy and well documented APIs that fulfill their needs out of the 
> box. That's of course an evermoving target.

Agreed.
> 
>> Adding a new driver that follows the API's is not a long-term effort. The
>> delay is basically one kernel cycle, e. g. about 3-4 months.
>>
>> Most of the delays on merging drivers for embedded systems, however, take a
>> way longer than that, unfortunately. From what I see, what is delaying
>> driver submission is due to:
>> 	- the lack of how to work with the Linux community. Some developers take a
>> long time to get the 'modus operandi';
>> 	- the need of API changes. It is still generally faster to add a new API
>> addition at the Kernel than on an open forum;
>> 	- the discussions inside the various teams (generally from the same
>> company, or the company and their customers) about the better way to
>> implement some feature.
>>
>> All the above also happens when developing an OpenMAX driver: companies
>> need to learn how to work with the OpenMax forums, API changes may be
>> required, so they need to participate at the forums, the several internal
>> teams and customers will be discussing the requirements.
>>
>> I bet that there's also one additional step: to submit the driver to some
>> company that will check the driver compliance with the official API. Such
>> certification process is generally expensive and takes a long time.
> 
> I could be wrong, but I very much doubt this ever happens.
> 
>> At the end of the day, they'll also spend a lot of time to have the driver
>> done, or they'll end by releasing a "not-quite-openmax" driver, and then
>> needing to rework on that, due to customers complains, or to
>> certification-compliance, loosing time and money.
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-09 20:05             ` Mauro Carvalho Chehab
@ 2011-08-09 23:18               ` Sakari Ailus
  2011-08-10  0:22                 ` Mauro Carvalho Chehab
  2011-08-15 12:30               ` Laurent Pinchart
  1 sibling, 1 reply; 44+ messages in thread
From: Sakari Ailus @ 2011-08-09 23:18 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, hverkuil

Hi Mauro,

On Tue, Aug 09, 2011 at 05:05:47PM -0300, Mauro Carvalho Chehab wrote:
> Em 29-07-2011 05:36, Laurent Pinchart escreveu:
> > Hi Mauro,
> > 
> > On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
> >> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
> >>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
> >>>> Accumulating sub-dev controls at the video node is the right thing to
> >>>> do.
> >>>>
> >>>> An MC-aware application will need to handle with that, but that doesn't
> >>>> sound to be hard. All such application would need to do is to first
> >>>> probe the subdev controls, and, when parsing the videodev controls, not
> >>>> register controls with duplicated ID's, or to mark them with some
> >>>> special attribute.
> >>>
> >>> IMHO it's not a big issue in general. Still, both subdev and the host
> >>> device may support same control id. And then even though the control ids
> >>> are same on the subdev and the host they could mean physically different
> >>> controls (since when registering a subdev at the host driver the host's
> >>> controls take precedence and doubling subdev controls are skipped).
> >>
> >> True, but, except for specific usecases, the host control is enough.
> > 
> > Not for embedded devices. In many case the control won't even be implemented 
> > by the host. If your system has two sensors connected to the same host, they 
> > will both expose an exposure time control. Which one do you configure through 
> > the video node ? The two sensors will likely have different bounds for the 
> > same control, how do you report that ?
> 
> If the device has two sensors that are mutually exclusive, they should be
> mapped as two different inputs. The showed control should be the one used
> by the currently active input.

Video nodes should represent a DMA engine rather than an image source. You
could use the same hardware to process data from memory to memory while
streaming video from a sensor to memory at the same time. This is quite a
typical use case in embedded systems.

Usually on boards where two sensors are connected to an ISP the dependencies
are not as straightforward as the sensors being fully independent or
requiring exclusive access. Typically some of the hardware blocks are shared
between the two and can be used by only one sensor at a time, so you may not
get full functionality from both at the same time. And you need to be able
to choose which one uses that hardware block. This is exactly what the Media
controller interface models perfectly.

See the FIMC Media controller graph AND Sylwester's explanation of it; a few
links are actually missing from the graph.

<URL:http://www.spinics.net/lists/linux-media/msg35504.html>
<URL:http://wstaw.org/m/2011/05/26/fimc_graph__.png>

Cc Hans.

> If the sensors aren't mutually exclusive, then two different video nodes will
> be shown in userspace.
> 
> > Those controls are also quite useless for generic V4L2 applications, which 
> > will want auto-exposure anyway. This needs to be implemented in userspace in 
> > libv4l.
> 
> Several webcams export exposure controls. Why shouldn't those controls be exposed
> to userspace anymore?

This is not a webcam, it is a software-controlled high-end digital camera on
a mobile device. The difference is that the functionality offered by the
hardware is at a much lower level compared to a webcam; the result is that
more detailed control is required, but also much flexibility and performance
are gained.

> Ok, if the hardware won't support 3A algorithm, libv4l will implement it, 
> eventually using an extra hardware-aware code to get the best performance 
> for that specific device, but this doesn't mean that the user should always 
> use it.

Why not? What would be the alternative?

> Btw, the 3A algorithm is one of the things I don't like on my cell phone: while
> it works most of the time, sometimes I want to disable it and manually adjust,
> as it produces dark images, when there's a very bright light somewhere on the
> image background. Manually adjusting the exposure time and aperture is
> something relevant for some users.

You do want the 3A algorithms even if you use manual white balance. What the
automatic white balance algorithm produces is (among other things) gamma
tables, rgb-to-rgb conversion matrices and lens shading correction tables. I
doubt any end user, even if it was you, would like to fiddle with such large
tables directly when capturing photos. The size of this configuration could
easily be around 10 kB. A higher level control is required; colour
temperature, for instance. And its implementation involves the same
automatic white balance algorithm.
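
Just to illustrate the order of magnitude, here is an invented parameter
block with made-up table sizes (real ISPs differ, but not by much):

#include <stdint.h>
#include <stdio.h>

struct awb_output {
        uint16_t gamma[3][512];         /* per-channel gamma tables */
        int16_t  rgb2rgb[3][3];         /* colour conversion matrix */
        uint16_t lsc[4][16][12];        /* lens shading grid, 4 Bayer channels */
};

int main(void)
{
        /* prints 4626 bytes with these sizes; real tables are often larger */
        printf("%zu bytes\n", sizeof(struct awb_output));
        return 0;
}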

You must know your hardware very, very well to use the aforementioned low
level controls, and in such a case you have no reason not to use the MC
interface to configure the device either. Configuring the device image pipe
using MC is actually orders of magnitude less complex, and I say it's been
quite an achievement that we have an interface which makes it so effortless
to do.

> >>> Also there might be some preference at user space, at which stage of the
> >>> pipeline to apply some controls. This is where the subdev API helps, and
> >>> plain video node API does not.
> >>
> >> Again, this is for specific usecases. On such cases, what is expected is
> >> that the more generic control will be exported via V4L2 API.
> >>
> >>>>> Thus it's a bit hard to imagine that we could do something like
> >>>>> "optionally not to inherit controls" as the subdev/MC API is optional.
> >>>>> :)
> >>>>
> >>>> This was actually implemented. There are some cases at ivtv/cx18 driver
> >>>> where both the bridge and a subdev provides the same control (audio
> >>>> volume, for example). The idea is to allow the bridge driver to touch
> >>>> at the subdev control without exposing it to userspace, since the
> >>>> desire was that the bridge driver itself would expose such control,
> >>>> using a logic that combines changing the subdev and the bridge
> >>>> registers for volume.
> >>>
> >>> This seem like hard coding a policy in the driver;) Then there is no way
> >>> (it might not be worth the effort though) to play with volume level at
> >>> both devices, e.g. to obtain optimal S/N ratio.
> >>
> >> In general, playing with just one control is enough. Andy had a different
> >> opinion when this issue were discussed, and he thinks that playing with
> >> both is better. At the end, this is a developers decision, depending on
> >> how much information (and bug reports) he had.
> > 
> > ivtv/cx18 is a completely different use case, where hardware configurations 
> > are known, and experiments possible to find out which control(s) to use and 
> > how. In this case you can't know in advance what the sensor and host drivers 
> > will be used for.
> 
> Why not? I never saw an embedded hardware that allows physically changing the
> sensor.
> 
> > Even if you did, fine image quality tuning requires 
> > accessing pretty much all controls individually anyway.
> 
> The same is also true for non-embedded hardware. The only situation where
> V4L2 API is not enough is when there are two controls of the same type
> active. For example, 2 active volume controls, one at the audio demod, and
> another at the bridge. There may have some cases where you can do the same
> thing at the sensor or at a DSP block. This is where MC API gives an
> improvement, by allowing changing both, instead of just one of the
> controls.

This may be true on non-embedded hardware. It's important to know which
hardware component implements a particular control; for example, digital gain
would typically take longer to have an effect if it is set on the sensor
rather than on the ISP. Also, if you set digital gain, you want to set it on
the same device where your analog gain is, i.e. the sensor, to avoid badly
exposed images.

When it comes to scaling, the scaling quality, power consumption and
performance may well be very different depending on where it is done. There
typically are data rate limitations at different parts of the pipeline. Plain
V4L2 has never been meant for this, nor does it provide any support for doing
something like the above, and these are just a few examples.
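
As a sketch of what choosing the scaling location looks like with the subdev
pad-level API: setting a larger format on the sink pad than on the source pad
of the scaler makes that particular block do the downscaling. The device
name, pad numbers and media bus code below are made up:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/v4l2-subdev.h>

static int set_pad_format(int fd, unsigned int pad,
                          unsigned int width, unsigned int height)
{
        struct v4l2_subdev_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
        fmt.pad = pad;
        fmt.format.width = width;
        fmt.format.height = height;
        fmt.format.code = V4L2_MBUS_FMT_YUYV8_2X8;

        return ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt);
}

int main(void)
{
        int fd = open("/dev/v4l-subdev2", O_RDWR);      /* scaler subdev */

        if (fd < 0)
                return 1;
        set_pad_format(fd, 0, 2592, 1944);      /* sink: full sensor frame */
        set_pad_format(fd, 1, 1280, 720);       /* source: scaled output */
        close(fd);
        return 0;
}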

> >>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>> situation with the userspace subdevs is a bit different. Because with one
> >>> API we directly expose some functionality for applications, with other
> >>> we code it in the kernel, to make the devices appear uniform at user
> >>> space.
> >>
> >> Not sure if I understood you. V4L2 export drivers functionality to
> >> userspace in an uniform way. MC api is for special applications that might
> >> need to access some internal functions on embedded devices.
> >>
> >> Of course, there are some cases where it doesn't make sense to export a
> >> subdev control via V4L2 API.
> >>
> >>>>> Also, the sensor subdev can be configured in the video node driver as
> >>>>> well as through the subdev device node. Both APIs can do the same
> >>>>> thing but in order to let the subdev API work as expected the video
> >>>>> node driver must be forbidden to configure the subdev.
> >>>>
> >>>> Why? For the sensor, a V4L2 API call will look just like a bridge driver
> >>>> call. The subdev will need a mutex anyway, as two MC applications may
> >>>> be opening it simultaneously. I can't see why it should forbid changing
> >>>> the control from the bridge driver call.
> >>>
> >>> Please do not forget there might be more than one subdev to configure and
> >>> that the bridge itself is also a subdev (which exposes a scaler
> >>> interface, for instance). A situation pretty much like in Figure 4.4 [1]
> >>> (after the scaler there is also a video node to configure, but we may
> >>> assume that pixel resolution at the scaler pad 1 is same as at the video
> >>> node). Assuming the format and crop configuration flow is from sensor to
> >>> host scaler direction, if we have tried to configure _all_ subdevs when
> >>> the last stage of the pipeline is configured (i.e. video node) the whole
> >>> scaler and crop/composition configuration we have been destroyed at that
> >>> time. And there is more to configure than VIDIOC_S_FMT can do.
> >>
> >> Think from users perspective: all user wants is to see a video of a given
> >> resolution. S_FMT (and a few other VIDIOC_* calls) have everything that
> >> the user wants: the desired resolution, framerate and format.
> >>
> >> Specialized applications indeed need more, in order to get the best images
> >> for certain types of usages. So, MC is there.
> >>
> >> Such applications will probably need to know exactly what's the sensor,
> >> what are their bugs, how it is connected, what are the DSP blocks in the
> >> patch, how the DSP algorithms are implemented, etc, in order to obtain the
> >> the perfect image.
> >>
> >> Even on embedded devices like smartphones and tablets, I predict that both
> >> types of applications will be developed and used: people may use a generic
> >> application like flash player, and an specialized application provided by
> >> the manufacturer. Users can even develop their own applications generic
> >> apps using V4L2 directly, at the devices that allow that.
> >>
> >> As I said before: both application types are welcome. We just need to
> >> warrant that a pure V4L application will work reasonably well.
> > 
> > That's why we have libv4l. The driver simply doesn't receive enough 
> > information to configure the hardware correctly from the VIDIOC_* calls. And 
> > as mentioned above, 3A algorithms, required by "simple" V4L2 applications, 
> > need to be implemented in userspace anyway.
> 
> It is OK to improve users experience via libv4l. What I'm saying is that it is
> NOT OK to remove V4L2 API support from the driver, forcing users to use some
> hardware plugin at libv4l.

Either you know your hardware or you do not. General purpose applications can
rely on functionality provided by libv4l, but if you do not use it, then you
need to configure the underlying device, which is something where the Media
controller and v4l2_subdev interfaces are tremendously useful.

> >>> Allowing the bridge driver to configure subdevs at all times would
> >>> prevent the subdev/MC API to work.
> >>
> >> Well, then we need to think on an alternative for that. It seems an
> >> interesting theme for the media workshop at the Kernel Summit/2011.
> >>
> >>>>> There is a conflict there that in order to use
> >>>>> 'optional' API the 'main' API behaviour must be affected....
> >>>>
> >>>> It is optional from userspace perspective. A V4L2-only application
> >>>> should be able to work with all drivers. However, a MC-aware
> >>>> application will likely be specific for some hardware, as it will need
> >>>> to know some device-specific stuff.
> >>>>
> >>>> Both kinds of applications are welcome, but dropping support for
> >>>> V4L2-only applications is the wrong thing to do.
> >>>>
> >>>>> And I really cant use V4L2 API only as is because it's too limited.
> >>>>
> >>>> Why?
> >>>
> >>> For instance there is really yet no support for scaler and composition
> >>> onto a target buffer in the Video Capture Interface (we also use sensors
> >>> with built in scalers). It's difficult to efficiently manage
> >>> capture/preview pipelines. It is impossible to discover the system
> >>> topology.
> >>
> >> Scaler were always supported by V4L2: if the resolution specified by S_FMT
> >> is not what the sensor provides, then scale. All non-embedded drivers with
> >> sensor or bridge supports scale does that.
> >>
> >> Composition is not properly supported yet. It could make sense to add it to
> >> V4L. How do you think MC API would help with composite?
> >>
> >> Managing capture/preview pipelines will require some support at V4L2 level.
> >> This is a problem that needs to be addressed there anyway, as buffers for
> >> preview/capture need to be allocated. There's an RFC about that, but I
> >> don't think it covers the pipelines for it.
> > 
> > Managing pipelines is a policy decision, and needs to be implemented in 
> > userspace. Once again, the solution here is libv4l.
> 
> If V4L2 API is not enough, implementing it on libv4l won't solve, as userspace
> apps will use V4L2 API for requresting it.

There are two kinds of applications: specialised and generic. The generic
ones may rely on restrictive policies put in place by a libv4l plugin,
whereas the specialised applications need to access the device's features
directly to get the most out of it.

Do you have a general purpose application which implements a still capture
sequence with pre-flash, for example? It does not exist now, but hopefully
will in the future.

Kind regards,

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-09 23:18               ` Sakari Ailus
@ 2011-08-10  0:22                 ` Mauro Carvalho Chehab
  2011-08-10  8:41                   ` Sylwester Nawrocki
  2011-08-15 12:45                   ` Laurent Pinchart
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-10  0:22 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, hverkuil

On 09-08-2011 20:18, Sakari Ailus wrote:
> Hi Mauro,
> 
> On Tue, Aug 09, 2011 at 05:05:47PM -0300, Mauro Carvalho Chehab wrote:
>> Em 29-07-2011 05:36, Laurent Pinchart escreveu:
>>> Hi Mauro,
>>>
>>> On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
>>>> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
>>>>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
>>>>>> Accumulating sub-dev controls at the video node is the right thing to
>>>>>> do.
>>>>>>
>>>>>> An MC-aware application will need to handle with that, but that doesn't
>>>>>> sound to be hard. All such application would need to do is to first
>>>>>> probe the subdev controls, and, when parsing the videodev controls, not
>>>>>> register controls with duplicated ID's, or to mark them with some
>>>>>> special attribute.
>>>>>
>>>>> IMHO it's not a big issue in general. Still, both subdev and the host
>>>>> device may support same control id. And then even though the control ids
>>>>> are same on the subdev and the host they could mean physically different
>>>>> controls (since when registering a subdev at the host driver the host's
>>>>> controls take precedence and doubling subdev controls are skipped).
>>>>
>>>> True, but, except for specific usecases, the host control is enough.
>>>
>>> Not for embedded devices. In many case the control won't even be implemented 
>>> by the host. If your system has two sensors connected to the same host, they 
>>> will both expose an exposure time control. Which one do you configure through 
>>> the video node ? The two sensors will likely have different bounds for the 
>>> same control, how do you report that ?
>>
>> If the device has two sensors that are mutually exclusive, they should be
>> mapped as two different inputs. The showed control should be the one used
>> by the currently active input.
> 
> Video nodes should represent a dma engine rather than an image source.

True. Image sources are represented, on V4L2, by inputs.

> You
> could use the same hardware to process data from memory to memory while
> streaming video from a sensor to memory at the same time. This is a quite
> typical use in embedded systems.
> 
> Usually boards where two sensors are connected to an isp the dependencies
> are not as straightforward as the sensors being fully independent or require
> exclusive access. Typically some of the hardware blocks are shared between
> the two and can be used by only one sensor at a time, so instead you may not
> get full functionality from both at the same time. And you need to be able
> to choose which one uses that hardware block. This is exactly what Media
> controller interface models perfectly.

I see.

> See FIMC Media controller graph AND Sylwester's explanation on it; a few
> links are actually missing from the grapg.
> 
> <URL:http://www.spinics.net/lists/linux-media/msg35504.html>
> <URL:http://wstaw.org/m/2011/05/26/fimc_graph__.png>
> 
> Cc Hans.
> 
>> If the sensors aren't mutually exclusive, then two different video nodes will
>> be shown in userspace.
>>
>>> Those controls are also quite useless for generic V4L2 applications, which 
>>> will want auto-exposure anyway. This needs to be implemented in userspace in 
>>> libv4l.
>>
>> Several webcams export exposure controls. Why shouldn't those controls be exposed
>> to userspace anymore?
> 
> This is not a webcam,

I know, but the analogy is still valid.

> it is a software controlled high end digital camera on
> a mobile device. The difference is that the functionality offered by the
> hardware is at much lower level compared to a webcam; the result is that
> more detailed control ia required but also much flexibility and performance
> is gained.

I see. What I fail to see is why controls should be removed from userspace.
If the hardware is more powerful, I expect to see more controls exported, not
the V4L2 functionality removed from the driver.

>> Ok, if the hardware won't support 3A algorithm, libv4l will implement it, 
>> eventually using an extra hardware-aware code to get the best performance 
>> for that specific device, but this doesn't mean that the user should always 
>> use it.
> 
> Why not? What would be the alternative?

The user may want or need to disable the 3A algorithm and control some
hardware parameters directly, for example to take an over-exposed picture,
to create some neat effects, or to get an image whose aperture/exposure time
is out of the 3A range.
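
With the standard camera-class controls that takes only a couple of ioctls;
a rough sketch (assuming the driver exposes these control IDs, which not
every sensor or bridge does, and that /dev/video0 is the right node):

/* Sketch: switch off auto-exposure and deliberately over-expose. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_ctrl(int fd, unsigned int id, int value)
{
    struct v4l2_control ctrl = { .id = id, .value = value };

    return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);    /* assumed capture node */

    if (fd < 0)
        return 1;
    /* Take over from the 3A algorithm... */
    set_ctrl(fd, V4L2_CID_EXPOSURE_AUTO, V4L2_EXPOSURE_MANUAL);
    /* ...and over-expose on purpose (this control is in 100 us units). */
    set_ctrl(fd, V4L2_CID_EXPOSURE_ABSOLUTE, 5000);    /* 500 ms */
    return 0;
}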

>> Btw, the 3A algorithm is one of the things I don't like on my cell phone: while
>> it works most of the time, sometimes I want to disable it and manually adjust,
>> as it produces dark images, when there's a very bright light somewhere on the
>> image background. Manually adjusting the exposure time and aperture is
>> something relevant for some users.
> 
> You do want the 3A algorithms even if you use manual white balance. What the
> automatic white balance algorithm produces is (among other things) gamma
> tables, rgb-to-rgb conversion matrices and lens shading correction tables. I
> doubt any end user, even if it was you, would like to fiddle with such large
> tables directly when capturing photos. 

There are some hacks for several professional and amateur cameras that replace
the existing 3A algorithms by... NONE. The idea is to get the raw data directly
from the sensor, and use some software like Gimp or Photoshop to do lens
correction, temperature correction, white balance, etc, at post-processing.
The advantage of this type of usage is that the photographer can fine-tune the
generated image to produce what he wants, using more sophisticated (and not
real-time) algorithms.

[1] For example, one such firmware, which I use on my Canon digital camera,
is available at:
    http://chdk.wikia.com/wiki/CHDK

> The size of this configuration could
> easily be around 10 kB. A higher level control is required; colour
> temperature, for instance. And its implementation involves the same
> automatic white balance algorithm.
> 
> You must know your hardware very, very well to use the aforementioned low
> level controls and in such a case you have no reason not to use the MC
> interface to configure the device either. Configuring the device image pipe
> using MC is actually a number of magnitudes less complex, and I say it's
> been quite an achievement that we have such an interface which makes it
> so effortless to do.

For sure, controls that expose the 3A correction algorithm at the DSP level
shouldn't be exposed via the V4L2 interface, but things like disabling 3A and
manually controlling the sensor (aperture, exposure, analog zoom, etc.) make
sense to export.
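
Discovering what was exported, and with which bounds, is also plain V4L2;
a sketch (the subdev node path and the control ID are only assumptions for
the example):

/* Sketch: query the exposure range a sensor subdev actually reports. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/v4l-subdev8", O_RDWR);    /* assumed sensor subdev */
    struct v4l2_queryctrl qc;

    if (fd < 0)
        return 1;
    memset(&qc, 0, sizeof(qc));
    qc.id = V4L2_CID_EXPOSURE;
    if (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0)
        printf("exposure: %d..%d (step %d)\n",
               qc.minimum, qc.maximum, qc.step);
    return 0;
}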

>>>>> Also there might be some preference at user space, at which stage of the
>>>>> pipeline to apply some controls. This is where the subdev API helps, and
>>>>> plain video node API does not.
>>>>
>>>> Again, this is for specific usecases. On such cases, what is expected is
>>>> that the more generic control will be exported via V4L2 API.
>>>>
>>>>>>> Thus it's a bit hard to imagine that we could do something like
>>>>>>> "optionally not to inherit controls" as the subdev/MC API is optional.
>>>>>>> :)
>>>>>>
>>>>>> This was actually implemented. There are some cases at ivtv/cx18 driver
>>>>>> where both the bridge and a subdev provides the same control (audio
>>>>>> volume, for example). The idea is to allow the bridge driver to touch
>>>>>> at the subdev control without exposing it to userspace, since the
>>>>>> desire was that the bridge driver itself would expose such control,
>>>>>> using a logic that combines changing the subdev and the bridge
>>>>>> registers for volume.
>>>>>
>>>>> This seem like hard coding a policy in the driver;) Then there is no way
>>>>> (it might not be worth the effort though) to play with volume level at
>>>>> both devices, e.g. to obtain optimal S/N ratio.
>>>>
>>>> In general, playing with just one control is enough. Andy had a different
>>>> opinion when this issue were discussed, and he thinks that playing with
>>>> both is better. At the end, this is a developers decision, depending on
>>>> how much information (and bug reports) he had.
>>>
>>> ivtv/cx18 is a completely different use case, where hardware configurations 
>>> are known, and experiments possible to find out which control(s) to use and 
>>> how. In this case you can't know in advance what the sensor and host drivers 
>>> will be used for.
>>
>> Why not? I never saw an embedded hardware that allows physically changing the
>> sensor.
>>
>>> Even if you did, fine image quality tuning requires 
>>> accessing pretty much all controls individually anyway.
>>
>> The same is also true for non-embedded hardware. The only situation where
>> V4L2 API is not enough is when there are two controls of the same type
>> active. For example, 2 active volume controls, one at the audio demod, and
>> another at the bridge. There may have some cases where you can do the same
>> thing at the sensor or at a DSP block. This is where MC API gives an
>> improvement, by allowing changing both, instead of just one of the
>> controls.
> 
> This may be true on non-embedded hardware. It's important to know which
> hardware component implements a particular control; for example digital gain
> typically would take longer to have an effect if it is set on sensor rather
> than on the ISP. Also, is you set digital gain, you want to set it on the
> same device where your analog gain is --- the sensor --- to avoid badly
> exposed imagws.

I'd say that the driver should expose the hardware control, if 3A is disabled.

> When it comes to scaling, the scaling quality, power consumption and
> performance may well be very different depending on where it is done. There
> typically are data rate limitations at different parts of the pipeline. The
> plain V4L2 has never been meant for this nor provides any support when doing
> something like above, and these are just few examples.

I see your point.

>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>> situation with the userspace subdevs is a bit different. Because with one
>>>>> API we directly expose some functionality for applications, with other
>>>>> we code it in the kernel, to make the devices appear uniform at user
>>>>> space.
>>>>
>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>> userspace in an uniform way. MC api is for special applications that might
>>>> need to access some internal functions on embedded devices.
>>>>
>>>> Of course, there are some cases where it doesn't make sense to export a
>>>> subdev control via V4L2 API.
>>>>
>>>>>>> Also, the sensor subdev can be configured in the video node driver as
>>>>>>> well as through the subdev device node. Both APIs can do the same
>>>>>>> thing but in order to let the subdev API work as expected the video
>>>>>>> node driver must be forbidden to configure the subdev.
>>>>>>
>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge driver
>>>>>> call. The subdev will need a mutex anyway, as two MC applications may
>>>>>> be opening it simultaneously. I can't see why it should forbid changing
>>>>>> the control from the bridge driver call.
>>>>>
>>>>> Please do not forget there might be more than one subdev to configure and
>>>>> that the bridge itself is also a subdev (which exposes a scaler
>>>>> interface, for instance). A situation pretty much like in Figure 4.4 [1]
>>>>> (after the scaler there is also a video node to configure, but we may
>>>>> assume that pixel resolution at the scaler pad 1 is same as at the video
>>>>> node). Assuming the format and crop configuration flow is from sensor to
>>>>> host scaler direction, if we have tried to configure _all_ subdevs when
>>>>> the last stage of the pipeline is configured (i.e. video node) the whole
>>>>> scaler and crop/composition configuration we have been destroyed at that
>>>>> time. And there is more to configure than VIDIOC_S_FMT can do.
>>>>
>>>> Think from users perspective: all user wants is to see a video of a given
>>>> resolution. S_FMT (and a few other VIDIOC_* calls) have everything that
>>>> the user wants: the desired resolution, framerate and format.
>>>>
>>>> Specialized applications indeed need more, in order to get the best images
>>>> for certain types of usages. So, MC is there.
>>>>
>>>> Such applications will probably need to know exactly what's the sensor,
>>>> what are their bugs, how it is connected, what are the DSP blocks in the
>>>> patch, how the DSP algorithms are implemented, etc, in order to obtain the
>>>> the perfect image.
>>>>
>>>> Even on embedded devices like smartphones and tablets, I predict that both
>>>> types of applications will be developed and used: people may use a generic
>>>> application like flash player, and an specialized application provided by
>>>> the manufacturer. Users can even develop their own applications generic
>>>> apps using V4L2 directly, at the devices that allow that.
>>>>
>>>> As I said before: both application types are welcome. We just need to
>>>> warrant that a pure V4L application will work reasonably well.
>>>
>>> That's why we have libv4l. The driver simply doesn't receive enough 
>>> information to configure the hardware correctly from the VIDIOC_* calls. And 
>>> as mentioned above, 3A algorithms, required by "simple" V4L2 applications, 
>>> need to be implemented in userspace anyway.
>>
>> It is OK to improve users experience via libv4l. What I'm saying is that it is
>> NOT OK to remove V4L2 API support from the driver, forcing users to use some
>> hardware plugin at libv4l.
> 
> Either do you know your hardware or do not know it. General purpose
> applications can rely on functionality provided by libv4l, but if you do not
> use it, then you need to configure the underlying device. Which is something
> where the Media controller and v4l2_subdev interfaces are tremendously
> useful.

Agreed, but while we don't actually have hardware-specific libv4l plugins
committed in the v4l-utils git tree, all we have to guarantee that the hardware
will work with a generic userspace application is the V4L2 API.
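
And by "generic userspace application" I mean something as simple as the
sketch below: it only uses the libv4l2 wrappers, so a future hardware-specific
plugin could hook in underneath without the application changing (the path and
format are assumptions, error handling omitted):

/* Sketch: a generic application path that a libv4l plugin could serve. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <libv4l2.h>

int main(void)
{
    struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
    int fd = v4l2_open("/dev/video0", O_RDWR);

    if (fd < 0)
        return 1;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;  /* libv4l converts if needed */
    v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt);
    /* ...request buffers, mmap and stream as with any other V4L2 device... */
    v4l2_close(fd);
    return 0;
}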

>>>>> Allowing the bridge driver to configure subdevs at all times would
>>>>> prevent the subdev/MC API to work.
>>>>
>>>> Well, then we need to think on an alternative for that. It seems an
>>>> interesting theme for the media workshop at the Kernel Summit/2011.
>>>>
>>>>>>> There is a conflict there that in order to use
>>>>>>> 'optional' API the 'main' API behaviour must be affected....
>>>>>>
>>>>>> It is optional from userspace perspective. A V4L2-only application
>>>>>> should be able to work with all drivers. However, a MC-aware
>>>>>> application will likely be specific for some hardware, as it will need
>>>>>> to know some device-specific stuff.
>>>>>>
>>>>>> Both kinds of applications are welcome, but dropping support for
>>>>>> V4L2-only applications is the wrong thing to do.
>>>>>>
>>>>>>> And I really cant use V4L2 API only as is because it's too limited.
>>>>>>
>>>>>> Why?
>>>>>
>>>>> For instance there is really yet no support for scaler and composition
>>>>> onto a target buffer in the Video Capture Interface (we also use sensors
>>>>> with built in scalers). It's difficult to efficiently manage
>>>>> capture/preview pipelines. It is impossible to discover the system
>>>>> topology.
>>>>
>>>> Scaler were always supported by V4L2: if the resolution specified by S_FMT
>>>> is not what the sensor provides, then scale. All non-embedded drivers with
>>>> sensor or bridge supports scale does that.
>>>>
>>>> Composition is not properly supported yet. It could make sense to add it to
>>>> V4L. How do you think MC API would help with composite?
>>>>
>>>> Managing capture/preview pipelines will require some support at V4L2 level.
>>>> This is a problem that needs to be addressed there anyway, as buffers for
>>>> preview/capture need to be allocated. There's an RFC about that, but I
>>>> don't think it covers the pipelines for it.
>>>
>>> Managing pipelines is a policy decision, and needs to be implemented in 
>>> userspace. Once again, the solution here is libv4l.
>>
>> If V4L2 API is not enough, implementing it on libv4l won't solve, as userspace
>> apps will use V4L2 API for requresting it.
> 
> There are two kind of applications: specialised and generic. The generic
> ones may rely on restrictive policies put in place by a libv4l plugin
> whereas the specialised applications need to access the device's features
> directly to get the most out of it.

A submitted upstream driver should be capable of working with the existing
tools/userspace.

Currently, there are no such libv4l plugins (or, at least, I failed to see a
merged plugin there for N9, S5P, etc.). Let's not upstream new drivers or remove
functionality from already existing drivers based on something that has yet 
to be developed.

After having it there, properly working and tested independently, we may consider
patches removing V4L2 interfaces that were obsoleted in favor of the libv4l
implementation, of course using the kernel way of deprecating interfaces. But
doing it before having it doesn't make any sense.

Let's not put the cart before the horse.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-10  0:22                 ` Mauro Carvalho Chehab
@ 2011-08-10  8:41                   ` Sylwester Nawrocki
  2011-08-10 12:52                     ` Mauro Carvalho Chehab
  2011-08-15 12:45                   ` Laurent Pinchart
  1 sibling, 1 reply; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-08-10  8:41 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Laurent Pinchart, Sylwester Nawrocki, linux-media,
	hverkuil

On 08/10/2011 02:22 AM, Mauro Carvalho Chehab wrote:
> Em 09-08-2011 20:18, Sakari Ailus escreveu:
>> Hi Mauro,
>>
>> On Tue, Aug 09, 2011 at 05:05:47PM -0300, Mauro Carvalho Chehab wrote:
>>> Em 29-07-2011 05:36, Laurent Pinchart escreveu:
>>>> Hi Mauro,
>>>>
>>>> On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
>>>>> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
>>>>>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
>>>>>>> Accumulating sub-dev controls at the video node is the right thing to
>>>>>>> do.
>>>>>>>
>>>>>>> An MC-aware application will need to handle with that, but that doesn't
>>>>>>> sound to be hard. All such application would need to do is to first
>>>>>>> probe the subdev controls, and, when parsing the videodev controls, not
>>>>>>> register controls with duplicated ID's, or to mark them with some
>>>>>>> special attribute.
>>>>>>
>>>>>> IMHO it's not a big issue in general. Still, both subdev and the host
>>>>>> device may support same control id. And then even though the control ids
>>>>>> are same on the subdev and the host they could mean physically different
>>>>>> controls (since when registering a subdev at the host driver the host's
>>>>>> controls take precedence and doubling subdev controls are skipped).
>>>>>
>>>>> True, but, except for specific usecases, the host control is enough.
>>>>
>>>> Not for embedded devices. In many case the control won't even be implemented 
>>>> by the host. If your system has two sensors connected to the same host, they 
>>>> will both expose an exposure time control. Which one do you configure through 
>>>> the video node ? The two sensors will likely have different bounds for the 
>>>> same control, how do you report that ?
>>>
>>> If the device has two sensors that are mutually exclusive, they should be
>>> mapped as two different inputs. The showed control should be the one used
>>> by the currently active input.
>>
>> Video nodes should represent a dma engine rather than an image source.
> 
> True. Image sources are represented, on V4L2, by inputs.
> 
>> You
>> could use the same hardware to process data from memory to memory while
>> streaming video from a sensor to memory at the same time. This is a quite
>> typical use in embedded systems.
>>
>> Usually boards where two sensors are connected to an isp the dependencies
>> are not as straightforward as the sensors being fully independent or require
>> exclusive access. Typically some of the hardware blocks are shared between
>> the two and can be used by only one sensor at a time, so instead you may not
>> get full functionality from both at the same time. And you need to be able
>> to choose which one uses that hardware block. This is exactly what Media
>> controller interface models perfectly.
> 
> I see.
> 
>> See FIMC Media controller graph AND Sylwester's explanation on it; a few
>> links are actually missing from the grapg.
>>
>> <URL:http://www.spinics.net/lists/linux-media/msg35504.html>
>> <URL:http://wstaw.org/m/2011/05/26/fimc_graph__.png>
>>
>> Cc Hans.
>>
>>> If the sensors aren't mutually exclusive, then two different video nodes will
>>> be shown in userspace.
>>>
>>>> Those controls are also quite useless for generic V4L2 applications, which 
>>>> will want auto-exposure anyway. This needs to be implemented in userspace in 
>>>> libv4l.
>>>
>>> Several webcams export exposure controls. Why shouldn't those controls be exposed
>>> to userspace anymore?
>>
>> This is not a webcam,
> 
> I know, but the analogy is still valid.
> 
>> it is a software controlled high end digital camera on
>> a mobile device. The difference is that the functionality offered by the
>> hardware is at much lower level compared to a webcam; the result is that
>> more detailed control ia required but also much flexibility and performance
>> is gained.
> 
> I see. What I failed to see is why to remove control from userspace. If the
> hardware is more powerful, I expect to see more controls exported, and not
> removing the V4L2 functionality from the driver.
> 
>>> Ok, if the hardware won't support 3A algorithm, libv4l will implement it, 
>>> eventually using an extra hardware-aware code to get the best performance 
>>> for that specific device, but this doesn't mean that the user should always 
>>> use it.
>>
>> Why not? What would be the alternative?
> 
> User may want or need to disable the 3A algo and control some hardware parameters
> hardware directly, for example, to take an over-exposed picture, to create some
> neat effects, or to get some image whose exposure aperture/time is out of the
> 3A range.
> 
>>> Btw, the 3A algorithm is one of the things I don't like on my cell phone: while
>>> it works most of the time, sometimes I want to disable it and manually adjust,
>>> as it produces dark images, when there's a very bright light somewhere on the
>>> image background. Manually adjusting the exposure time and aperture is
>>> something relevant for some users.
>>
>> You do want the 3A algorithms even if you use manual white balance. What the
>> automatic white balance algorithm produces is (among other things) gamma
>> tables, rgb-to-rgb conversion matrices and lens shading correction tables. I
>> doubt any end user, even if it was you, would like to fiddle with such large
>> tables directly when capturing photos. 
> 
> There are some hacks for several professional and amateur cameras that replace the
> existing 3A algorithms by... NONE. The idea is to get the raw data directly from 
> the sensor, and use some software like Gimp or Photoshop to do lens correction, 
> temperature correction, whitespace ballance, etc, at post-processing.
> The advantage of such type of usage is that the photographer can fine-tune the
> generated image to produce what he wants, using more sophisticated (and not
> real-time) algorithms.
> 
> [1] for example, one of such firmwares, that I use on my Canon Digital Camera
> is available at:
>     http://chdk.wikia.com/wiki/CHDK
> 
>> The size of this configuration could
>> easily be around 10 kB. A higher level control is required; colour
>> temperature, for instance. And its implementation involves the same
>> automatic white balance algorithm.
>>
>> You must know your hardware very, very well to use the aforementioned low
>> level controls and in such a case you have no reason not to use the MC
>> interface to configure the device either. Configuring the device image pipe
>> using MC is actually a number of magnitudes less complex, and I say it's
>> been quite an achievement that we have such an interface which makes it
>> so effortless to do.
> 
> For sure, such kind of controls that exposes the 3A correction algorithm at the
> DSP level shouldn't be exposed via V4L2 interface, but things like disabling 3A
> and manually controlling the sensor, like aperture, exposure, analog zoom, etc, 
> makes sense to be exported.
> 
>>>>>> Also there might be some preference at user space, at which stage of the
>>>>>> pipeline to apply some controls. This is where the subdev API helps, and
>>>>>> plain video node API does not.
>>>>>
>>>>> Again, this is for specific usecases. On such cases, what is expected is
>>>>> that the more generic control will be exported via V4L2 API.
>>>>>
>>>>>>>> Thus it's a bit hard to imagine that we could do something like
>>>>>>>> "optionally not to inherit controls" as the subdev/MC API is optional.
>>>>>>>> :)
>>>>>>>
>>>>>>> This was actually implemented. There are some cases at ivtv/cx18 driver
>>>>>>> where both the bridge and a subdev provides the same control (audio
>>>>>>> volume, for example). The idea is to allow the bridge driver to touch
>>>>>>> at the subdev control without exposing it to userspace, since the
>>>>>>> desire was that the bridge driver itself would expose such control,
>>>>>>> using a logic that combines changing the subdev and the bridge
>>>>>>> registers for volume.
>>>>>>
>>>>>> This seem like hard coding a policy in the driver;) Then there is no way
>>>>>> (it might not be worth the effort though) to play with volume level at
>>>>>> both devices, e.g. to obtain optimal S/N ratio.
>>>>>
>>>>> In general, playing with just one control is enough. Andy had a different
>>>>> opinion when this issue were discussed, and he thinks that playing with
>>>>> both is better. At the end, this is a developers decision, depending on
>>>>> how much information (and bug reports) he had.
>>>>
>>>> ivtv/cx18 is a completely different use case, where hardware configurations 
>>>> are known, and experiments possible to find out which control(s) to use and 
>>>> how. In this case you can't know in advance what the sensor and host drivers 
>>>> will be used for.
>>>
>>> Why not? I never saw an embedded hardware that allows physically changing the
>>> sensor.

I understood Laurent's statement as meaning that you can have the same ISP
driver deployed on multiple boards fitted with various sensors. Hence the
multiple configurations that cannot be known in advance.

>>>
>>>> Even if you did, fine image quality tuning requires 
>>>> accessing pretty much all controls individually anyway.
>>>
>>> The same is also true for non-embedded hardware. The only situation where
>>> V4L2 API is not enough is when there are two controls of the same type
>>> active. For example, 2 active volume controls, one at the audio demod, and
>>> another at the bridge. There may have some cases where you can do the same
>>> thing at the sensor or at a DSP block. This is where MC API gives an
>>> improvement, by allowing changing both, instead of just one of the
>>> controls.
>>
>> This may be true on non-embedded hardware. It's important to know which
>> hardware component implements a particular control; for example digital gain
>> typically would take longer to have an effect if it is set on sensor rather
>> than on the ISP. Also, is you set digital gain, you want to set it on the
>> same device where your analog gain is --- the sensor --- to avoid badly
>> exposed imagws.
> 
> I'd say that the driver should expose the hardware control, if 3A is disabled.
> 
>> When it comes to scaling, the scaling quality, power consumption and
>> performance may well be very different depending on where it is done. There
>> typically are data rate limitations at different parts of the pipeline. The
>> plain V4L2 has never been meant for this nor provides any support when doing
>> something like above, and these are just few examples.
> 
> I see your point.
> 
>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>>> situation with the userspace subdevs is a bit different. Because with one
>>>>>> API we directly expose some functionality for applications, with other
>>>>>> we code it in the kernel, to make the devices appear uniform at user
>>>>>> space.
>>>>>
>>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>>> userspace in an uniform way. MC api is for special applications that might
>>>>> need to access some internal functions on embedded devices.
>>>>>
>>>>> Of course, there are some cases where it doesn't make sense to export a
>>>>> subdev control via V4L2 API.
>>>>>
>>>>>>>> Also, the sensor subdev can be configured in the video node driver as
>>>>>>>> well as through the subdev device node. Both APIs can do the same
>>>>>>>> thing but in order to let the subdev API work as expected the video
>>>>>>>> node driver must be forbidden to configure the subdev.
>>>>>>>
>>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge driver
>>>>>>> call. The subdev will need a mutex anyway, as two MC applications may
>>>>>>> be opening it simultaneously. I can't see why it should forbid changing
>>>>>>> the control from the bridge driver call.
>>>>>>
>>>>>> Please do not forget there might be more than one subdev to configure and
>>>>>> that the bridge itself is also a subdev (which exposes a scaler
>>>>>> interface, for instance). A situation pretty much like in Figure 4.4 [1]
>>>>>> (after the scaler there is also a video node to configure, but we may
>>>>>> assume that pixel resolution at the scaler pad 1 is same as at the video
>>>>>> node). Assuming the format and crop configuration flow is from sensor to
>>>>>> host scaler direction, if we have tried to configure _all_ subdevs when
>>>>>> the last stage of the pipeline is configured (i.e. video node) the whole
>>>>>> scaler and crop/composition configuration we have been destroyed at that
>>>>>> time. And there is more to configure than VIDIOC_S_FMT can do.
>>>>>
>>>>> Think from users perspective: all user wants is to see a video of a given
>>>>> resolution. S_FMT (and a few other VIDIOC_* calls) have everything that
>>>>> the user wants: the desired resolution, framerate and format.
>>>>>
>>>>> Specialized applications indeed need more, in order to get the best images
>>>>> for certain types of usages. So, MC is there.
>>>>>
>>>>> Such applications will probably need to know exactly what's the sensor,
>>>>> what are their bugs, how it is connected, what are the DSP blocks in the
>>>>> patch, how the DSP algorithms are implemented, etc, in order to obtain the
>>>>> the perfect image.
>>>>>
>>>>> Even on embedded devices like smartphones and tablets, I predict that both
>>>>> types of applications will be developed and used: people may use a generic
>>>>> application like flash player, and an specialized application provided by
>>>>> the manufacturer. Users can even develop their own applications generic
>>>>> apps using V4L2 directly, at the devices that allow that.
>>>>>
>>>>> As I said before: both application types are welcome. We just need to
>>>>> warrant that a pure V4L application will work reasonably well.
>>>>
>>>> That's why we have libv4l. The driver simply doesn't receive enough 
>>>> information to configure the hardware correctly from the VIDIOC_* calls. And 
>>>> as mentioned above, 3A algorithms, required by "simple" V4L2 applications, 
>>>> need to be implemented in userspace anyway.
>>>
>>> It is OK to improve users experience via libv4l. What I'm saying is that it is
>>> NOT OK to remove V4L2 API support from the driver, forcing users to use some
>>> hardware plugin at libv4l.
>>
>> Either do you know your hardware or do not know it. General purpose
>> applications can rely on functionality provided by libv4l, but if you do not
>> use it, then you need to configure the underlying device. Which is something
>> where the Media controller and v4l2_subdev interfaces are tremendously
>> useful.
> 
> Agreed, but while we don't actually have libv4l hw-specific plugins committed
> at the v4l-utils git tree, all we have to warrant that the hardware will work
> with a generic userspace application is the V4L2 API.
> 
>>>>>> Allowing the bridge driver to configure subdevs at all times would
>>>>>> prevent the subdev/MC API to work.
>>>>>
>>>>> Well, then we need to think on an alternative for that. It seems an
>>>>> interesting theme for the media workshop at the Kernel Summit/2011.
>>>>>
>>>>>>>> There is a conflict there that in order to use
>>>>>>>> 'optional' API the 'main' API behaviour must be affected....
>>>>>>>
>>>>>>> It is optional from userspace perspective. A V4L2-only application
>>>>>>> should be able to work with all drivers. However, a MC-aware
>>>>>>> application will likely be specific for some hardware, as it will need
>>>>>>> to know some device-specific stuff.
>>>>>>>
>>>>>>> Both kinds of applications are welcome, but dropping support for
>>>>>>> V4L2-only applications is the wrong thing to do.
>>>>>>>
>>>>>>>> And I really cant use V4L2 API only as is because it's too limited.
>>>>>>>
>>>>>>> Why?
>>>>>>
>>>>>> For instance there is really yet no support for scaler and composition
>>>>>> onto a target buffer in the Video Capture Interface (we also use sensors
>>>>>> with built in scalers). It's difficult to efficiently manage
>>>>>> capture/preview pipelines. It is impossible to discover the system
>>>>>> topology.
>>>>>
>>>>> Scaler were always supported by V4L2: if the resolution specified by S_FMT
>>>>> is not what the sensor provides, then scale. All non-embedded drivers with
>>>>> sensor or bridge supports scale does that.
>>>>>
>>>>> Composition is not properly supported yet. It could make sense to add it to
>>>>> V4L. How do you think MC API would help with composite?
>>>>>
>>>>> Managing capture/preview pipelines will require some support at V4L2 level.
>>>>> This is a problem that needs to be addressed there anyway, as buffers for
>>>>> preview/capture need to be allocated. There's an RFC about that, but I
>>>>> don't think it covers the pipelines for it.
>>>>
>>>> Managing pipelines is a policy decision, and needs to be implemented in 
>>>> userspace. Once again, the solution here is libv4l.
>>>
>>> If V4L2 API is not enough, implementing it on libv4l won't solve, as userspace
>>> apps will use V4L2 API for requresting it.
>>
>> There are two kind of applications: specialised and generic. The generic
>> ones may rely on restrictive policies put in place by a libv4l plugin
>> whereas the specialised applications need to access the device's features
>> directly to get the most out of it.
> 
> A submitted upstream driver should be capable of working with the existing
> tools/userspace.
> 
> Currently, there isn't such libv4l plugins (or, at least, I failed to see a
> merged plugin there for N9, S5P, etc). Let's not upstream new drivers or remove 
> functionalities from already existing drivers based on something that has yet 
> to be developed.
> 
> After having it there properly working and tested independently, we may consider
> patches removing V4L2 interfaces that were obsoleted in favor of using the libv4l
> implementation, of course using the Kernel way of deprecating interfaces. But
> doing it before having it, doesn't make any sense.
> 
> Let's not put the the cart before the horse.

That's a good point. My long term plan was to deprecate and remove duplicated
ioctls in the driver _once_ support for the regular V4L2 interface on top of the
MC/subdev API is added to the v4l2 libraries. But this will happen after I create
an initial... *cough* OpenMAX IL for the driver. Which is not what the Tigers
like best...

--
Regards,
Sylwester

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-10  8:41                   ` Sylwester Nawrocki
@ 2011-08-10 12:52                     ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-10 12:52 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sakari Ailus, Laurent Pinchart, Sylwester Nawrocki, linux-media,
	hverkuil

Em 10-08-2011 05:41, Sylwester Nawrocki escreveu:
>>>> Why not? I never saw an embedded hardware that allows physically changing the
>>>> sensor.
> 
> I understood Laurent's statement that you can have same ISP driver deployed on
> multiple boards fitted with various sensors. Hence the multiple configurations
> that cannot be known in advance,

True, but such a dependency should be solved either at config time or at
probe time. It doesn't make any sense to show that hardware is present when
it is not. This applies to both the V4L and MC APIs (and also to sysfs).

>>>> If V4L2 API is not enough, implementing it on libv4l won't solve, as userspace
>>>> apps will use V4L2 API for requresting it.
>>>
>>> There are two kind of applications: specialised and generic. The generic
>>> ones may rely on restrictive policies put in place by a libv4l plugin
>>> whereas the specialised applications need to access the device's features
>>> directly to get the most out of it.
>>
>> A submitted upstream driver should be capable of working with the existing
>> tools/userspace.
>>
>> Currently, there isn't such libv4l plugins (or, at least, I failed to see a
>> merged plugin there for N9, S5P, etc). Let's not upstream new drivers or remove 
>> functionalities from already existing drivers based on something that has yet 
>> to be developed.
>>
>> After having it there properly working and tested independently, we may consider
>> patches removing V4L2 interfaces that were obsoleted in favor of using the libv4l
>> implementation, of course using the Kernel way of deprecating interfaces. But
>> doing it before having it, doesn't make any sense.
>>
>> Let's not put the the cart before the horse.
> 
> That's a good point. My long term plan was to deprecate and remove duplicated ioctls
> at the driver _once_ support for regular V4L2 interface on top of MC/subdev API
> is added at the v4l2 libraries. But this will happen after I create an initial.. 
> *cough* openmax IL for the driver. Which is not what the Tigers like best..

Ok.
> 
> --
> Regards,
> Sylwester


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-09 20:05             ` Mauro Carvalho Chehab
  2011-08-09 23:18               ` Sakari Ailus
@ 2011-08-15 12:30               ` Laurent Pinchart
  2011-08-16  0:13                 ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-15 12:30 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Hi Mauro,

On Tuesday 09 August 2011 22:05:47 Mauro Carvalho Chehab wrote:
> Em 29-07-2011 05:36, Laurent Pinchart escreveu:
> > On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
> >> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
> >>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
> >>>> Accumulating sub-dev controls at the video node is the right thing to
> >>>> do.
> >>>> 
> >>>> An MC-aware application will need to handle with that, but that
> >>>> doesn't sound to be hard. All such application would need to do is to
> >>>> first probe the subdev controls, and, when parsing the videodev
> >>>> controls, not register controls with duplicated ID's, or to mark them
> >>>> with some special attribute.
> >>> 
> >>> IMHO it's not a big issue in general. Still, both subdev and the host
> >>> device may support same control id. And then even though the control
> >>> ids are same on the subdev and the host they could mean physically
> >>> different controls (since when registering a subdev at the host driver
> >>> the host's controls take precedence and doubling subdev controls are
> >>> skipped).
> >> 
> >> True, but, except for specific usecases, the host control is enough.
> > 
> > Not for embedded devices. In many case the control won't even be
> > implemented by the host. If your system has two sensors connected to the
> > same host, they will both expose an exposure time control. Which one do
> > you configure through the video node ? The two sensors will likely have
> > different bounds for the same control, how do you report that ?
> 
> If the device has two sensors that are mutually exclusive, they should be
> mapped as two different inputs. The showed control should be the one used
> by the currently active input.
> 
> If the sensors aren't mutually exclusive, then two different video nodes
> will be shown in userspace.

It's more complex than that. The OMAP3 ISP driver exposes 7 video nodes 
regardless of the number of sensors. Sensors can be mutually exclusive or not, 
depending on the board. S_INPUT has its use cases, but is less useful on 
embedded hardware.

> > Those controls are also quite useless for generic V4L2 applications,
> > which will want auto-exposure anyway. This needs to be implemented in
> > userspace in libv4l.
> 
> Several webcams export exposure controls. Why shouldn't those controls be
> exposed to userspace anymore?
> 
> Ok, if the hardware won't support 3A algorithm, libv4l will implement it,
> eventually using an extra hardware-aware code to get the best performance
> for that specific device, but this doesn't mean that the user should always
> use it.
>
> Btw, the 3A algorithm is one of the things I don't like on my cell phone:
> while it works most of the time, sometimes I want to disable it and
> manually adjust, as it produces dark images, when there's a very bright
> light somewhere on the image background. Manually adjusting the exposure
> time and aperture is something relevant for some users.

It is, but on embedded devices that usually requires the application to be 
hardware-aware. Exposure time limits depend on blanking, which in turn 
influences the frame rate along with the pixel clock (often configurable as 
well). Programming those settings wrong can exceed the available ISP
bandwidth.
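
To put rough numbers on it (the values below are purely illustrative, not
taken from any real sensor):

/* Sketch: how blanking and pixel clock bound frame rate and exposure. */
#include <stdio.h>

int main(void)
{
    double pixel_clock = 96000000.0;     /* Hz, assumed */
    double width = 2592, hblank = 408;   /* pixels, assumed */
    double height = 1944, vblank = 56;   /* lines, assumed */

    double line_time = (width + hblank) / pixel_clock;
    double frame_time = line_time * (height + vblank);

    /* The maximum integration time is bounded by the frame period, so
     * changing blanking or the pixel clock moves both the frame rate
     * and the usable exposure range. */
    printf("frame rate %.2f fps, max exposure ~%.1f ms\n",
           1.0 / frame_time, frame_time * 1e3);
    return 0;
}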

The world unfortunately stopped being simple some time ago :-)

> >>> Also there might be some preference at user space, at which stage of
> >>> the pipeline to apply some controls. This is where the subdev API
> >>> helps, and plain video node API does not.
> >> 
> >> Again, this is for specific usecases. On such cases, what is expected is
> >> that the more generic control will be exported via V4L2 API.
> >> 
> >>>>> Thus it's a bit hard to imagine that we could do something like
> >>>>> "optionally not to inherit controls" as the subdev/MC API is
> >>>>> optional.
> >>>>> 
> >>>>> :)
> >>>> 
> >>>> This was actually implemented. There are some cases at ivtv/cx18
> >>>> driver where both the bridge and a subdev provides the same control
> >>>> (audio volume, for example). The idea is to allow the bridge driver
> >>>> to touch at the subdev control without exposing it to userspace,
> >>>> since the desire was that the bridge driver itself would expose such
> >>>> control, using a logic that combines changing the subdev and the
> >>>> bridge registers for volume.
> >>> 
> >>> This seem like hard coding a policy in the driver;) Then there is no
> >>> way (it might not be worth the effort though) to play with volume
> >>> level at both devices, e.g. to obtain optimal S/N ratio.
> >> 
> >> In general, playing with just one control is enough. Andy had a
> >> different opinion when this issue were discussed, and he thinks that
> >> playing with both is better. At the end, this is a developers decision,
> >> depending on how much information (and bug reports) he had.
> > 
> > ivtv/cx18 is a completely different use case, where hardware
> > configurations are known, and experiments possible to find out which
> > control(s) to use and how. In this case you can't know in advance what
> > the sensor and host drivers will be used for.
> 
> Why not?

My point is that the ISP driver developer can't know in advance which sensor
will be used on systems that don't exist yet.

> I never saw an embedded hardware that allows physically changing the sensor.

Beagleboard + pluggable sensor board.

> > Even if you did, fine image quality tuning requires accessing pretty much
> > all controls individually anyway.
> 
> The same is also true for non-embedded hardware. The only situation where
> V4L2 API is not enough is when there are two controls of the same type
> active. For example, 2 active volume controls, one at the audio demod, and
> another at the bridge. There may have some cases where you can do the same
> thing at the sensor or at a DSP block. This is where MC API gives an
> improvement, by allowing changing both, instead of just one of the
> controls.

To be precise it's the V4L2 subdev userspace API that allows that, not the MC 
API.
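
For instance, if both the sensor and the ISP expose V4L2_CID_GAIN, the
application simply picks the device node it wants to touch; a sketch with
made-up subdev paths:

/* Sketch: the same control ID, applied explicitly to one device or the other. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_gain(const char *node, int value)
{
    struct v4l2_control ctrl = { .id = V4L2_CID_GAIN, .value = value };
    int fd = open(node, O_RDWR);

    if (fd < 0)
        return -1;
    return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}

int main(void)
{
    set_gain("/dev/v4l-subdev8", 16);   /* assumed: the sensor */
    set_gain("/dev/v4l-subdev2", 4);    /* assumed: an ISP processing block */
    return 0;
}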

> >>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>> situation with the userspace subdevs is a bit different. Because with
> >>> one API we directly expose some functionality for applications, with
> >>> other we code it in the kernel, to make the devices appear uniform at
> >>> user space.
> >> 
> >> Not sure if I understood you. V4L2 export drivers functionality to
> >> userspace in an uniform way. MC api is for special applications that
> >> might need to access some internal functions on embedded devices.
> >> 
> >> Of course, there are some cases where it doesn't make sense to export a
> >> subdev control via V4L2 API.
> >> 
> >>>>> Also, the sensor subdev can be configured in the video node driver as
> >>>>> well as through the subdev device node. Both APIs can do the same
> >>>>> thing but in order to let the subdev API work as expected the video
> >>>>> node driver must be forbidden to configure the subdev.
> >>>> 
> >>>> Why? For the sensor, a V4L2 API call will look just like a bridge
> >>>> driver call. The subdev will need a mutex anyway, as two MC
> >>>> applications may be opening it simultaneously. I can't see why it
> >>>> should forbid changing the control from the bridge driver call.
> >>> 
> >>> Please do not forget there might be more than one subdev to configure
> >>> and that the bridge itself is also a subdev (which exposes a scaler
> >>> interface, for instance). A situation pretty much like in Figure 4.4
> >>> [1] (after the scaler there is also a video node to configure, but we
> >>> may assume that pixel resolution at the scaler pad 1 is same as at the
> >>> video node). Assuming the format and crop configuration flow is from
> >>> sensor to host scaler direction, if we have tried to configure _all_
> >>> subdevs when the last stage of the pipeline is configured (i.e. video
> >>> node) the whole scaler and crop/composition configuration we have been
> >>> destroyed at that time. And there is more to configure than
> >>> VIDIOC_S_FMT can do.
> >> 
> >> Think from users perspective: all user wants is to see a video of a
> >> given resolution. S_FMT (and a few other VIDIOC_* calls) have
> >> everything that the user wants: the desired resolution, framerate and
> >> format.
> >> 
> >> Specialized applications indeed need more, in order to get the best
> >> images for certain types of usages. So, MC is there.
> >> 
> >> Such applications will probably need to know exactly what's the sensor,
> >> what are their bugs, how it is connected, what are the DSP blocks in the
> >> patch, how the DSP algorithms are implemented, etc, in order to obtain
> >> the the perfect image.
> >> 
> >> Even on embedded devices like smartphones and tablets, I predict that
> >> both types of applications will be developed and used: people may use a
> >> generic application like flash player, and an specialized application
> >> provided by the manufacturer. Users can even develop their own
> >> applications generic apps using V4L2 directly, at the devices that
> >> allow that.
> >> 
> >> As I said before: both application types are welcome. We just need to
> >> warrant that a pure V4L application will work reasonably well.
> > 
> > That's why we have libv4l. The driver simply doesn't receive enough
> > information to configure the hardware correctly from the VIDIOC_* calls.
> > And as mentioned above, 3A algorithms, required by "simple" V4L2
> > applications, need to be implemented in userspace anyway.
> 
> It is OK to improve users experience via libv4l. What I'm saying is that it
> is NOT OK to remove V4L2 API support from the driver, forcing users to use
> some hardware plugin at libv4l.

Let me be clear on this. I'm *NOT* advocating removing V4L2 API support from 
any driver (well, on the drivers I can currently think of, if you show me a 
wifi driver that implements a V4L2 interface I might change my mind :-)).

The V4L2 API has been designed mostly for desktop hardware. Thanks to its 
clean design we can use it for embedded hardware, even though it requires 
extensions. What we need to do is to define which parts of the whole API apply 
as-is to embedded hardware, and which should better be left unused.

> >>> Allowing the bridge driver to configure subdevs at all times would
> >>> prevent the subdev/MC API to work.
> >> 
> >> Well, then we need to think on an alternative for that. It seems an
> >> interesting theme for the media workshop at the Kernel Summit/2011.
> >> 
> >>>>> There is a conflict there that in order to use
> >>>>> 'optional' API the 'main' API behaviour must be affected....
> >>>> 
> >>>> It is optional from userspace perspective. A V4L2-only application
> >>>> should be able to work with all drivers. However, a MC-aware
> >>>> application will likely be specific for some hardware, as it will need
> >>>> to know some device-specific stuff.
> >>>> 
> >>>> Both kinds of applications are welcome, but dropping support for
> >>>> V4L2-only applications is the wrong thing to do.
> >>>> 
> >>>>> And I really cant use V4L2 API only as is because it's too limited.
> >>>> 
> >>>> Why?
> >>> 
> >>> For instance there is really yet no support for scaler and composition
> >>> onto a target buffer in the Video Capture Interface (we also use
> >>> sensors with built in scalers). It's difficult to efficiently manage
> >>> capture/preview pipelines. It is impossible to discover the system
> >>> topology.
> >> 
> >> Scaler were always supported by V4L2: if the resolution specified by
> >> S_FMT is not what the sensor provides, then scale. All non-embedded
> >> drivers with sensor or bridge supports scale does that.
> >> 
> >> Composition is not properly supported yet. It could make sense to add it
> >> to V4L. How do you think MC API would help with composite?
> >> 
> >> Managing capture/preview pipelines will require some support at V4L2
> >> level. This is a problem that needs to be addressed there anyway, as
> >> buffers for preview/capture need to be allocated. There's an RFC about
> >> that, but I don't think it covers the pipelines for it.
> > 
> > Managing pipelines is a policy decision, and needs to be implemented in
> > userspace. Once again, the solution here is libv4l.
> 
> If V4L2 API is not enough, implementing it on libv4l won't solve, as
> userspace apps will use V4L2 API for requresting it.

We need to let userspace configure the pipeline. That's what the MC + V4L2
APIs are for. The driver must not make policy decisions though; that must be
left to userspace.
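
Configuring the pipeline from userspace then boils down to a few MC ioctls;
a trimmed sketch (the entity IDs are board-specific assumptions and would
normally be discovered with MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS
first):

/* Sketch: userspace, not the driver, decides which link is active. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
    int fd = open("/dev/media0", O_RDWR);
    struct media_link_desc link;

    if (fd < 0)
        return 1;
    memset(&link, 0, sizeof(link));
    link.source.entity = 16;             /* assumed: sensor entity id */
    link.source.index = 0;               /* sensor source pad */
    link.sink.entity = 5;                /* assumed: ISP input entity id */
    link.sink.index = 0;                 /* ISP sink pad */
    link.flags = MEDIA_LNK_FL_ENABLED;
    return ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
}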

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-10  0:22                 ` Mauro Carvalho Chehab
  2011-08-10  8:41                   ` Sylwester Nawrocki
@ 2011-08-15 12:45                   ` Laurent Pinchart
  2011-08-16  0:21                     ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-15 12:45 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, hverkuil

Hi Mauro,

On Wednesday 10 August 2011 02:22:00 Mauro Carvalho Chehab wrote:
> Em 09-08-2011 20:18, Sakari Ailus escreveu:
> > On Tue, Aug 09, 2011 at 05:05:47PM -0300, Mauro Carvalho Chehab wrote:
> >> Em 29-07-2011 05:36, Laurent Pinchart escreveu:
> >>> On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
> >>>> Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
> >>>>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:

[snip]

> >>> Those controls are also quite useless for generic V4L2 applications,
> >>> which will want auto-exposure anyway. This needs to be implemented in
> >>> userspace in libv4l.
> >> 
> >> Several webcams export exposure controls. Why shouldn't those controls
> >> be exposed to userspace anymore?
> > 
> > This is not a webcam,
> 
> I know, but the analogy is still valid.
> 
> > it is a software controlled high end digital camera on
> > a mobile device. The difference is that the functionality offered by the
> > hardware is at much lower level compared to a webcam; the result is that
> > more detailed control ia required but also much flexibility and
> > performance is gained.
> 
> I see. What I failed to see is why to remove control from userspace. If the
> hardware is more powerful, I expect to see more controls exported, and not
> removing the V4L2 functionality from the driver.

We're not trying to remove the controls. We expose them differently. That's a 
big difference :-)

> >> Ok, if the hardware won't support 3A algorithm, libv4l will implement
> >> it, eventually using an extra hardware-aware code to get the best
> >> performance for that specific device, but this doesn't mean that the
> >> user should always use it.
> > 
> > Why not? What would be the alternative?
> 
> User may want or need to disable the 3A algo and control some hardware
> parameters hardware directly, for example, to take an over-exposed
> picture, to create some neat effects, or to get some image whose exposure
> aperture/time is out of the 3A range.
> 
> >> Btw, the 3A algorithm is one of the things I don't like on my cell
> >> phone: while it works most of the time, sometimes I want to disable it
> >> and manually adjust, as it produces dark images, when there's a very
> >> bright light somewhere on the image background. Manually adjusting the
> >> exposure time and aperture is something relevant for some users.
> > 
> > You do want the 3A algorithms even if you use manual white balance. What
> > the automatic white balance algorithm produces is (among other things)
> > gamma tables, rgb-to-rgb conversion matrices and lens shading correction
> > tables. I doubt any end user, even if it was you, would like to fiddle
> > with such large tables directly when capturing photos.
> 
> There are some hacks for several professional and amateur cameras that
> replace the existing 3A algorithms by... NONE. The idea is to get the raw
> data directly from the sensor, and use some software like Gimp or
> Photoshop to do lens correction, temperature correction, whitespace
> ballance, etc, at post-processing. The advantage of such type of usage is
> that the photographer can fine-tune the generated image to produce what he
> wants, using more sophisticated (and not real-time) algorithms.
> 
> [1] for example, one of such firmwares, that I use on my Canon Digital
> Camera is available at:
>     http://chdk.wikia.com/wiki/CHDK

That's something you can very easily do with

http://git.ideasonboard.org/?p=media-ctl.git;a=summary

to configure the pipeline and

http://git.ideasonboard.org/?p=yavta.git;a=summary

to set controls and capture video. The latter uses standard V4L2 ioctls only,
even to set controls on subdevs.
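
Underneath, media-ctl is nothing more than the MC ioctls; a stripped-down
sketch of the entity enumeration it performs before configuring links:

/* Sketch: walk the media graph the way media-ctl does. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
    int fd = open("/dev/media0", O_RDWR);
    struct media_entity_desc entity;
    __u32 id = 0;

    if (fd < 0)
        return 1;
    while (1) {
        memset(&entity, 0, sizeof(entity));
        entity.id = id | MEDIA_ENT_ID_FLAG_NEXT;
        if (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) < 0)
            break;
        printf("entity %u: %s (%u pads, %u links)\n",
               entity.id, entity.name, entity.pads, entity.links);
        id = entity.id;
    }
    return 0;
}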

> > The size of this configuration could
> > easily be around 10 kB. A higher level control is required; colour
> > temperature, for instance. And its implementation involves the same
> > automatic white balance algorithm.
> > 
> > You must know your hardware very, very well to use the aforementioned low
> > level controls and in such a case you have no reason not to use the MC
> > interface to configure the device either. Configuring the device image
> > pipe using MC is actually a number of magnitudes less complex, and I say
> > it's been quite an achievement that we have such an interface which
> > makes it so effortless to do.
> 
> For sure, such kind of controls that exposes the 3A correction algorithm at
> the DSP level shouldn't be exposed via V4L2 interface, but things like
> disabling 3A and manually controlling the sensor, like aperture, exposure,
> analog zoom, etc, makes sense to be exported.

Drivers can't export 3A enable/disable controls, as 3A is implemented in
userspace. All manual controls are exported on subdev nodes, so there's no
issue with that. Any application (including the 3A implementation in libv4l)
can use them.

> >>>>> Also there might be some preference at user space, at which stage of
> >>>>> the pipeline to apply some controls. This is where the subdev API
> >>>>> helps, and plain video node API does not.
> >>>> 
> >>>> Again, this is for specific usecases. On such cases, what is expected
> >>>> is that the more generic control will be exported via V4L2 API.
> >>>> 
> >>>>>>> Thus it's a bit hard to imagine that we could do something like
> >>>>>>> "optionally not to inherit controls" as the subdev/MC API is
> >>>>>>> optional.
> >>>>>>> 
> >>>>>>> :)
> >>>>>> 
> >>>>>> This was actually implemented. There are some cases at ivtv/cx18
> >>>>>> driver where both the bridge and a subdev provides the same control
> >>>>>> (audio volume, for example). The idea is to allow the bridge driver
> >>>>>> to touch at the subdev control without exposing it to userspace,
> >>>>>> since the desire was that the bridge driver itself would expose
> >>>>>> such control, using a logic that combines changing the subdev and
> >>>>>> the bridge registers for volume.
> >>>>> 
> >>>>> This seem like hard coding a policy in the driver;) Then there is no
> >>>>> way (it might not be worth the effort though) to play with volume
> >>>>> level at both devices, e.g. to obtain optimal S/N ratio.
> >>>> 
> >>>> In general, playing with just one control is enough. Andy had a
> >>>> different opinion when this issue were discussed, and he thinks that
> >>>> playing with both is better. At the end, this is a developers
> >>>> decision, depending on how much information (and bug reports) he had.
> >>> 
> >>> ivtv/cx18 is a completely different use case, where hardware
> >>> configurations are known, and experiments possible to find out which
> >>> control(s) to use and how. In this case you can't know in advance what
> >>> the sensor and host drivers will be used for.
> >> 
> >> Why not? I never saw an embedded hardware that allows physically
> >> changing the sensor.
> >> 
> >>> Even if you did, fine image quality tuning requires
> >>> accessing pretty much all controls individually anyway.
> >> 
> >> The same is also true for non-embedded hardware. The only situation
> >> where V4L2 API is not enough is when there are two controls of the same
> >> type active. For example, 2 active volume controls, one at the audio
> >> demod, and another at the bridge. There may have some cases where you
> >> can do the same thing at the sensor or at a DSP block. This is where MC
> >> API gives an improvement, by allowing changing both, instead of just
> >> one of the controls.
> > 
> > This may be true on non-embedded hardware. It's important to know which
> > hardware component implements a particular control; for example digital
> > gain typically would take longer to have an effect if it is set on
> > sensor rather than on the ISP. Also, is you set digital gain, you want
> > to set it on the same device where your analog gain is --- the sensor
> > --- to avoid badly exposed imagws.
> 
> I'd say that the driver should expose the hardware control, if 3A is
> disabled.

Regardless of the software 3A state, drivers will expose all controls.

> > When it comes to scaling, the scaling quality, power consumption and
> > performance may well be very different depending on where it is done.
> > There typically are data rate limitations at different parts of the
> > pipeline. The plain V4L2 has never been meant for this nor provides any
> > support when doing something like above, and these are just few
> > examples.
> 
> I see your point.
> 
> >>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>>>> situation with the userspace subdevs is a bit different. Because with
> >>>>> one API we directly expose some functionality for applications, with
> >>>>> other we code it in the kernel, to make the devices appear uniform
> >>>>> at user space.
> >>>> 
> >>>> Not sure if I understood you. V4L2 export drivers functionality to
> >>>> userspace in an uniform way. MC api is for special applications that
> >>>> might need to access some internal functions on embedded devices.
> >>>> 
> >>>> Of course, there are some cases where it doesn't make sense to export
> >>>> a subdev control via V4L2 API.
> >>>> 
> >>>>>>> Also, the sensor subdev can be configured in the video node driver
> >>>>>>> as well as through the subdev device node. Both APIs can do the
> >>>>>>> same thing but in order to let the subdev API work as expected the
> >>>>>>> video node driver must be forbidden to configure the subdev.
> >>>>>> 
> >>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
> >>>>>> driver call. The subdev will need a mutex anyway, as two MC
> >>>>>> applications may be opening it simultaneously. I can't see why it
> >>>>>> should forbid changing the control from the bridge driver call.
> >>>>> 
> >>>>> Please do not forget there might be more than one subdev to configure
> >>>>> and that the bridge itself is also a subdev (which exposes a scaler
> >>>>> interface, for instance). A situation pretty much like in Figure 4.4
> >>>>> [1] (after the scaler there is also a video node to configure, but
> >>>>> we may assume that pixel resolution at the scaler pad 1 is same as
> >>>>> at the video node). Assuming the format and crop configuration flow
> >>>>> is from sensor to host scaler direction, if we have tried to
> >>>>> configure _all_ subdevs when the last stage of the pipeline is
> >>>>> configured (i.e. video node) the whole scaler and crop/composition
> >>>>> configuration would have been destroyed at that time. And there is more
> >>>>> to configure than VIDIOC_S_FMT can do.
> >>>> 
> >>>> Think from users perspective: all user wants is to see a video of a
> >>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
> >>>> everything that the user wants: the desired resolution, framerate and
> >>>> format.
> >>>> 
> >>>> Specialized applications indeed need more, in order to get the best
> >>>> images for certain types of usages. So, MC is there.
> >>>> 
> >>>> Such applications will probably need to know exactly what's the
> >>>> sensor, what are their bugs, how it is connected, what are the DSP
> >>>> blocks in the path, how the DSP algorithms are implemented, etc, in
> >>>> order to obtain the perfect image.
> >>>> 
> >>>> Even on embedded devices like smartphones and tablets, I predict that
> >>>> both types of applications will be developed and used: people may use
> >>>> a generic application like flash player, and a specialized
> >>>> application provided by the manufacturer. Users can even develop
> >>>> their own generic applications using V4L2 directly, on the
> >>>> devices that allow that.
> >>>> 
> >>>> As I said before: both application types are welcome. We just need to
> >>>> warrant that a pure V4L application will work reasonably well.
> >>> 
> >>> That's why we have libv4l. The driver simply doesn't receive enough
> >>> information to configure the hardware correctly from the VIDIOC_*
> >>> calls. And as mentioned above, 3A algorithms, required by "simple"
> >>> V4L2 applications, need to be implemented in userspace anyway.
> >> 
> >> It is OK to improve users experience via libv4l. What I'm saying is that
> >> it is NOT OK to remove V4L2 API support from the driver, forcing users
> >> to use some hardware plugin at libv4l.
> > 
> > Either you know your hardware or you do not. General purpose
> > applications can rely on functionality provided by libv4l, but if you do
> > not use it, then you need to configure the underlying device. Which is
> > something where the Media controller and v4l2_subdev interfaces are
> > tremendously useful.
> 
> Agreed, but while we don't actually have libv4l hw-specific plugins
> committed at the v4l-utils git tree, all we have to warrant that the
> hardware will work with a generic userspace application is the V4L2 API.
> 
> >>>>> Allowing the bridge driver to configure subdevs at all times would
> >>>>> prevent the subdev/MC API to work.
> >>>> 
> >>>> Well, then we need to think on an alternative for that. It seems an
> >>>> interesting theme for the media workshop at the Kernel Summit/2011.
> >>>> 
> >>>>>>> There is a conflict there that in order to use
> >>>>>>> 'optional' API the 'main' API behaviour must be affected....
> >>>>>> 
> >>>>>> It is optional from userspace perspective. A V4L2-only application
> >>>>>> should be able to work with all drivers. However, a MC-aware
> >>>>>> application will likely be specific for some hardware, as it will
> >>>>>> need to know some device-specific stuff.
> >>>>>> 
> >>>>>> Both kinds of applications are welcome, but dropping support for
> >>>>>> V4L2-only applications is the wrong thing to do.
> >>>>>> 
> >>>>>>> And I really cant use V4L2 API only as is because it's too limited.
> >>>>>> 
> >>>>>> Why?
> >>>>> 
> >>>>> For instance there is really yet no support for scaler and
> >>>>> composition onto a target buffer in the Video Capture Interface (we
> >>>>> also use sensors with built in scalers). It's difficult to
> >>>>> efficiently manage capture/preview pipelines. It is impossible to
> >>>>> discover the system topology.
> >>>> 
> >>>> Scalers were always supported by V4L2: if the resolution specified by
> >>>> S_FMT is not what the sensor provides, then scale. All non-embedded
> >>>> drivers whose sensor or bridge supports scaling do that.
> >>>> 
> >>>> Composition is not properly supported yet. It could make sense to add
> >>>> it to V4L. How do you think MC API would help with composite?
> >>>> 
> >>>> Managing capture/preview pipelines will require some support at V4L2
> >>>> level. This is a problem that needs to be addressed there anyway, as
> >>>> buffers for preview/capture need to be allocated. There's an RFC
> >>>> about that, but I don't think it covers the pipelines for it.
> >>> 
> >>> Managing pipelines is a policy decision, and needs to be implemented in
> >>> userspace. Once again, the solution here is libv4l.
> >> 
> >> If the V4L2 API is not enough, implementing it in libv4l won't solve it, as
> >> userspace apps will use the V4L2 API to request it.
> > 
> > There are two kind of applications: specialised and generic. The generic
> > ones may rely on restrictive policies put in place by a libv4l plugin
> > whereas the specialised applications need to access the device's features
> > directly to get the most out of it.
> 
> A submitted upstream driver should be capable of working with the existing
> tools/userspace.
> 
> Currently, there aren't such libv4l plugins (or, at least, I failed to see a
> merged plugin there for N9, S5P, etc). Let's not upstream new drivers or
> remove functionalities from already existing drivers based on something
> that has yet to be developed.
> 
> After having it there properly working and tested independently, we may
> consider patches removing V4L2 interfaces that were obsoleted in favor of
> using the libv4l implementation, of course using the Kernel way of
> deprecating interfaces. But doing it before having it, doesn't make any
> sense.

Once again we're not trying to remove controls, but expose them differently. 

> Let's not put the cart before the horse.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-15 12:30               ` Laurent Pinchart
@ 2011-08-16  0:13                 ` Mauro Carvalho Chehab
  2011-08-16  8:57                   ` Laurent Pinchart
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-16  0:13 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

On 15-08-2011 09:30, Laurent Pinchart wrote:
> Hi Mauro,
> 
> On Tuesday 09 August 2011 22:05:47 Mauro Carvalho Chehab wrote:
>> On 29-07-2011 05:36, Laurent Pinchart wrote:
>>> On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
>>>> On 28-07-2011 19:57, Sylwester Nawrocki wrote:
>>>>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
>>>>>> Accumulating sub-dev controls at the video node is the right thing to
>>>>>> do.
>>>>>>
>>>>>> An MC-aware application will need to handle with that, but that
>>>>>> doesn't sound to be hard. All such application would need to do is to
>>>>>> first probe the subdev controls, and, when parsing the videodev
>>>>>> controls, not register controls with duplicated ID's, or to mark them
>>>>>> with some special attribute.
>>>>>
>>>>> IMHO it's not a big issue in general. Still, both subdev and the host
>>>>> device may support same control id. And then even though the control
>>>>> ids are same on the subdev and the host they could mean physically
>>>>> different controls (since when registering a subdev at the host driver
>>>>> the host's controls take precedence and doubling subdev controls are
>>>>> skipped).
>>>>
>>>> True, but, except for specific usecases, the host control is enough.
>>>
>>> Not for embedded devices. In many case the control won't even be
>>> implemented by the host. If your system has two sensors connected to the
>>> same host, they will both expose an exposure time control. Which one do
>>> you configure through the video node ? The two sensors will likely have
>>> different bounds for the same control, how do you report that ?
>>
>> If the device has two sensors that are mutually exclusive, they should be
>> mapped as two different inputs. The control shown should be the one used
>> by the currently active input.
>>
>> If the sensors aren't mutually exclusive, then two different video nodes
>> will be shown in userspace.
> 
> It's more complex than that. The OMAP3 ISP driver exposes 7 video nodes 
> regardless of the number of sensors. Sensors can be mutually exclusive or not, 
> depending on the board. S_INPUT has its use cases, but is less useful on 
> embedded hardware.

Sorry, but exposing a video node that can't be used doesn't make sense.

>>> Those controls are also quite useless for generic V4L2 applications,
>>> which will want auto-exposure anyway. This needs to be implemented in
>>> userspace in libv4l.
>>
>> Several webcams export exposure controls. Why shouldn't those controls be
>> exposed to userspace anymore?
>>
>> Ok, if the hardware won't support 3A algorithm, libv4l will implement it,
>> eventually using an extra hardware-aware code to get the best performance
>> for that specific device, but this doesn't mean that the user should always
>> use it.
>>
>> Btw, the 3A algorithm is one of the things I don't like on my cell phone:
>> while it works most of the time, sometimes I want to disable it and
>> manually adjust, as it produces dark images, when there's a very bright
>> light somewhere on the image background. Manually adjusting the exposure
>> time and aperture is something relevant for some users.
> 
> It is, but on embedded devices that usually requires the application to be 
> hardware-aware. Exposure time limits depend on blanking, which in turn 
> influences the frame rate along with the pixel clock (often configurable as 
> well). Programming those settings wrong can exceed the ISP available 
> bandwidth.
> 
> The world unfortunately stopped being simple some time ago :-)
> 
>>>>> Also there might be some preference at user space, at which stage of
>>>>> the pipeline to apply some controls. This is where the subdev API
>>>>> helps, and plain video node API does not.
>>>>
>>>> Again, this is for specific usecases. On such cases, what is expected is
>>>> that the more generic control will be exported via V4L2 API.
>>>>
>>>>>>> Thus it's a bit hard to imagine that we could do something like
>>>>>>> "optionally not to inherit controls" as the subdev/MC API is
>>>>>>> optional.
>>>>>>>
>>>>>>> :)
>>>>>>
>>>>>> This was actually implemented. There are some cases at ivtv/cx18
>>>>>> driver where both the bridge and a subdev provides the same control
>>>>>> (audio volume, for example). The idea is to allow the bridge driver
>>>>>> to touch at the subdev control without exposing it to userspace,
>>>>>> since the desire was that the bridge driver itself would expose such
>>>>>> control, using a logic that combines changing the subdev and the
>>>>>> bridge registers for volume.
>>>>>
>>>>> This seem like hard coding a policy in the driver;) Then there is no
>>>>> way (it might not be worth the effort though) to play with volume
>>>>> level at both devices, e.g. to obtain optimal S/N ratio.
>>>>
>>>> In general, playing with just one control is enough. Andy had a
>>>> different opinion when this issue were discussed, and he thinks that
>>>> playing with both is better. At the end, this is a developers decision,
>>>> depending on how much information (and bug reports) he had.
>>>
>>> ivtv/cx18 is a completely different use case, where hardware
>>> configurations are known, and experiments possible to find out which
>>> control(s) to use and how. In this case you can't know in advance what
>>> the sensor and host drivers will be used for.
>>
>> Why not?
> 
> My point is that the ISP driver developer can't know in advance which sensor 
> will be used on systems that don't exist yet.

As soon as such hardware is designed, someone will know. It is a simple,
trivial patch to associate new hardware with a hardware profile in the
platform data.

Also, on most cases, probing a sensor is as trivial as reading a sensor
ID during device probe. This applies, for example, for all Omnivision
sensors.

We do things like that all the time in the PC world, as nobody knows which
webcam someone will plug into his PC.
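
As a rough sketch of what such probing can look like on the kernel side (the
register address and expected ID value below are made up, and real sensors
differ in register width and layout):

  #include <linux/errno.h>
  #include <linux/i2c.h>

  /* Illustrative only: detect a sensor by reading its ID register at probe time. */
  #define SENSOR_REG_CHIP_ID    0x0a    /* hypothetical register */
  #define SENSOR_CHIP_ID        0x56    /* hypothetical expected value */

  static int sensor_detect(struct i2c_client *client)
  {
      int id = i2c_smbus_read_byte_data(client, SENSOR_REG_CHIP_ID);

      if (id < 0)
          return id;
      return id == SENSOR_CHIP_ID ? 0 : -ENODEV;
  }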

> 
>> I never saw an embedded hardware that allows physically changing the sensor.
> 
> Beagleboard + pluggable sensor board.

Development systems like beagleboard, pandaboard, Exynos SMDK, etc, aren't
embedded hardware. They're development kits. I don't mind if, for those kits,
the developer that is playing with it has to pass a mode parameter and/or
run some open hardware-aware small application that makes the driver select
the sensor type he is using, but, if the hardware is, instead, an N9 or a
Galaxy Tab (or whatever embedded hardware), the driver should expose just
the sensors that exist on such hardware. It should never be allowed to
change it from userspace, using whatever API, on such hardware.

>>> Even if you did, fine image quality tuning requires accessing pretty much
>>> all controls individually anyway.
>>
>> The same is also true for non-embedded hardware. The only situation where
>> V4L2 API is not enough is when there are two controls of the same type
>> active. For example, 2 active volume controls, one at the audio demod, and
>> another at the bridge. There may have some cases where you can do the same
>> thing at the sensor or at a DSP block. This is where MC API gives an
>> improvement, by allowing changing both, instead of just one of the
>> controls.
> 
> To be precise it's the V4L2 subdev userspace API that allows that, not the MC 
> API.
> 
>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>> situation with the userspace subdevs is a bit different. Because with
>>>>> one API we directly expose some functionality for applications, with
>>>>> other we code it in the kernel, to make the devices appear uniform at
>>>>> user space.
>>>>
>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>> userspace in an uniform way. MC api is for special applications that
>>>> might need to access some internal functions on embedded devices.
>>>>
>>>> Of course, there are some cases where it doesn't make sense to export a
>>>> subdev control via V4L2 API.
>>>>
>>>>>>> Also, the sensor subdev can be configured in the video node driver as
>>>>>>> well as through the subdev device node. Both APIs can do the same
>>>>>>> thing but in order to let the subdev API work as expected the video
>>>>>>> node driver must be forbidden to configure the subdev.
>>>>>>
>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
>>>>>> driver call. The subdev will need a mutex anyway, as two MC
>>>>>> applications may be opening it simultaneously. I can't see why it
>>>>>> should forbid changing the control from the bridge driver call.
>>>>>
>>>>> Please do not forget there might be more than one subdev to configure
>>>>> and that the bridge itself is also a subdev (which exposes a scaler
>>>>> interface, for instance). A situation pretty much like in Figure 4.4
>>>>> [1] (after the scaler there is also a video node to configure, but we
>>>>> may assume that pixel resolution at the scaler pad 1 is same as at the
>>>>> video node). Assuming the format and crop configuration flow is from
>>>>> sensor to host scaler direction, if we have tried to configure _all_
>>>>> subdevs when the last stage of the pipeline is configured (i.e. video
>>>>> node) the whole scaler and crop/composition configuration would have been
>>>>> destroyed at that time. And there is more to configure than
>>>>> VIDIOC_S_FMT can do.
>>>>
>>>> Think from users perspective: all user wants is to see a video of a
>>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
>>>> everything that the user wants: the desired resolution, framerate and
>>>> format.
>>>>
>>>> Specialized applications indeed need more, in order to get the best
>>>> images for certain types of usages. So, MC is there.
>>>>
>>>> Such applications will probably need to know exactly what's the sensor,
>>>> what are their bugs, how it is connected, what are the DSP blocks in the
>>>> path, how the DSP algorithms are implemented, etc, in order to obtain
>>>> the perfect image.
>>>>
>>>> Even on embedded devices like smartphones and tablets, I predict that
>>>> both types of applications will be developed and used: people may use a
>>>> generic application like flash player, and a specialized application
>>>> provided by the manufacturer. Users can even develop their own
>>>> generic applications using V4L2 directly, on the devices that
>>>> allow that.
>>>>
>>>> As I said before: both application types are welcome. We just need to
>>>> warrant that a pure V4L application will work reasonably well.
>>>
>>> That's why we have libv4l. The driver simply doesn't receive enough
>>> information to configure the hardware correctly from the VIDIOC_* calls.
>>> And as mentioned above, 3A algorithms, required by "simple" V4L2
>>> applications, need to be implemented in userspace anyway.
>>
>> It is OK to improve users experience via libv4l. What I'm saying is that it
>> is NOT OK to remove V4L2 API support from the driver, forcing users to use
>> some hardware plugin at libv4l.
> 
> Let me be clear on this. I'm *NOT* advocating removing V4L2 API support from 
> any driver (well, on the drivers I can currently think of, if you show me a 
> wifi driver that implements a V4L2 interface I might change my mind :-)).

This thread is all about a patch series partially removing V4L2 API support.

> The V4L2 API has been designed mostly for desktop hardware. Thanks to its 
> clean design we can use it for embedded hardware, even though it requires 
> extensions. What we need to do is to define which parts of the whole API apply 
> as-is to embedded hardware, and which should better be left unused.

Agreed. I'm all in favor of discussing this topic. However, before we have such
discussions and come to a common understanding, I won't take any patch removing
V4L2 API support, nor will I accept any driver that won't allow it to be usable
by a V4L2 userspace application.

>>>>> Allowing the bridge driver to configure subdevs at all times would
>>>>> prevent the subdev/MC API to work.
>>>>
>>>> Well, then we need to think on an alternative for that. It seems an
>>>> interesting theme for the media workshop at the Kernel Summit/2011.
>>>>
>>>>>>> There is a conflict there that in order to use
>>>>>>> 'optional' API the 'main' API behaviour must be affected....
>>>>>>
>>>>>> It is optional from userspace perspective. A V4L2-only application
>>>>>> should be able to work with all drivers. However, a MC-aware
>>>>>> application will likely be specific for some hardware, as it will need
>>>>>> to know some device-specific stuff.
>>>>>>
>>>>>> Both kinds of applications are welcome, but dropping support for
>>>>>> V4L2-only applications is the wrong thing to do.
>>>>>>
>>>>>>> And I really cant use V4L2 API only as is because it's too limited.
>>>>>>
>>>>>> Why?
>>>>>
>>>>> For instance there is really yet no support for scaler and composition
>>>>> onto a target buffer in the Video Capture Interface (we also use
>>>>> sensors with built in scalers). It's difficult to efficiently manage
>>>>> capture/preview pipelines. It is impossible to discover the system
>>>>> topology.
>>>>
>>>> Scalers were always supported by V4L2: if the resolution specified by
>>>> S_FMT is not what the sensor provides, then scale. All non-embedded
>>>> drivers whose sensor or bridge supports scaling do that.
>>>>
>>>> Composition is not properly supported yet. It could make sense to add it
>>>> to V4L. How do you think MC API would help with composite?
>>>>
>>>> Managing capture/preview pipelines will require some support at V4L2
>>>> level. This is a problem that needs to be addressed there anyway, as
>>>> buffers for preview/capture need to be allocated. There's an RFC about
>>>> that, but I don't think it covers the pipelines for it.
>>>
>>> Managing pipelines is a policy decision, and needs to be implemented in
>>> userspace. Once again, the solution here is libv4l.
>>
>> If the V4L2 API is not enough, implementing it in libv4l won't solve it, as
>> userspace apps will use the V4L2 API to request it.
> 
> We need to let userspace configure the pipeline. That's what the MC + V4L2 
> APIs are for. The driver must not make policy decisions though, that must be
> left to userspace.

Showing only the hardware that is supported by an embedded device is not a
policy decision. It is hardware detection or platform data configuration. 
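
To illustrate what "platform data configuration" means here (this is a made-up
structure for the sake of the example, not the actual s5p-fimc platform data
layout), the board file simply describes the sensors that are physically wired
up, and only those:

  /* Hypothetical board description: list only what really exists on the board. */
  struct board_camera_info {
      const char *sensor_name;    /* e.g. "noon010pc30" */
      unsigned short i2c_addr;    /* placeholder address */
      int csi_channel;
  };

  static struct board_camera_info board_cameras[] = {
      { .sensor_name = "noon010pc30", .i2c_addr = 0x30, .csi_channel = 0 },
  };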

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-15 12:45                   ` Laurent Pinchart
@ 2011-08-16  0:21                     ` Mauro Carvalho Chehab
  2011-08-16  8:59                       ` Laurent Pinchart
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-16  0:21 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, hverkuil

On 15-08-2011 05:45, Laurent Pinchart wrote:
> Hi Mauro,

>> After having it there properly working and tested independently, we may
>> consider patches removing V4L2 interfaces that were obsoleted in favor of
>> using the libv4l implementation, of course using the Kernel way of
>> deprecating interfaces. But doing it before having it, doesn't make any
>> sense.
> 
> Once again we're not trying to remove controls, but expose them differently. 

See the comments at the patch series, starting from the patches that remove
support for S_INPUT. I'm aware that the controls will be exposed by both
MC and V4L2 API, although having ways to expose/hide controls via the V4L2 API
makes patch review way more complicated than it used to be before the MC
patches.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16  0:13                 ` Mauro Carvalho Chehab
@ 2011-08-16  8:57                   ` Laurent Pinchart
  2011-08-16 15:30                     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-16  8:57 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Hi Mauro,

On Tuesday 16 August 2011 02:13:00 Mauro Carvalho Chehab wrote:
> On 15-08-2011 09:30, Laurent Pinchart wrote:
> > On Tuesday 09 August 2011 22:05:47 Mauro Carvalho Chehab wrote:
>> On 29-07-2011 05:36, Laurent Pinchart wrote:
> >>> On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
>>>> On 28-07-2011 19:57, Sylwester Nawrocki wrote:
> >>>>> On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
> >>>>>> Accumulating sub-dev controls at the video node is the right thing
> >>>>>> to do.
> >>>>>> 
> >>>>>> An MC-aware application will need to handle with that, but that
> >>>>>> doesn't sound to be hard. All such application would need to do is
> >>>>>> to first probe the subdev controls, and, when parsing the videodev
> >>>>>> controls, not register controls with duplicated ID's, or to mark
> >>>>>> them with some special attribute.
> >>>>> 
> >>>>> IMHO it's not a big issue in general. Still, both subdev and the host
> >>>>> device may support same control id. And then even though the control
> >>>>> ids are same on the subdev and the host they could mean physically
> >>>>> different controls (since when registering a subdev at the host
> >>>>> driver the host's controls take precedence and doubling subdev
> >>>>> controls are skipped).
> >>>> 
> >>>> True, but, except for specific usecases, the host control is enough.
> >>> 
> >>> Not for embedded devices. In many case the control won't even be
> >>> implemented by the host. If your system has two sensors connected to
> >>> the same host, they will both expose an exposure time control. Which
> >>> one do you configure through the video node ? The two sensors will
> >>> likely have different bounds for the same control, how do you report
> >>> that ?
> >> 
> >> If the device has two sensors that are mutually exclusive, they should
> >> be mapped as two different inputs. The control shown should be the one
> >> used by the currently active input.
> >> 
> >> If the sensors aren't mutually exclusive, then two different video nodes
> >> will be shown in userspace.
> > 
> > It's more complex than that. The OMAP3 ISP driver exposes 7 video nodes
> > regardless of the number of sensors. Sensors can be mutually exclusive or
> > not, depending on the board. S_INPUT has its use cases, but is less
> > useful on embedded hardware.
> 
> Sorry, but exposing a video node that can't be used doesn't make sense.

All those 7 video nodes can be used, depending on userspace's needs.

> >>> Those controls are also quite useless for generic V4L2 applications,
> >>> which will want auto-exposure anyway. This needs to be implemented in
> >>> userspace in libv4l.
> >> 
> >> Several webcams export exposure controls. Why shouldn't those controls
> >> be exposed to userspace anymore?
> >> 
> >> Ok, if the hardware won't support 3A algorithm, libv4l will implement
> >> it, eventually using an extra hardware-aware code to get the best
> >> performance for that specific device, but this doesn't mean that the
> >> user should always use it.
> >> 
> >> Btw, the 3A algorithm is one of the things I don't like on my cell
> >> phone: while it works most of the time, sometimes I want to disable it
> >> and manually adjust, as it produces dark images, when there's a very
> >> bright light somewhere on the image background. Manually adjusting the
> >> exposure time and aperture is something relevant for some users.
> > 
> > It is, but on embedded devices that usually requires the application to
> > be hardware-aware. Exposure time limits depend on blanking, which in
> > turn influences the frame rate along with the pixel clock (often
> > configurable as well). Programming those settings wrong can exceed the
> > ISP available bandwidth.
> > 
> > The world unfortunately stopped being simple some time ago :-)
> > 
> >>>>> Also there might be some preference at user space, at which stage of
> >>>>> the pipeline to apply some controls. This is where the subdev API
> >>>>> helps, and plain video node API does not.
> >>>> 
> >>>> Again, this is for specific usecases. On such cases, what is expected
> >>>> is that the more generic control will be exported via V4L2 API.
> >>>> 
> >>>>>>> Thus it's a bit hard to imagine that we could do something like
> >>>>>>> "optionally not to inherit controls" as the subdev/MC API is
> >>>>>>> optional.
> >>>>>>> 
> >>>>>>> :)
> >>>>>> 
> >>>>>> This was actually implemented. There are some cases at ivtv/cx18
> >>>>>> driver where both the bridge and a subdev provides the same control
> >>>>>> (audio volume, for example). The idea is to allow the bridge driver
> >>>>>> to touch at the subdev control without exposing it to userspace,
> >>>>>> since the desire was that the bridge driver itself would expose such
> >>>>>> control, using a logic that combines changing the subdev and the
> >>>>>> bridge registers for volume.
> >>>>> 
> >>>>> This seem like hard coding a policy in the driver;) Then there is no
> >>>>> way (it might not be worth the effort though) to play with volume
> >>>>> level at both devices, e.g. to obtain optimal S/N ratio.
> >>>> 
> >>>> In general, playing with just one control is enough. Andy had a
> >>>> different opinion when this issue were discussed, and he thinks that
> >>>> playing with both is better. At the end, this is a developers
> >>>> decision, depending on how much information (and bug reports) he had.
> >>> 
> >>> ivtv/cx18 is a completely different use case, where hardware
> >>> configurations are known, and experiments possible to find out which
> >>> control(s) to use and how. In this case you can't know in advance what
> >>> the sensor and host drivers will be used for.
> >> 
> >> Why not?
> > 
> > My point is that the ISP driver developer can't know in advance which
> > sensor will be used on systems that don't exist yet.
> 
> As soon as such hardware is designed, someone will know. It is a simple,
> trivial patch to associate new hardware with a hardware profile in the
> platform data.

Platform data must contain hardware descriptions only, not policies. This is 
even clearer now that ARM is moving away from board code to the Device Tree.

> Also, on most cases, probing a sensor is as trivial as reading a sensor
> ID during device probe. This applies, for example, for all Omnivision
> sensors.
> 
> We do things like that all the time in the PC world, as nobody knows which
> webcam someone will plug into his PC.

Sorry, but that's not related. You simply can't decide in an embedded ISP 
driver how to deal with sensor controls, as the system will be used in a too 
wide variety of applications and hardware configurations. All controls need to 
be exposed, period.

> >> I never saw an embedded hardware that allows physically changing the
> >> sensor.
> > 
> > Beagleboard + pluggable sensor board.
> 
> Development systems like beagleboard, pandaboard, Exynos SMDK, etc, aren't
> embedded hardware. They're development kits.

People create end-user products based on those kits. That makes them first-
class embedded hardware like any other.

> I don't mind if, for those kits, the developer that is playing with it has to
> pass a mode parameter and/or run some open hardware-aware small application
> that makes the driver select the sensor type he is using, but, if the
> hardware is, instead, an N9 or a Galaxy Tab (or whatever embedded hardware),
> the driver should expose just the sensors that exist on such hardware. It
> should never be allowed to change it from userspace, using whatever API, on
> such hardware.

Who talked about changing sensors from userspace on those systems ? Platform 
data (regardless of whether it comes from board code, device tree, or 
something else) will contain a hardware description, and the kernel will 
create the right devices and load the right drivers. The issue we're 
discussing is how to expose controls for those devices to userspace, and that 
needs to be done through subdev nodes.
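
For what it's worth, here is a sketch of what that looks like from userspace
once a control is exposed on a subdev node; the device path and the control
value are placeholders:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  int main(void)
  {
      /* Hypothetical subdev node for the sensor. */
      int fd = open("/dev/v4l-subdev8", O_RDWR);
      struct v4l2_control ctrl = {
          .id = V4L2_CID_EXPOSURE,
          .value = 500,    /* placeholder value */
      };

      if (fd < 0)
          return 1;
      return ioctl(fd, VIDIOC_S_CTRL, &ctrl) ? 1 : 0;
  }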

> >>> Even if you did, fine image quality tuning requires accessing pretty
> >>> much all controls individually anyway.
> >> 
> >> The same is also true for non-embedded hardware. The only situation
> >> where V4L2 API is not enough is when there are two controls of the same
> >> type active. For example, 2 active volume controls, one at the audio
> >> demod, and another at the bridge. There may have some cases where you
> >> can do the same thing at the sensor or at a DSP block. This is where MC
> >> API gives an improvement, by allowing changing both, instead of just
> >> one of the controls.
> > 
> > To be precise it's the V4L2 subdev userspace API that allows that, not
> > the MC API.
> > 
> >>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>>>> situation with the userspace subdevs is a bit different. Because with
> >>>>> one API we directly expose some functionality for applications, with
> >>>>> other we code it in the kernel, to make the devices appear uniform at
> >>>>> user space.
> >>>> 
> >>>> Not sure if I understood you. V4L2 export drivers functionality to
> >>>> userspace in an uniform way. MC api is for special applications that
> >>>> might need to access some internal functions on embedded devices.
> >>>> 
> >>>> Of course, there are some cases where it doesn't make sense to export
> >>>> a subdev control via V4L2 API.
> >>>> 
> >>>>>>> Also, the sensor subdev can be configured in the video node driver
> >>>>>>> as well as through the subdev device node. Both APIs can do the
> >>>>>>> same thing but in order to let the subdev API work as expected the
> >>>>>>> video node driver must be forbidden to configure the subdev.
> >>>>>> 
> >>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
> >>>>>> driver call. The subdev will need a mutex anyway, as two MC
> >>>>>> applications may be opening it simultaneously. I can't see why it
> >>>>>> should forbid changing the control from the bridge driver call.
> >>>>> 
> >>>>> Please do not forget there might be more than one subdev to configure
> >>>>> and that the bridge itself is also a subdev (which exposes a scaler
> >>>>> interface, for instance). A situation pretty much like in Figure 4.4
> >>>>> [1] (after the scaler there is also a video node to configure, but we
> >>>>> may assume that pixel resolution at the scaler pad 1 is same as at
> >>>>> the video node). Assuming the format and crop configuration flow is
> >>>>> from sensor to host scaler direction, if we have tried to configure
> >>>>> _all_ subdevs when the last stage of the pipeline is configured
> >>>>> (i.e. video node) the whole scaler and crop/composition
> >>>>> configuration would have been destroyed at that time. And there is more
> >>>>> to configure than
> >>>>> VIDIOC_S_FMT can do.
> >>>> 
> >>>> Think from users perspective: all user wants is to see a video of a
> >>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
> >>>> everything that the user wants: the desired resolution, framerate and
> >>>> format.
> >>>> 
> >>>> Specialized applications indeed need more, in order to get the best
> >>>> images for certain types of usages. So, MC is there.
> >>>> 
> >>>> Such applications will probably need to know exactly what's the
> >>>> sensor, what are their bugs, how it is connected, what are the DSP
> >>>> blocks in the path, how the DSP algorithms are implemented, etc, in
> >>>> order to obtain the perfect image.
> >>>> 
> >>>> Even on embedded devices like smartphones and tablets, I predict that
> >>>> both types of applications will be developed and used: people may use
> >>>> a generic application like flash player, and a specialized
> >>>> application provided by the manufacturer. Users can even develop
> >>>> their own generic applications using V4L2 directly, on the
> >>>> devices that allow that.
> >>>> 
> >>>> As I said before: both application types are welcome. We just need to
> >>>> warrant that a pure V4L application will work reasonably well.
> >>> 
> >>> That's why we have libv4l. The driver simply doesn't receive enough
> >>> information to configure the hardware correctly from the VIDIOC_*
> >>> calls. And as mentioned above, 3A algorithms, required by "simple"
> >>> V4L2 applications, need to be implemented in userspace anyway.
> >> 
> >> It is OK to improve users experience via libv4l. What I'm saying is that
> >> it is NOT OK to remove V4L2 API support from the driver, forcing users
> >> to use some hardware plugin at libv4l.
> > 
> > Let me be clear on this. I'm *NOT* advocating removing V4L2 API support
> > from any driver (well, on the drivers I can currently think of, if you
> > show me a wifi driver that implements a V4L2 interface I might change my
> > mind :-)).
> 
> This thread is all about a patch series partially removing V4L2 API
> support.

Because that specific part of the API doesn't make sense for this use case. 
You wouldn't object to removing S_INPUT support from a video output driver, as 
it wouldn't make sense either.

> > The V4L2 API has been designed mostly for desktop hardware. Thanks to its
> > clean design we can use it for embedded hardware, even though it requires
> > extensions. What we need to do is to define which parts of the whole API
> > apply as-is to embedded hardware, and which should better be left
> > unused.
> 
> Agreed. I'm all in favor of discussing this topic. However, before we have
> such discussions and come to a common understanding, I won't take any
> patch removing V4L2 API support, nor will I accept any driver that won't
> allow it to be usable by a V4L2 userspace application.

A pure V4L2 userspace application, knowing about video device nodes only, can 
still use the driver. Not all advanced features will be available.
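
To illustrate what "can still use the driver" means, this is a trimmed-down
sketch of the pure-V4L2 path (only the format negotiation step; the node path
and resolution are placeholders):

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  int main(void)
  {
      int fd = open("/dev/video0", O_RDWR);    /* placeholder node */
      struct v4l2_format fmt;

      if (fd < 0)
          return 1;

      memset(&fmt, 0, sizeof(fmt));
      fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      fmt.fmt.pix.width = 640;
      fmt.fmt.pix.height = 480;
      fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
      fmt.fmt.pix.field = V4L2_FIELD_NONE;

      /* The driver may adjust these values; a generic app just accepts them. */
      return ioctl(fd, VIDIOC_S_FMT, &fmt) ? 1 : 0;
  }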

> >>>>> Allowing the bridge driver to configure subdevs at all times would
> >>>>> prevent the subdev/MC API to work.
> >>>> 
> >>>> Well, then we need to think on an alternative for that. It seems an
> >>>> interesting theme for the media workshop at the Kernel Summit/2011.
> >>>> 
> >>>>>>> There is a conflict there that in order to use
> >>>>>>> 'optional' API the 'main' API behaviour must be affected....
> >>>>>> 
> >>>>>> It is optional from userspace perspective. A V4L2-only application
> >>>>>> should be able to work with all drivers. However, a MC-aware
> >>>>>> application will likely be specific for some hardware, as it will
> >>>>>> need to know some device-specific stuff.
> >>>>>> 
> >>>>>> Both kinds of applications are welcome, but dropping support for
> >>>>>> V4L2-only applications is the wrong thing to do.
> >>>>>> 
> >>>>>>> And I really cant use V4L2 API only as is because it's too limited.
> >>>>>> 
> >>>>>> Why?
> >>>>> 
> >>>>> For instance there is really yet no support for scaler and
> >>>>> composition onto a target buffer in the Video Capture Interface (we
> >>>>> also use sensors with built in scalers). It's difficult to
> >>>>> efficiently manage capture/preview pipelines. It is impossible to
> >>>>> discover the system topology.
> >>>> 
> >>>> Scalers were always supported by V4L2: if the resolution specified by
> >>>> S_FMT is not what the sensor provides, then scale. All non-embedded
> >>>> drivers whose sensor or bridge supports scaling do that.
> >>>> 
> >>>> Composition is not properly supported yet. It could make sense to add
> >>>> it to V4L. How do you think MC API would help with composite?
> >>>> 
> >>>> Managing capture/preview pipelines will require some support at V4L2
> >>>> level. This is a problem that needs to be addressed there anyway, as
> >>>> buffers for preview/capture need to be allocated. There's an RFC about
> >>>> that, but I don't think it covers the pipelines for it.
> >>> 
> >>> Managing pipelines is a policy decision, and needs to be implemented in
> >>> userspace. Once again, the solution here is libv4l.
> >> 
> >> If the V4L2 API is not enough, implementing it in libv4l won't solve it, as
> >> userspace apps will use the V4L2 API to request it.
> > 
> > We need to let userspace configure the pipeline. That's what the MC +
> > V4L2 APIs are for. The driver must not make policy decisions though, that
> > must be left to userspace.
> 
> Showing only the hardware that is supported by an embedded device is not a
> policy decision. It is hardware detection or platform data configuration.

I don't think that's what we disagree on.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16  0:21                     ` Mauro Carvalho Chehab
@ 2011-08-16  8:59                       ` Laurent Pinchart
  2011-08-16 15:07                         ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-16  8:59 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, hverkuil

Hi Mauro,

On Tuesday 16 August 2011 02:21:09 Mauro Carvalho Chehab wrote:
> On 15-08-2011 05:45, Laurent Pinchart wrote:
> >> After having it there properly working and tested independently, we may
> >> consider patches removing V4L2 interfaces that were obsoleted in favor
> >> of using the libv4l implementation, of course using the Kernel way of
> >> deprecating interfaces. But doing it before having it, doesn't make any
> >> sense.
> > 
> > Once again we're not trying to remove controls, but expose them
> > differently.
> 
> See the comments at the patch series, starting from the patches that
> remove support for S_INPUT. I'm aware that the controls will be exposed
> by both MC and V4L2 API, although having ways to expose/hide controls via
> the V4L2 API makes patch review way more complicated than it used to be
> before the MC patches.

The MC API doesn't expose controls. Controls are still exposed by the V4L2 API 
only, but V4L2 can then expose them on subdev nodes in addition to video 
nodes.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16  8:59                       ` Laurent Pinchart
@ 2011-08-16 15:07                         ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-16 15:07 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, hverkuil

On 16-08-2011 01:59, Laurent Pinchart wrote:
> Hi Mauro,
> 
> On Tuesday 16 August 2011 02:21:09 Mauro Carvalho Chehab wrote:
>> On 15-08-2011 05:45, Laurent Pinchart wrote:
>>>> After having it there properly working and tested independently, we may
>>>> consider patches removing V4L2 interfaces that were obsoleted in favor
>>>> of using the libv4l implementation, of course using the Kernel way of
>>>> deprecating interfaces. But doing it before having it, doesn't make any
>>>> sense.
>>>
>>> Once again we're not trying to remove controls, but expose them
>>> differently.
>>
>> See the comments at the patch series, starting from the patches that
>> remove support for S_INPUT. I'm aware that the controls will be exposed
>> by both MC and V4L2 API, although having ways to expose/hide controls via
>> the V4L2 API makes patch review way more complicated than it used to be
>> before the MC patches.
> 
> The MC API doesn't expose controls. Controls are still exposed by the V4L2 API 
> only, but V4L2 can then expose them on subdev nodes in addition to video 
> nodes.

To be clear, when I'm talking about MC, I also mean the subdev nodes API,
as a pure V4L2 API application doesn't know anything about them.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16  8:57                   ` Laurent Pinchart
@ 2011-08-16 15:30                     ` Mauro Carvalho Chehab
  2011-08-16 15:44                       ` Laurent Pinchart
  2011-08-16 21:47                       ` Sylwester Nawrocki
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-16 15:30 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

On 16-08-2011 01:57, Laurent Pinchart wrote:
> Hi Mauro,
 
>>> My point is that the ISP driver developer can't know in advance which
>>> sensor will be used on systems that don't exist yet.
>>
>> As soon as such hardware is designed, someone will know. It is a simple,
>> trivial patch to associate new hardware with a hardware profile in the
>> platform data.
> 
> Platform data must contain hardware descriptions only, not policies. This is 
> even clearer now that ARM is moving away from board code to the Device Tree.

Again, a cell phone with one front camera and one rear camera has two
sensor inputs only. This is not a "policy". It is a hardware constraint.
The driver should allow setting the pipeline for both sensors via S_INPUT,
otherwise a V4L2-only userspace application won't work.

It is as simple as that.
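
For reference, this is the kind of pure-V4L2 usage being argued for (the input
indices are placeholders; whether a given driver maps its sensors this way is
exactly the point under discussion):

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  int main(void)
  {
      int fd = open("/dev/video0", O_RDWR);    /* placeholder node */
      struct v4l2_input input;
      int index;

      if (fd < 0)
          return 1;

      /* Enumerate the inputs, e.g. "front sensor" and "rear sensor". */
      memset(&input, 0, sizeof(input));
      for (input.index = 0; ioctl(fd, VIDIOC_ENUMINPUT, &input) == 0;
           input.index++)
          printf("%u: %s\n", input.index, input.name);

      index = 1;    /* placeholder: switch to the second sensor */
      return ioctl(fd, VIDIOC_S_INPUT, &index) ? 1 : 0;
  }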

>> Also, on most cases, probing a sensor is as trivial as reading a sensor
>> ID during device probe. This applies, for example, for all Omnivision
>> sensors.
>>
>> We do things like that all the time in the PC world, as nobody knows which
>> webcam someone will plug into his PC.
> 
> Sorry, but that's not related. You simply can't decide in an embedded ISP 
> driver how to deal with sensor controls, as the system will be used in a too 
> wide variety of applications and hardware configurations. All controls need to 
> be exposed, period.

We're not talking about controls. We're talking about providing the needed
V4L2 support to allow a userspace application to access the hardware
sensor.

>>>> I never saw an embedded hardware that allows physically changing the
>>>> sensor.
>>>
>>> Beagleboard + pluggable sensor board.
>>
>> Development systems like beagleboard, pandaboard, Exynos SMDK, etc, aren't
>> embedded hardware. They're development kits.
> 
> People create end-user products based on those kits. That makes them first-
> class embedded hardware like any other.

No doubt they should be supported, but it doesn't make sense to create tons
of input pipelines to be used for S_INPUT for each different type of possible
sensor. Somehow, userspace needs to tell which sensor is attached
to the hardware, or the driver should support auto-detecting it.

In other words, I see 2 options for that:
	1) add hardware auto-detection at the sensor logic. At driver probe,
try to probe all sensors, if it is a hardware development kit;
	2) add one new parameter at the driver: "sensors". If the hardware
is one of those kits, this parameter will allow the developer to specify
the used sensors (a rough sketch of such a parameter is shown below). It is
the same logic we use for TV and grabber cards without an eeprom or any
other way to auto-detect the hardware.
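
Something along these lines is what option 2) could look like in the driver;
the parameter name and the way it would be consumed are only a sketch, not an
existing interface:

  #include <linux/module.h>
  #include <linux/moduleparam.h>

  /* Hypothetical override for development kits with pluggable sensor boards. */
  static char *sensors[2];
  static int nsensors;
  module_param_array(sensors, charp, &nsensors, 0444);
  MODULE_PARM_DESC(sensors,
                   "Sensors attached to the board, e.g. sensors=ov9650,noon010pc30");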

>> I don't mind if, for those kits, the developer that is playing with it has to
>> pass a mode parameter and/or run some open hardware-aware small application
>> that makes the driver select the sensor type he is using, but, if the
>> hardware is, instead, an N9 or a Galaxy Tab (or whatever embedded hardware),
>> the driver should expose just the sensors that exist on such hardware. It
>> should never be allowed to change it from userspace, using whatever API, on
>> such hardware.
> 
> Who talked about changing sensors from userspace on those systems ? Platform 
> data (regardless of whether it comes from board code, device tree, or 
> something else) will contain a hardware description, and the kernel will 
> create the right devices and load the right drivers. The issue we're 
> discussing is how to expose controls for those devices to userspace, and that 
> needs to be done through subdev nodes.

The issue that is under discussion is the removal of S_INPUT from the samsung 
driver, and the comments at the patches that talk about removing V4L2 API
support in favor of using an MC-only API for some fundamental things.

For example, with a patch like this one, only one sensor will be supported
without the MC API (either the front or back sensor on a multi-sensor camera):
	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a322a241927f9238b8ab1441389c9c41

Look at the comment on this patch also:
	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c38be60a45d16a29a9e56c886ee0aa08c

What is called "API compatibility mode" is not clear, but it transmitted
me that the idea is to expose the controls only via subnodes.

That is the rationale for those discussions: the V4L2 API is not being deprecated
in favor of the MC API, e.g. controls shouldn't be hidden from the V4L2 API without
a good reason.

>>>>> Even if you did, fine image quality tuning requires accessing pretty
>>>>> much all controls individually anyway.
>>>>
>>>> The same is also true for non-embedded hardware. The only situation
>>>> where V4L2 API is not enough is when there are two controls of the same
>>>> type active. For example, 2 active volume controls, one at the audio
>>>> demod, and another at the bridge. There may have some cases where you
>>>> can do the same thing at the sensor or at a DSP block. This is where MC
>>>> API gives an improvement, by allowing changing both, instead of just
>>>> one of the controls.
>>>
>>> To be precise it's the V4L2 subdev userspace API that allows that, not
>>> the MC API.
>>>
>>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>>>> situation with the userspace subdevs is a bit different. Because with
>>>>>>> one API we directly expose some functionality for applications, with
>>>>>>> other we code it in the kernel, to make the devices appear uniform at
>>>>>>> user space.
>>>>>>
>>>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>>>> userspace in an uniform way. MC api is for special applications that
>>>>>> might need to access some internal functions on embedded devices.
>>>>>>
>>>>>> Of course, there are some cases where it doesn't make sense to export
>>>>>> a subdev control via V4L2 API.
>>>>>>
>>>>>>>>> Also, the sensor subdev can be configured in the video node driver
>>>>>>>>> as well as through the subdev device node. Both APIs can do the
>>>>>>>>> same thing but in order to let the subdev API work as expected the
>>>>>>>>> video node driver must be forbidden to configure the subdev.
>>>>>>>>
>>>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
>>>>>>>> driver call. The subdev will need a mutex anyway, as two MC
>>>>>>>> applications may be opening it simultaneously. I can't see why it
>>>>>>>> should forbid changing the control from the bridge driver call.
>>>>>>>
>>>>>>> Please do not forget there might be more than one subdev to configure
>>>>>>> and that the bridge itself is also a subdev (which exposes a scaler
>>>>>>> interface, for instance). A situation pretty much like in Figure 4.4
>>>>>>> [1] (after the scaler there is also a video node to configure, but we
>>>>>>> may assume that pixel resolution at the scaler pad 1 is same as at
>>>>>>> the video node). Assuming the format and crop configuration flow is
>>>>>>> from sensor to host scaler direction, if we have tried to configure
>>>>>>> _all_ subdevs when the last stage of the pipeline is configured
>>>>>>> (i.e. video node) the whole scaler and crop/composition
>>>>>>> configuration would have been destroyed at that time. And there is more
>>>>>>> to configure than
>>>>>>> VIDIOC_S_FMT can do.
>>>>>>
>>>>>> Think from users perspective: all user wants is to see a video of a
>>>>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
>>>>>> everything that the user wants: the desired resolution, framerate and
>>>>>> format.
>>>>>>
>>>>>> Specialized applications indeed need more, in order to get the best
>>>>>> images for certain types of usages. So, MC is there.
>>>>>>
>>>>>> Such applications will probably need to know exactly what's the
>>>>>> sensor, what are their bugs, how it is connected, what are the DSP
>>>>>> blocks in the path, how the DSP algorithms are implemented, etc, in
>>>>>> order to obtain the perfect image.
>>>>>>
>>>>>> Even on embedded devices like smartphones and tablets, I predict that
>>>>>> both types of applications will be developed and used: people may use
>>>>>> a generic application like flash player, and a specialized
>>>>>> application provided by the manufacturer. Users can even develop
>>>>>> their own generic applications using V4L2 directly, on the
>>>>>> devices that allow that.
>>>>>>
>>>>>> As I said before: both application types are welcome. We just need to
>>>>>> warrant that a pure V4L application will work reasonably well.
>>>>>
>>>>> That's why we have libv4l. The driver simply doesn't receive enough
>>>>> information to configure the hardware correctly from the VIDIOC_*
>>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
>>>>> V4L2 applications, need to be implemented in userspace anyway.
>>>>
>>>> It is OK to improve users experience via libv4l. What I'm saying is that
>>>> it is NOT OK to remove V4L2 API support from the driver, forcing users
>>>> to use some hardware plugin at libv4l.
>>>
>>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API support
>>> from any driver (well, on the drivers I can currently think of, if you
>>> show me a wifi driver that implements a V4L2 interface I might change my
>>> mind :-)).
>>
>> This thread is all about a patch series partially removing V4L2 API
>> support.
> 
> Because that specific part of the API doesn't make sense for this use case. 
> You wouldn't object to removing S_INPUT support from a video output driver, as 
> it wouldn't make sense either.

A device with two sensor inputs where just one node can be switched to use
either input is a typical case where S_INPUT needs to be provided.

> 
>>> The V4L2 API has been designed mostly for desktop hardware. Thanks to its
>>> clean design we can use it for embedded hardware, even though it requires
>>> extensions. What we need to do is to define which parts of the whole API
>>> apply as-is to embedded hardware, and which should better be left
>>> unused.
>>
>> Agreed. I'm all in favor of discussing this topic. However, before we have
>> such discussions and come to a common understanding, I won't take any
>> patch removing V4L2 API support, nor will I accept any driver that won't
>> allow it to be usable by a V4L2 userspace application.
> 
> A pure V4L2 userspace application, knowing about video device nodes only, can 
> still use the driver. Not all advanced features will be available.

That's exactly what I'm saying.

Regards,
Mauro.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16 15:30                     ` Mauro Carvalho Chehab
@ 2011-08-16 15:44                       ` Laurent Pinchart
  2011-08-16 22:36                         ` Mauro Carvalho Chehab
  2011-08-16 21:47                       ` Sylwester Nawrocki
  1 sibling, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-16 15:44 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Hi Mauro,

On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
> On 16-08-2011 01:57, Laurent Pinchart wrote:
> >>> My point is that the ISP driver developer can't know in advance which
> >>> sensor will be used on systems that don't exist yet.
> >> 
> >> As soon as such hardware is designed, someone will know. It is a simple,
> >> trivial patch to associate new hardware with a hardware profile in the
> >> platform data.
> > 
> > Platform data must contain hardware descriptions only, not policies. This
> > is even clearer now that ARM is moving away from board code to the
> > Device Tree.
> 
> Again, a cell phone with one front camera and one rear camera has two
> sensor inputs only. This is not a "policy". It is a hardware constraint.
> The driver should allow setting the pipeline for both sensors via S_INPUT,
> otherwise a V4L2-only userspace application won't work.
> 
> It is as simple as that.

When capturing from the main sensor on the OMAP3 ISP you need to capture raw 
data to memory on a video node, feed it back to the hardware through another 
video node, and finally capture it on a third video node. A V4L2-only 
userspace application won't work. That's how the hardware is, we can't do much 
about that.

> >> Also, on most cases, probing a sensor is as trivial as reading a sensor
> >> ID during device probe. This applies, for example, for all Omnivision
> >> sensors.
> >> 
> >> We do things like that all the times for PC world, as nobody knows what
> >> webcam someone would plug on his PC.
> > 
> > Sorry, but that's not related. You simply can't decide in an embedded ISP
> > driver how to deal with sensor controls, as the system will be used in a
> > too wide variety of applications and hardware configurations. All
> > controls need to be exposed, period.
> 
> We're not talking about controls. We're talking about providing the needed
> V4L2 support to allow an userspace application to access the hardware
> sensor.

OK, so we're discussing S_INPUT. Let's discuss controls later :-)

> >>>> I never saw an embedded hardware that allows physically changing the
> >>>> sensor.
> >>> 
> >>> Beagleboard + pluggable sensor board.
> >> 
> >> Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
> >> aren't embeeded hardware. They're development kits.
> > 
> > People create end-user products based on those kits. That make them
> > first- class embedded hardware like any other.
> 
> No doubt they should be supported, but it doesn't make sense to create tons
> of input pipelines to be used for S_INPUT for each different type of
> possible sensor. Somehow, userspace needs to tell what's the sensor that
> he attached to the hardware, or the driver should suport auto-detecting
> it.

We're not creating tons of input pipelines. Look at 
http://www.ideasonboard.org/media/omap3isp.ps , every video node (in yellow) 
has its purpose.

> In other words, I see 2 options for that:
> 	1) add hardware auto-detection at the sensor logic. At driver probe,
> try to probe all sensors, if it is a hardware development kit;

We've worked quite hard to remove I2C device probing from the kernel, let's 
not add it back.

> 	2) add one new parameter at the driver: "sensors". If the hardware
> is one of those kits, this parameter will allow the developer to specify
> the used sensors. It is the same logic as we do with userspace TV and
> grabber cards without eeprom or any other way to auto-detect the hardware.

This will likely be done through the Device Tree.

> >> I don't mind if, for those kits the developer that is playing with it
> >> has to pass a mode parameter and/or run some open harware-aware small
> >> application that makes the driver to select the sensor type he is
> >> using, but, if the hardware is, instead, a N9 or a Galaxy Tab (or
> >> whatever embedded hardware), the driver should expose just the sensors
> >> that exists on such hardware. It shouldn't be ever allowed to change it
> >> on userspace, using whatever API on those hardware.
> > 
> > Who talked about changing sensors from userspace on those systems ?
> > Platform data (regardless of whether it comes from board code, device
> > tree, or something else) will contain a hardware description, and the
> > kernel will create the right devices and load the right drivers. The
> > issue we're discussing is how to expose controls for those devices to
> > userspace, and that needs to be done through subdev nodes.
> 
> The issue that is under discussion is the removal of S_INPUT from the
> samsung driver, and the comments at the patches that talks about removing
> V4L2 API support in favor of using a MC-only API for some fundamental
> things.
> 
> For example, with a patch like this one, only one sensor will be supported
> without the MC API (either the front or back sensor on a multi-sensor
> camera):
> http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a32
> 2a241927f9238b8ab1441389c9c41
> 
> Look at the comment on this patch also:
> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c38b
> e60a45d16a29a9e56c886ee0aa08c
> 
> What is called "API compatibility mode" is not clear, but it transmitted
> me that the idea is to expose the controls only via subnodes.
> 
> Those are the rationale for those discussions: V4L2 API is not being
> deprecated in favor of MC API, e. g. controls shouldn't be hidden from the
> V4L2 API without a good reason.

Controls need to move to subdev nodes for embedded devices because there's 
simply no way to expose multiple identical controls through a video node. 
Please also have a look at the diagram I linked to above, and tell me through 
which video node sensor controls should be exposed. There's no simple answer 
to that.
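
To make the point more concrete, here is a rough sketch of setting a control
directly on a sub-device node. The node name is a made-up example; which
/dev/v4l-subdev* corresponds to the sensor has to be discovered through the
media controller API first:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Minimal sketch: set the gain on one specific sub-device node, using the
 * same control ioctl that video nodes implement. */
int set_sensor_gain(int value)
{
        struct v4l2_control ctrl = {
                .id = V4L2_CID_GAIN,
                .value = value,
        };
        int fd = open("/dev/v4l-subdev8", O_RDWR);      /* assumed: sensor subdev */

        if (fd < 0)
                return -1;
        return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}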

> >>>>> Even if you did, fine image quality tuning requires accessing pretty
> >>>>> much all controls individually anyway.
> >>>> 
> >>>> The same is also true for non-embedded hardware. The only situation
> >>>> where V4L2 API is not enough is when there are two controls of the
> >>>> same type active. For example, 2 active volume controls, one at the
> >>>> audio demod, and another at the bridge. There may have some cases
> >>>> where you can do the same thing at the sensor or at a DSP block. This
> >>>> is where MC API gives an improvement, by allowing changing both,
> >>>> instead of just one of the controls.
> >>> 
> >>> To be precise it's the V4L2 subdev userspace API that allows that, not
> >>> the MC API.
> >>> 
> >>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>>>>>> situation with the userspace subdevs is a bit different. Because
> >>>>>>> with one API we directly expose some functionality for
> >>>>>>> applications, with other we code it in the kernel, to make the
> >>>>>>> devices appear uniform at user space.
> >>>>>> 
> >>>>>> Not sure if I understood you. V4L2 export drivers functionality to
> >>>>>> userspace in an uniform way. MC api is for special applications that
> >>>>>> might need to access some internal functions on embedded devices.
> >>>>>> 
> >>>>>> Of course, there are some cases where it doesn't make sense to
> >>>>>> export a subdev control via V4L2 API.
> >>>>>> 
> >>>>>>>>> Also, the sensor subdev can be configured in the video node
> >>>>>>>>> driver as well as through the subdev device node. Both APIs can
> >>>>>>>>> do the same thing but in order to let the subdev API work as
> >>>>>>>>> expected the video node driver must be forbidden to configure
> >>>>>>>>> the subdev.
> >>>>>>>> 
> >>>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
> >>>>>>>> driver call. The subdev will need a mutex anyway, as two MC
> >>>>>>>> applications may be opening it simultaneously. I can't see why it
> >>>>>>>> should forbid changing the control from the bridge driver call.
> >>>>>>> 
> >>>>>>> Please do not forget there might be more than one subdev to
> >>>>>>> configure and that the bridge itself is also a subdev (which
> >>>>>>> exposes a scaler interface, for instance). A situation pretty much
> >>>>>>> like in Figure 4.4 [1] (after the scaler there is also a video
> >>>>>>> node to configure, but we may assume that pixel resolution at the
> >>>>>>> scaler pad 1 is same as at the video node). Assuming the format
> >>>>>>> and crop configuration flow is from sensor to host scaler
> >>>>>>> direction, if we have tried to configure _all_ subdevs when the
> >>>>>>> last stage of the pipeline is configured (i.e. video node) the
> >>>>>>> whole scaler and crop/composition
> >>>>>>> configuration we have been destroyed at that time. And there is
> >>>>>>> more to configure than
> >>>>>>> VIDIOC_S_FMT can do.
> >>>>>> 
> >>>>>> Think from users perspective: all user wants is to see a video of a
> >>>>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
> >>>>>> everything that the user wants: the desired resolution, framerate
> >>>>>> and format.
> >>>>>> 
> >>>>>> Specialized applications indeed need more, in order to get the best
> >>>>>> images for certain types of usages. So, MC is there.
> >>>>>> 
> >>>>>> Such applications will probably need to know exactly what's the
> >>>>>> sensor, what are their bugs, how it is connected, what are the DSP
> >>>>>> blocks in the patch, how the DSP algorithms are implemented, etc, in
> >>>>>> order to obtain the the perfect image.
> >>>>>> 
> >>>>>> Even on embedded devices like smartphones and tablets, I predict
> >>>>>> that both types of applications will be developed and used: people
> >>>>>> may use a generic application like flash player, and an specialized
> >>>>>> application provided by the manufacturer. Users can even develop
> >>>>>> their own applications generic apps using V4L2 directly, at the
> >>>>>> devices that allow that.
> >>>>>> 
> >>>>>> As I said before: both application types are welcome. We just need
> >>>>>> to warrant that a pure V4L application will work reasonably well.
> >>>>> 
> >>>>> That's why we have libv4l. The driver simply doesn't receive enough
> >>>>> information to configure the hardware correctly from the VIDIOC_*
> >>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
> >>>>> V4L2 applications, need to be implemented in userspace anyway.
> >>>> 
> >>>> It is OK to improve users experience via libv4l. What I'm saying is
> >>>> that it is NOT OK to remove V4L2 API support from the driver, forcing
> >>>> users to use some hardware plugin at libv4l.
> >>> 
> >>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API support
> >>> from any driver (well, on the drivers I can currently think of, if you
> >>> show me a wifi driver that implements a V4L2 interface I might change
> >>> my mind :-)).
> >> 
> >> This thread is all about a patch series partially removing V4L2 API
> >> support.
> > 
> > Because that specific part of the API doesn't make sense for this use
> > case. You wouldn't object to removing S_INPUT support from a video
> > output driver, as it wouldn't make sense either.
> 
> A device with two sensors input where just one node can be switched to use
> either input is a typical case where S_INPUT needs to be provided.

No. S_INPUT shouldn't be used to select between sensors. The hardware pipeline 
is more complex than just that. We can't make it all fit in the S_INPUT API.

For instance, when switching between a b&w and a color sensor you will need to 
reconfigure the whole pipeline to select the right gamma table, white balance 
parameters, color conversion matrix, ... That's not something we want to 
hardcode in the kernel. This needs to be done from userspace.
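
As a rough sketch of that userspace side (the control IDs and values below
are placeholders only, and a real implementation would also update
device-specific parameters such as the color conversion matrix), this is the
kind of per-sensor tuning that would be applied after setting up the pipeline:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Minimal sketch: apply the tuning that matches the color sensor once its
 * pipeline has been configured. Values are placeholders. */
static int apply_color_sensor_tuning(int isp_fd)
{
        struct v4l2_control gamma = { .id = V4L2_CID_GAMMA,        .value = 220 };
        struct v4l2_control red   = { .id = V4L2_CID_RED_BALANCE,  .value = 128 };
        struct v4l2_control blue  = { .id = V4L2_CID_BLUE_BALANCE, .value = 128 };

        if (ioctl(isp_fd, VIDIOC_S_CTRL, &gamma) < 0 ||
            ioctl(isp_fd, VIDIOC_S_CTRL, &red) < 0 ||
            ioctl(isp_fd, VIDIOC_S_CTRL, &blue) < 0)
                return -1;
        return 0;
}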

> >>> The V4L2 API has been designed mostly for desktop hardware. Thanks to
> >>> its clean design we can use it for embedded hardware, even though it
> >>> requires extensions. What we need to do is to define which parts of
> >>> the whole API apply as-is to embedded hardware, and which should
> >>> better be left unused.
> >> 
> >> Agreed. I'm all in favor on discussing this topic. However, before he
> >> have such iscussions and come into a common understanding, I won't take
> >> any patch removing V4L2 API support, nor I'll accept any driver that
> >> won't allow it to be usable by a V4L2 userspace application.
> > 
> > A pure V4L2 userspace application, knowing about video device nodes only,
> > can still use the driver. Not all advanced features will be available.
> 
> That's exactly what I'm saying.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16 15:30                     ` Mauro Carvalho Chehab
  2011-08-16 15:44                       ` Laurent Pinchart
@ 2011-08-16 21:47                       ` Sylwester Nawrocki
  2011-08-17  6:13                         ` Embedded device and the V4L2 API support - Was: " Mauro Carvalho Chehab
  1 sibling, 1 reply; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-08-16 21:47 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Laurent Pinchart, Sylwester Nawrocki, linux-media, Sakari Ailus

Hello,

On 08/16/2011 05:30 PM, Mauro Carvalho Chehab wrote:
> Em 16-08-2011 01:57, Laurent Pinchart escreveu:
>> Hi Mauro,
> 
>>>> My point is that the ISP driver developer can't know in advance which
>>>> sensor will be used systems that don't exist yet.
>>>
>>> As far as such hardware is projected, someone will know. It is a simple
>>> trivial patch to associate a new hardware with a hardware profile at the
>>> platform data.

My question here would be whether we want to have a kernel patch fixing up
the configuration for each H/W case or rather a user space script that will
set up a data processing chain ?

>>
>> Platform data must contain hardware descriptions only, not policies. This is
>> even clearer now that ARM is moving away from board code to the Device Tree.
> 
> Again, a cell phone with one frontal camera and one hear camera has two
> sensor inputs only. This is not a "policy". It is a hardware constraint.

There is a policy because you have to tweak the device configuration for a specific
board and sensor. It seems impossible to have fully functional drivers in the mainline
with such a simplistic approach.


> The driver should allow setting the pipeline for both sensors via S_INPUT,
> otherwise a V4L2 only userspace application won't work.
> 
> It is as simple as that.
> 
>>> Also, on most cases, probing a sensor is as trivial as reading a sensor
>>> ID during device probe. This applies, for example, for all Omnivision
>>> sensors.
>>>
>>> We do things like that all the times for PC world, as nobody knows what
>>> webcam someone would plug on his PC.
>>
>> Sorry, but that's not related. You simply can't decide in an embedded ISP
>> driver how to deal with sensor controls, as the system will be used in a too
>> wide variety of applications and hardware configurations. All controls need to
>> be exposed, period.
> 
> We're not talking about controls. We're talking about providing the needed
> V4L2 support to allow an userspace application to access the hardware
> sensor.

Which involves the controls as well.

> 
>>>>> I never saw an embedded hardware that allows physically changing the
>>>>> sensor.
>>>>
>>>> Beagleboard + pluggable sensor board.
>>>
>>> Development systems like beagleboard, pandaboard, Exynos SMDK, etc, aren't
>>> embeeded hardware. They're development kits.
>>
>> People create end-user products based on those kits. That make them first-
>> class embedded hardware like any other.
> 
> No doubt they should be supported, but it doesn't make sense to create tons
> of input pipelines to be used for S_INPUT for each different type of possible
> sensor. Somehow, userspace needs to tell what's the sensor that he attached
> to the hardware, or the driver should suport auto-detecting it.
> 
> In other words, I see 2 options for that:
> 	1) add hardware auto-detection at the sensor logic. At driver probe,
> try to probe all sensors, if it is a hardware development kit;

Isn't it better to let the application discover what type of sensor
the kernel exposes and apply a pre-defined ISP configuration for it?
If I am not mistaken, this is what happens currently in OMAP3 ISP systems.
The base parameters for the I2C client registration should be taken from
platform_data/the flattened device tree.

> 	2) add one new parameter at the driver: "sensors". If the hardware
> is one of those kits, this parameter will allow the developer to specify
> the used sensors. It is the same logic as we do with userspace TV and
> grabber cards without eeprom or any other way to auto-detect the hardware.

Sorry, this does not look sane to me... What is the driver supposed to do
with such a list of sensors? Hard-code the pipeline configurations for them?

> 
>>> I don't mind if, for those kits the developer that is playing with it has to
>>> pass a mode parameter and/or run some open harware-aware small application
>>> that makes the driver to select the sensor type he is using, but, if the
>>> hardware is, instead, a N9 or a Galaxy Tab (or whatever embedded hardware),
>>> the driver should expose just the sensors that exists on such hardware. It
>>> shouldn't be ever allowed to change it on userspace, using whatever API on
>>> those hardware.
>>
>> Who talked about changing sensors from userspace on those systems ? Platform
>> data (regardless of whether it comes from board code, device tree, or
>> something else) will contain a hardware description, and the kernel will
>> create the right devices and load the right drivers. The issue we're
>> discussing is how to expose controls for those devices to userspace, and that
>> needs to be done through subdev nodes.
> 
> The issue that is under discussion is the removal of S_INPUT from the samsung
> driver, and the comments at the patches that talks about removing V4L2 API
> support in favor of using a MC-only API for some fundamental things.
> 
> For example, with a patch like this one, only one sensor will be supported
> without the MC API (either the front or back sensor on a multi-sensor camera):
> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a322a241927f9238b8ab1441389c9c41

That's not true. There are 3 or 4 host interfaces on those SoCs.
Each exports a capture video node. After the above patch the front and rear cameras
are available on separate video nodes, each on input 0, rather than on a single video
node with distinct inputs. I don't really get why there is so much noise about such a
meaningless detail. I decided to use the MC API because I had to add support for the
MIPI-CSI bus. This API seemed a natural choice, as an extra subdevice had to be added
between the sensor and the host - the MIPI-CSI receiver subdev.

By using the subdev user space API, use cases like this one can be easily 
supported:

| sensor subdev | -> | mipi-csi subdev | -x-> | FIMC0 | -> /dev/video? => encoder
                                          | 
                                          x-> | FIMC1 (scaler) | -> /dev/video? => preview

The purpose is to lower the main system bus bandwidth, by attaching the sensor
to both host interfaces directly through the camera data buses.

The mipi-csi subdev and the sensor need to be configured separately from
the host interface, otherwise there is a conflict over which /dev/video? is
to control the subdevs.

Let's try to use S_INPUT here, assuming that both subdevs are set up
by the /dev/video? driver. 

The questions that need to be answered first:
a) at which v4l2 entity to register the subdevs?
b) how to arbitrate the subdev access from multiple /dev/video? nodes?
...


| sensor1 subdev | -> | mipi-csi subdev | -x-> | FIMC0 | -> /dev/video?
                                           |
| sensor2 subdev | -------------->---------x

The problem with S_INPUT is that it requires adding/removing elements
in the pipeline, which is a task that belongs to the media device domain.
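
For comparison, this is roughly what "switch to the other sensor" looks like
through the MC API - disable one link into the host and enable another. The
entity and pad numbers below are made up; a real application resolves them
with MEDIA_IOC_ENUM_ENTITIES/MEDIA_IOC_ENUM_LINKS first:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Minimal sketch: enable or disable a single link on the media device. */
static int setup_link(int media_fd, __u32 src_entity, __u16 src_pad,
                      __u32 sink_entity, __u16 sink_pad, int enable)
{
        struct media_link_desc link;

        memset(&link, 0, sizeof(link));
        link.source.entity = src_entity;
        link.source.index  = src_pad;
        link.sink.entity   = sink_entity;
        link.sink.index    = sink_pad;
        link.flags         = enable ? MEDIA_LNK_FL_ENABLED : 0;
        return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}

/* e.g. disable the mipi-csi -> FIMC0 link, then enable sensor2 -> FIMC0 */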

> 
> Look at the comment on this patch also:
> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c38be60a45d16a29a9e56c886ee0aa08c
> 
> What is called "API compatibility mode" is not clear, but it transmitted
> me that the idea is to expose the controls only via subnodes.

The "API compatibility mode" is a simple boolean which tells whether:
a) the sensor and the mipi csi receiver are configured internally in the kernel
   from the capture video node level (an alternative is to do it for each subdev
   in user space);
b) the sensor controls are inherited by the video node or not.

I wish I could find better ways than a sysfs entry to handle the above issues.

Most likely I'll remove all redundant V4L2 ioctls from the driver, like 
VIDIOC_CROPCAP and VIDIOC_[S/G]_CROP, and all the subdev configuration from the
driver. Then there is no compatibility issue at all, only a single API - the
subdev user-space API!... OK, now I'm hiding :) ...

> 
> Those are the rationale for those discussions: V4L2 API is not being deprecated
> in favor of MC API, e. g. controls shouldn't be hidden from the V4L2 API without
> a good reason.

Indeed. One remark though - I really haven't restricted the S_INPUT call just for
the fun of it. :-)
Is it really a problem that the sensors have been distributed across multiple
video nodes?

>>>>>> Even if you did, fine image quality tuning requires accessing pretty
>>>>>> much all controls individually anyway.
>>>>>
>>>>> The same is also true for non-embedded hardware. The only situation
>>>>> where V4L2 API is not enough is when there are two controls of the same
>>>>> type active. For example, 2 active volume controls, one at the audio
>>>>> demod, and another at the bridge. There may have some cases where you
>>>>> can do the same thing at the sensor or at a DSP block. This is where MC
>>>>> API gives an improvement, by allowing changing both, instead of just
>>>>> one of the controls.
>>>>
>>>> To be precise it's the V4L2 subdev userspace API that allows that, not
>>>> the MC API.
>>>>
>>>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>>>>> situation with the userspace subdevs is a bit different. Because with
>>>>>>>> one API we directly expose some functionality for applications, with
>>>>>>>> other we code it in the kernel, to make the devices appear uniform at
>>>>>>>> user space.
>>>>>>>
>>>>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>>>>> userspace in an uniform way. MC api is for special applications that
>>>>>>> might need to access some internal functions on embedded devices.
>>>>>>>
>>>>>>> Of course, there are some cases where it doesn't make sense to export
>>>>>>> a subdev control via V4L2 API.
>>>>>>>
>>>>>>>>>> Also, the sensor subdev can be configured in the video node driver
>>>>>>>>>> as well as through the subdev device node. Both APIs can do the
>>>>>>>>>> same thing but in order to let the subdev API work as expected the
>>>>>>>>>> video node driver must be forbidden to configure the subdev.
>>>>>>>>>
>>>>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
>>>>>>>>> driver call. The subdev will need a mutex anyway, as two MC
>>>>>>>>> applications may be opening it simultaneously. I can't see why it
>>>>>>>>> should forbid changing the control from the bridge driver call.
>>>>>>>>
>>>>>>>> Please do not forget there might be more than one subdev to configure
>>>>>>>> and that the bridge itself is also a subdev (which exposes a scaler
>>>>>>>> interface, for instance). A situation pretty much like in Figure 4.4
>>>>>>>> [1] (after the scaler there is also a video node to configure, but we
>>>>>>>> may assume that pixel resolution at the scaler pad 1 is same as at
>>>>>>>> the video node). Assuming the format and crop configuration flow is
>>>>>>>> from sensor to host scaler direction, if we have tried to configure
>>>>>>>> _all_ subdevs when the last stage of the pipeline is configured
>>>>>>>> (i.e. video node) the whole scaler and crop/composition
>>>>>>>> configuration we have been destroyed at that time. And there is more
>>>>>>>> to configure than
>>>>>>>> VIDIOC_S_FMT can do.
>>>>>>>
>>>>>>> Think from users perspective: all user wants is to see a video of a
>>>>>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
>>>>>>> everything that the user wants: the desired resolution, framerate and
>>>>>>> format.
>>>>>>>
>>>>>>> Specialized applications indeed need more, in order to get the best
>>>>>>> images for certain types of usages. So, MC is there.
>>>>>>>
>>>>>>> Such applications will probably need to know exactly what's the
>>>>>>> sensor, what are their bugs, how it is connected, what are the DSP
>>>>>>> blocks in the patch, how the DSP algorithms are implemented, etc, in
>>>>>>> order to obtain the the perfect image.
>>>>>>>
>>>>>>> Even on embedded devices like smartphones and tablets, I predict that
>>>>>>> both types of applications will be developed and used: people may use
>>>>>>> a generic application like flash player, and an specialized
>>>>>>> application provided by the manufacturer. Users can even develop
>>>>>>> their own applications generic apps using V4L2 directly, at the
>>>>>>> devices that allow that.
>>>>>>>
>>>>>>> As I said before: both application types are welcome. We just need to
>>>>>>> warrant that a pure V4L application will work reasonably well.
>>>>>>
>>>>>> That's why we have libv4l. The driver simply doesn't receive enough
>>>>>> information to configure the hardware correctly from the VIDIOC_*
>>>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
>>>>>> V4L2 applications, need to be implemented in userspace anyway.
>>>>>
>>>>> It is OK to improve users experience via libv4l. What I'm saying is that
>>>>> it is NOT OK to remove V4L2 API support from the driver, forcing users
>>>>> to use some hardware plugin at libv4l.
>>>>
>>>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API support
>>>> from any driver (well, on the drivers I can currently think of, if you
>>>> show me a wifi driver that implements a V4L2 interface I might change my
>>>> mind :-)).
>>>
>>> This thread is all about a patch series partially removing V4L2 API
>>> support.
>>
>> Because that specific part of the API doesn't make sense for this use case.
>> You wouldn't object to removing S_INPUT support from a video output driver, as
>> it wouldn't make sense either.
> 
> A device with two sensors input where just one node can be switched to use
> either input is a typical case where S_INPUT needs to be provided.

For simple webcams that V4L2 is mostly used for, that's true.


--
Regards,
Sylwester

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16 15:44                       ` Laurent Pinchart
@ 2011-08-16 22:36                         ` Mauro Carvalho Chehab
  2011-08-17  7:57                           ` Laurent Pinchart
  2011-08-17 12:33                           ` Sakari Ailus
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-16 22:36 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Em 16-08-2011 08:44, Laurent Pinchart escreveu:
> Hi Mauro,
> 
> On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
>> Em 16-08-2011 01:57, Laurent Pinchart escreveu:
>>>>> My point is that the ISP driver developer can't know in advance which
>>>>> sensor will be used systems that don't exist yet.
>>>>
>>>> As far as such hardware is projected, someone will know. It is a simple
>>>> trivial patch to associate a new hardware with a hardware profile at the
>>>> platform data.
>>>
>>> Platform data must contain hardware descriptions only, not policies. This
>>> is even clearer now that ARM is moving away from board code to the
>>> Device Tree.
>>
>> Again, a cell phone with one frontal camera and one hear camera has two
>> sensor inputs only. This is not a "policy". It is a hardware constraint.
>> The driver should allow setting the pipeline for both sensors via S_INPUT,
>> otherwise a V4L2 only userspace application won't work.
>>
>> It is as simple as that.
> 
> When capturing from the main sensor on the OMAP3 ISP you need to capture raw 
> data to memory on a video node, feed it back to the hardware through another 
> video node, and finally capture it on a third video node. A V4L2-only 
> userspace application won't work. That's how the hardware is, we can't do much 
> about that.

The raw data conversion is one of the functions that libv4l should do. So, if
you've already submitted the patches for libv4l to do the hardware loopback
trick, or added a function there to convert the raw data into a common format,
that should be ok. Otherwise, we have a problem that needs to be fixed.
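
As a reference, this is roughly how a pure V4L2 application gets that
conversion for free by going through the libv4l2 wrappers (the device path
and the format are examples only):

#include <fcntl.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

/* Minimal sketch: the application only asks for RGB24; if the hardware can
 * only produce e.g. raw Bayer, libv4l converts it in userspace. */
int open_converted(void)
{
        struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
        int fd = v4l2_open("/dev/video0", O_RDWR);

        if (fd < 0)
                return -1;

        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
        if (v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
                v4l2_close(fd);
                return -1;
        }
        return fd;
}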

> 
>>>> Also, on most cases, probing a sensor is as trivial as reading a sensor
>>>> ID during device probe. This applies, for example, for all Omnivision
>>>> sensors.
>>>>
>>>> We do things like that all the times for PC world, as nobody knows what
>>>> webcam someone would plug on his PC.
>>>
>>> Sorry, but that's not related. You simply can't decide in an embedded ISP
>>> driver how to deal with sensor controls, as the system will be used in a
>>> too wide variety of applications and hardware configurations. All
>>> controls need to be exposed, period.
>>
>> We're not talking about controls. We're talking about providing the needed
>> V4L2 support to allow an userspace application to access the hardware
>> sensor.
> 
> OK, so we're discussing S_INPUT. Let's discuss controls later :-)
> 
>>>>>> I never saw an embedded hardware that allows physically changing the
>>>>>> sensor.
>>>>>
>>>>> Beagleboard + pluggable sensor board.
>>>>
>>>> Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
>>>> aren't embeeded hardware. They're development kits.
>>>
>>> People create end-user products based on those kits. That make them
>>> first- class embedded hardware like any other.
>>
>> No doubt they should be supported, but it doesn't make sense to create tons
>> of input pipelines to be used for S_INPUT for each different type of
>> possible sensor. Somehow, userspace needs to tell what's the sensor that
>> he attached to the hardware, or the driver should suport auto-detecting
>> it.
> 
> We're not creating tons of input pipelines. Look at 
> http://www.ideasonboard.org/media/omap3isp.ps , every video node (in yellow) 
> has its purpose.

Not sure if I understood it well. The subdevs 8-11 are the sensors, right?

>> In other words, I see 2 options for that:
>> 	1) add hardware auto-detection at the sensor logic. At driver probe,
>> try to probe all sensors, if it is a hardware development kit;
> 
> We've worked quite hard to remove I2C device probing from the kernel, let's 
> not add it back.

We do I2C probing on several drivers. It is there for devices where
the cards entry is not enough to identify the hardware. For example,
two different devices with the same USB ID generally use that.
If the hardware information is not enough, there's nothing wrong
in doing that.

>> 	2) add one new parameter at the driver: "sensors". If the hardware
>> is one of those kits, this parameter will allow the developer to specify
>> the used sensors. It is the same logic as we do with userspace TV and
>> grabber cards without eeprom or any other way to auto-detect the hardware.
> 
> This will likely be done through the Device Tree.

I don't mind much about the way it is implemented, but it should be there
in a place where people can find it and use it.

Devices that require the user to pass some parameters to the kernel in order
for the driver to find the hardware are economy-class devices. A first-class
device should work as-is, after the driver is loaded.

>>>> I don't mind if, for those kits the developer that is playing with it
>>>> has to pass a mode parameter and/or run some open harware-aware small
>>>> application that makes the driver to select the sensor type he is
>>>> using, but, if the hardware is, instead, a N9 or a Galaxy Tab (or
>>>> whatever embedded hardware), the driver should expose just the sensors
>>>> that exists on such hardware. It shouldn't be ever allowed to change it
>>>> on userspace, using whatever API on those hardware.
>>>
>>> Who talked about changing sensors from userspace on those systems ?
>>> Platform data (regardless of whether it comes from board code, device
>>> tree, or something else) will contain a hardware description, and the
>>> kernel will create the right devices and load the right drivers. The
>>> issue we're discussing is how to expose controls for those devices to
>>> userspace, and that needs to be done through subdev nodes.
>>
>> The issue that is under discussion is the removal of S_INPUT from the
>> samsung driver, and the comments at the patches that talks about removing
>> V4L2 API support in favor of using a MC-only API for some fundamental
>> things.
>>
>> For example, with a patch like this one, only one sensor will be supported
>> without the MC API (either the front or back sensor on a multi-sensor
>> camera):
>> http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a32
>> 2a241927f9238b8ab1441389c9c41
>>
>> Look at the comment on this patch also:
>> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c38b
>> e60a45d16a29a9e56c886ee0aa08c
>>
>> What is called "API compatibility mode" is not clear, but it transmitted
>> me that the idea is to expose the controls only via subnodes.
>>
>> Those are the rationale for those discussions: V4L2 API is not being
>> deprecated in favor of MC API, e. g. controls shouldn't be hidden from the
>> V4L2 API without a good reason.
> 
> Controls need to move to subdev nodes for embedded devices because there's 
> simply no way to expose multiple identical controls through a video node. 
> Please also have a look at the diagram I linked to above, and tell me though 
> which video node sensor controls should be exposed. There's no simple answer 
> to that.

Again, not sure if I understood your diagram well. What device will be 
controlling and receiving the video stream from the sensor? /dev/video0? 

If so, this is the one that should be exposing the controls for the selected input sensor.

> 
>>>>>>> Even if you did, fine image quality tuning requires accessing pretty
>>>>>>> much all controls individually anyway.
>>>>>>
>>>>>> The same is also true for non-embedded hardware. The only situation
>>>>>> where V4L2 API is not enough is when there are two controls of the
>>>>>> same type active. For example, 2 active volume controls, one at the
>>>>>> audio demod, and another at the bridge. There may have some cases
>>>>>> where you can do the same thing at the sensor or at a DSP block. This
>>>>>> is where MC API gives an improvement, by allowing changing both,
>>>>>> instead of just one of the controls.
>>>>>
>>>>> To be precise it's the V4L2 subdev userspace API that allows that, not
>>>>> the MC API.
>>>>>
>>>>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>>>>>> situation with the userspace subdevs is a bit different. Because
>>>>>>>>> with one API we directly expose some functionality for
>>>>>>>>> applications, with other we code it in the kernel, to make the
>>>>>>>>> devices appear uniform at user space.
>>>>>>>>
>>>>>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>>>>>> userspace in an uniform way. MC api is for special applications that
>>>>>>>> might need to access some internal functions on embedded devices.
>>>>>>>>
>>>>>>>> Of course, there are some cases where it doesn't make sense to
>>>>>>>> export a subdev control via V4L2 API.
>>>>>>>>
>>>>>>>>>>> Also, the sensor subdev can be configured in the video node
>>>>>>>>>>> driver as well as through the subdev device node. Both APIs can
>>>>>>>>>>> do the same thing but in order to let the subdev API work as
>>>>>>>>>>> expected the video node driver must be forbidden to configure
>>>>>>>>>>> the subdev.
>>>>>>>>>>
>>>>>>>>>> Why? For the sensor, a V4L2 API call will look just like a bridge
>>>>>>>>>> driver call. The subdev will need a mutex anyway, as two MC
>>>>>>>>>> applications may be opening it simultaneously. I can't see why it
>>>>>>>>>> should forbid changing the control from the bridge driver call.
>>>>>>>>>
>>>>>>>>> Please do not forget there might be more than one subdev to
>>>>>>>>> configure and that the bridge itself is also a subdev (which
>>>>>>>>> exposes a scaler interface, for instance). A situation pretty much
>>>>>>>>> like in Figure 4.4 [1] (after the scaler there is also a video
>>>>>>>>> node to configure, but we may assume that pixel resolution at the
>>>>>>>>> scaler pad 1 is same as at the video node). Assuming the format
>>>>>>>>> and crop configuration flow is from sensor to host scaler
>>>>>>>>> direction, if we have tried to configure _all_ subdevs when the
>>>>>>>>> last stage of the pipeline is configured (i.e. video node) the
>>>>>>>>> whole scaler and crop/composition
>>>>>>>>> configuration we have been destroyed at that time. And there is
>>>>>>>>> more to configure than
>>>>>>>>> VIDIOC_S_FMT can do.
>>>>>>>>
>>>>>>>> Think from users perspective: all user wants is to see a video of a
>>>>>>>> given resolution. S_FMT (and a few other VIDIOC_* calls) have
>>>>>>>> everything that the user wants: the desired resolution, framerate
>>>>>>>> and format.
>>>>>>>>
>>>>>>>> Specialized applications indeed need more, in order to get the best
>>>>>>>> images for certain types of usages. So, MC is there.
>>>>>>>>
>>>>>>>> Such applications will probably need to know exactly what's the
>>>>>>>> sensor, what are their bugs, how it is connected, what are the DSP
>>>>>>>> blocks in the patch, how the DSP algorithms are implemented, etc, in
>>>>>>>> order to obtain the the perfect image.
>>>>>>>>
>>>>>>>> Even on embedded devices like smartphones and tablets, I predict
>>>>>>>> that both types of applications will be developed and used: people
>>>>>>>> may use a generic application like flash player, and an specialized
>>>>>>>> application provided by the manufacturer. Users can even develop
>>>>>>>> their own applications generic apps using V4L2 directly, at the
>>>>>>>> devices that allow that.
>>>>>>>>
>>>>>>>> As I said before: both application types are welcome. We just need
>>>>>>>> to warrant that a pure V4L application will work reasonably well.
>>>>>>>
>>>>>>> That's why we have libv4l. The driver simply doesn't receive enough
>>>>>>> information to configure the hardware correctly from the VIDIOC_*
>>>>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
>>>>>>> V4L2 applications, need to be implemented in userspace anyway.
>>>>>>
>>>>>> It is OK to improve users experience via libv4l. What I'm saying is
>>>>>> that it is NOT OK to remove V4L2 API support from the driver, forcing
>>>>>> users to use some hardware plugin at libv4l.
>>>>>
>>>>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API support
>>>>> from any driver (well, on the drivers I can currently think of, if you
>>>>> show me a wifi driver that implements a V4L2 interface I might change
>>>>> my mind :-)).
>>>>
>>>> This thread is all about a patch series partially removing V4L2 API
>>>> support.
>>>
>>> Because that specific part of the API doesn't make sense for this use
>>> case. You wouldn't object to removing S_INPUT support from a video
>>> output driver, as it wouldn't make sense either.
>>
>> A device with two sensors input where just one node can be switched to use
>> either input is a typical case where S_INPUT needs to be provided.
> 
> No. S_INPUT shouldn't be use to select between sensors. The hardware pipeline 
> is more complex than just that. We can't make it all fit in the S_INPUT API.
> 
> For instance, when switching between a b&w and a color sensor you will need to 
> reconfigure the whole pipeline to select the right gamma table, white balance 
> parameters, color conversion matrix, ... That's not something we want to 
> hardcode in the kernel. This needs to be done from userspace.

This is something that, if it is not written down somewhere, means that no
userspace application other than the ones developed by the hardware vendor
will ever work.

I don't see any code for that either in the kernel or in libv4l. Am I missing
something?

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16 21:47                       ` Sylwester Nawrocki
@ 2011-08-17  6:13                         ` Mauro Carvalho Chehab
  2011-08-20 11:27                           ` Sylwester Nawrocki
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-17  6:13 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, Sylwester Nawrocki, linux-media, Sakari Ailus

It seems that there are too many misunderstandings, or maybe we're just
talking about the same thing in different ways.

So, instead of answering again, let's re-start this discussion on a
different way.

One of the requirements that it was discussed a lot on both mailing
lists and on the Media Controllers meetings that we had (or, at least
in the ones where I've participated) is that:

	"A pure V4L2 userspace application, knowing about video device
	 nodes only, can still use the driver. Not all advanced features 
	 will be available."

This is easier said than done. Also, different understandings can be
obtained from a simple phrase like that.

The solution for this problem is to make a compliance profile that
drivers need to implement. We should define such a profile, change
the existing drivers to properly implement it and enforce it for
newly submitted drivers.

Btw, I think we should also work on a profile for other kinds of hardware
as well, but the thing is that, as some things can now be implemented
using two different API's, we need to define the minimal requirements
for the V4L2 implementation.


For me, the above requirement means that, at least, the following features
need to be present:

1) The media driver should properly detect the existing hardware and
should expose the available sensors for capture via the V4L2 API.

For hardware development kits, it should be possible to specify the
hardware sensor(s) at runtime via some tool in the v4l-utils tree
(or in another tree hosted at linuxtv.org, or one clearly indicated in
the kernel Documentation files) or via a modprobe parameter.

2) Different sensors present in the hardware may be exposed either
via S_INPUT or, if they're completely independent, via two different
device nodes;

3) The active sensor's basic controls to adjust color, brightness, aperture time
and exposure time should be exposed, if the hardware directly supports them;

4) The driver should implement the streaming ioctls and/or the read() method;

5) It should be possible to configure the frame rate, if the sensor supports it;

6) It should be possible to configure the crop area, if the sensor supports it.

7) It should be possible to configure the format standard and resolution

...
(the above list is not exhaustive. It is just a few obvious things that are
clear to me - I'm almost sure that I've forgotten something).

We'll also end up having some optional requirements, like the DV timings ioctls,
that also need to be covered by the SoC hardware profile.

In practice, the above requirements should be converted into a list of features
and ioctls that need to be implemented in every SoC driver that implements
a capture or output video streaming device.
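
To make it more tangible, the kind of minimal sequence such a profile would
guarantee to work on the capture node is sketched below (the device path,
resolution and format are only examples, and error paths omit close() for
brevity):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Minimal sketch of a pure V4L2 capture setup: identify the device, select
 * the sensor, configure the format and request buffers. Queueing buffers and
 * VIDIOC_STREAMON/DQBUF would follow. */
int minimal_capture_setup(void)
{
        struct v4l2_capability cap;
        struct v4l2_format fmt;
        struct v4l2_requestbuffers req;
        int input = 0;
        int fd = open("/dev/video0", O_RDWR);   /* assumed capture node */

        if (fd < 0)
                return -1;

        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
                return -1;
        if (ioctl(fd, VIDIOC_S_INPUT, &input) < 0)      /* sensor selection */
                return -1;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1280;                       /* resolution and format */
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                return -1;

        memset(&req, 0, sizeof(req));
        req.count = 4;                                  /* streaming I/O */
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return -1;

        return fd;
}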

My suggestion is that we should start the discussions by filling in the macro
requirements. Once we agree on that, we can make a list of the V4L and MC
ioctls and convert them into a per-ioctl series of requirements.

Regards,
Mauro



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16 22:36                         ` Mauro Carvalho Chehab
@ 2011-08-17  7:57                           ` Laurent Pinchart
  2011-08-17 12:25                             ` Mauro Carvalho Chehab
  2011-08-17 12:33                           ` Sakari Ailus
  1 sibling, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-17  7:57 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Hi Mauro,

On Wednesday 17 August 2011 00:36:44 Mauro Carvalho Chehab wrote:
> Em 16-08-2011 08:44, Laurent Pinchart escreveu:
> > On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
> >> Em 16-08-2011 01:57, Laurent Pinchart escreveu:
> >>>>> My point is that the ISP driver developer can't know in advance which
> >>>>> sensor will be used systems that don't exist yet.
> >>>> 
> >>>> As far as such hardware is projected, someone will know. It is a
> >>>> simple trivial patch to associate a new hardware with a hardware
> >>>> profile at the platform data.
> >>> 
> >>> Platform data must contain hardware descriptions only, not policies.
> >>> This is even clearer now that ARM is moving away from board code to
> >>> the Device Tree.
> >> 
> >> Again, a cell phone with one frontal camera and one hear camera has two
> >> sensor inputs only. This is not a "policy". It is a hardware constraint.
> >> The driver should allow setting the pipeline for both sensors via
> >> S_INPUT, otherwise a V4L2 only userspace application won't work.
> >> 
> >> It is as simple as that.
> > 
> > When capturing from the main sensor on the OMAP3 ISP you need to capture
> > raw data to memory on a video node, feed it back to the hardware through
> > another video node, and finally capture it on a third video node. A
> > V4L2-only userspace application won't work. That's how the hardware is,
> > we can't do much about that.
> 
> The raw data conversion is one of the functions that libv4l should do. So,
> if you've already submitted the patches for libv4l to do the hardware
> loopback trick, or add a function there to convert the raw data into a
> common format, that should be ok. Otherwise, we have a problem, that needs
> to be fixed.

That's not what this is about. Bayer to YUV conversion needs to happen in 
hardware in this case for performance reasons. The hardware will also perform 
scaling on YUV, as well as many image processing tasks (white balance, defect 
pixel correction, gamma correction, noise filtering, ...).

> >>>> Also, on most cases, probing a sensor is as trivial as reading a
> >>>> sensor ID during device probe. This applies, for example, for all
> >>>> Omnivision sensors.
> >>>> 
> >>>> We do things like that all the times for PC world, as nobody knows
> >>>> what webcam someone would plug on his PC.
> >>> 
> >>> Sorry, but that's not related. You simply can't decide in an embedded
> >>> ISP driver how to deal with sensor controls, as the system will be
> >>> used in a too wide variety of applications and hardware
> >>> configurations. All controls need to be exposed, period.
> >> 
> >> We're not talking about controls. We're talking about providing the
> >> needed V4L2 support to allow an userspace application to access the
> >> hardware sensor.
> > 
> > OK, so we're discussing S_INPUT. Let's discuss controls later :-)
> > 
> >>>>>> I never saw an embedded hardware that allows physically changing the
> >>>>>> sensor.
> >>>>> 
> >>>>> Beagleboard + pluggable sensor board.
> >>>> 
> >>>> Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
> >>>> aren't embeeded hardware. They're development kits.
> >>> 
> >>> People create end-user products based on those kits. That make them
> >>> first- class embedded hardware like any other.
> >> 
> >> No doubt they should be supported, but it doesn't make sense to create
> >> tons of input pipelines to be used for S_INPUT for each different type
> >> of possible sensor. Somehow, userspace needs to tell what's the sensor
> >> that he attached to the hardware, or the driver should suport
> >> auto-detecting it.
> > 
> > We're not creating tons of input pipelines. Look at
> > http://www.ideasonboard.org/media/omap3isp.ps , every video node (in
> > yellow) has its purpose.
> 
> Not sure if I it understood well. The subdevs 8-11 are the sensors, right?

et8ek8 and vs6555 are the sensors. ad5820 is the lens controller and adp1653 
the flash controller. All other subdevs (green blocks) are part of the ISP.

> >> In other words, I see 2 options for that:
> >> 	1) add hardware auto-detection at the sensor logic. At driver probe,
> >> 
> >> try to probe all sensors, if it is a hardware development kit;
> > 
> > We've worked quite hard to remove I2C device probing from the kernel,
> > let's not add it back.
> 
> We do I2C probing on several drivers. It is there for devices where
> the cards entry is not enough to identify the hardware. For example,
> two different devices with the same USB ID generally uses that.
> If the hardware information is not enough, there's nothing wrong
> on doing that.

Except that probing might destroy the hardware in the general case. We can 
only probe I2C devices on a bus that we know will not contain any other 
sensitive devices.

> >> 	2) add one new parameter at the driver: "sensors". If the hardware
> >> 
> >> is one of those kits, this parameter will allow the developer to specify
> >> the used sensors. It is the same logic as we do with userspace TV and
> >> grabber cards without eeprom or any other way to auto-detect the
> >> hardware.
> > 
> > This will likely be done through the Device Tree.
> 
> I don't mind much about the way it is implemented, but it should be there
> on a place where people can find it and use it.
> 
> Devices that requires the user to pass some parameters to the Kernel in
> order for the driver to find the hardware are economic class devices. A
> first class device should work as-is, after the driver is loaded.

I don't think there has ever been any disagreement on this, we can consider 
the matter settled.

> >>>> I don't mind if, for those kits the developer that is playing with it
> >>>> has to pass a mode parameter and/or run some open harware-aware small
> >>>> application that makes the driver to select the sensor type he is
> >>>> using, but, if the hardware is, instead, a N9 or a Galaxy Tab (or
> >>>> whatever embedded hardware), the driver should expose just the sensors
> >>>> that exists on such hardware. It shouldn't be ever allowed to change
> >>>> it on userspace, using whatever API on those hardware.
> >>> 
> >>> Who talked about changing sensors from userspace on those systems ?
> >>> Platform data (regardless of whether it comes from board code, device
> >>> tree, or something else) will contain a hardware description, and the
> >>> kernel will create the right devices and load the right drivers. The
> >>> issue we're discussing is how to expose controls for those devices to
> >>> userspace, and that needs to be done through subdev nodes.
> >> 
> >> The issue that is under discussion is the removal of S_INPUT from the
> >> samsung driver, and the comments at the patches that talks about
> >> removing V4L2 API support in favor of using a MC-only API for some
> >> fundamental things.
> >> 
> >> For example, with a patch like this one, only one sensor will be
> >> supported without the MC API (either the front or back sensor on a
> >> multi-sensor camera):
> >> http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a
> >> 32 2a241927f9238b8ab1441389c9c41
> >> 
> >> Look at the comment on this patch also:
> >> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c
> >> 	38b
> >> 
> >> e60a45d16a29a9e56c886ee0aa08c
> >> 
> >> What is called "API compatibility mode" is not clear, but it transmitted
> >> me that the idea is to expose the controls only via subnodes.
> >> 
> >> Those are the rationale for those discussions: V4L2 API is not being
> >> deprecated in favor of MC API, e. g. controls shouldn't be hidden from
> >> the V4L2 API without a good reason.
> > 
> > Controls need to move to subdev nodes for embedded devices because
> > there's simply no way to expose multiple identical controls through a
> > video node. Please also have a look at the diagram I linked to above,
> > and tell me though which video node sensor controls should be exposed.
> > There's no simple answer to that.
> 
> Again, not sure if I understood well your diagram. What device will be
> controlling and receiving the video streaming from the sensor? /dev/video0?
> 
> If so, this is the one that should be exposing the controls for the
> selected input sensor.

When capturing data from the main sensor (et8ek8), a common pipeline is to 
capture data from /dev/video1, feed it back through /dev/video0 and capture 
the final result from /dev/video6. Another common alternative is to capture it 
from /dev/video2, feed it back to /dev/video3 and capture the final result 
from /dev/video6. Regardless of which of the above configurations is used, 
applications may also need to capture images on /dev/video4 and /dev/video6 
concurrently. This all depends on what kind of image the application wants and 
what hardware processing it wants to apply.
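
In pseudo-code, the first of those configurations boils down to something
like the sketch below. The node numbering matches the example above, but the
snippet is only an illustration; all format negotiation and buffer handling
(REQBUFS, QBUF/DQBUF, STREAMON) is omitted:

#include <fcntl.h>

/* Rough sketch of the two-pass flow: raw frames are captured from one video
 * node, written back to the ISP through another, and the processed result is
 * captured from a third one. */
static void open_two_pass_pipeline(int *raw_fd, int *fb_fd, int *out_fd)
{
        *raw_fd = open("/dev/video1", O_RDWR);  /* raw capture */
        *fb_fd  = open("/dev/video0", O_RDWR);  /* feed raw frames back in */
        *out_fd = open("/dev/video6", O_RDWR);  /* final, processed capture */
}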

This isn't even a complex case; there is much more complex hardware out there 
that we want to support.

> >>>>>>> Even if you did, fine image quality tuning requires accessing
> >>>>>>> pretty much all controls individually anyway.
> >>>>>> 
> >>>>>> The same is also true for non-embedded hardware. The only situation
> >>>>>> where V4L2 API is not enough is when there are two controls of the
> >>>>>> same type active. For example, 2 active volume controls, one at the
> >>>>>> audio demod, and another at the bridge. There may have some cases
> >>>>>> where you can do the same thing at the sensor or at a DSP block.
> >>>>>> This is where MC API gives an improvement, by allowing changing
> >>>>>> both, instead of just one of the controls.
> >>>>> 
> >>>>> To be precise it's the V4L2 subdev userspace API that allows that,
> >>>>> not the MC API.
> >>>>> 
> >>>>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>>>>>>>> situation with the userspace subdevs is a bit different. Because
> >>>>>>>>> with one API we directly expose some functionality for
> >>>>>>>>> applications, with other we code it in the kernel, to make the
> >>>>>>>>> devices appear uniform at user space.
> >>>>>>>> 
> >>>>>>>> Not sure if I understood you. V4L2 export drivers functionality to
> >>>>>>>> userspace in an uniform way. MC api is for special applications
> >>>>>>>> that might need to access some internal functions on embedded
> >>>>>>>> devices.
> >>>>>>>> 
> >>>>>>>> Of course, there are some cases where it doesn't make sense to
> >>>>>>>> export a subdev control via V4L2 API.
> >>>>>>>> 
> >>>>>>>>>>> Also, the sensor subdev can be configured in the video node
> >>>>>>>>>>> driver as well as through the subdev device node. Both APIs can
> >>>>>>>>>>> do the same thing but in order to let the subdev API work as
> >>>>>>>>>>> expected the video node driver must be forbidden to configure
> >>>>>>>>>>> the subdev.
> >>>>>>>>>> 
> >>>>>>>>>> Why? For the sensor, a V4L2 API call will look just like a
> >>>>>>>>>> bridge driver call. The subdev will need a mutex anyway, as two
> >>>>>>>>>> MC applications may be opening it simultaneously. I can't see
> >>>>>>>>>> why it should forbid changing the control from the bridge
> >>>>>>>>>> driver call.
> >>>>>>>>> 
> >>>>>>>>> Please do not forget there might be more than one subdev to
> >>>>>>>>> configure and that the bridge itself is also a subdev (which
> >>>>>>>>> exposes a scaler interface, for instance). A situation pretty
> >>>>>>>>> much like in Figure 4.4 [1] (after the scaler there is also a
> >>>>>>>>> video node to configure, but we may assume that pixel resolution
> >>>>>>>>> at the scaler pad 1 is same as at the video node). Assuming the
> >>>>>>>>> format and crop configuration flow is from sensor to host scaler
> >>>>>>>>> direction, if we have tried to configure _all_ subdevs when the
> >>>>>>>>> last stage of the pipeline is configured (i.e. video node) the
> >>>>>>>>> whole scaler and crop/composition
> >>>>>>>>> configuration we have been destroyed at that time. And there is
> >>>>>>>>> more to configure than
> >>>>>>>>> VIDIOC_S_FMT can do.
> >>>>>>>> 
> >>>>>>>> Think from users perspective: all user wants is to see a video of
> >>>>>>>> a given resolution. S_FMT (and a few other VIDIOC_* calls) have
> >>>>>>>> everything that the user wants: the desired resolution, framerate
> >>>>>>>> and format.
> >>>>>>>> 
> >>>>>>>> Specialized applications indeed need more, in order to get the
> >>>>>>>> best images for certain types of usages. So, MC is there.
> >>>>>>>> 
> >>>>>>>> Such applications will probably need to know exactly what's the
> >>>>>>>> sensor, what are their bugs, how it is connected, what are the DSP
> >>>>>>>> blocks in the patch, how the DSP algorithms are implemented, etc,
> >>>>>>>> in order to obtain the the perfect image.
> >>>>>>>> 
> >>>>>>>> Even on embedded devices like smartphones and tablets, I predict
> >>>>>>>> that both types of applications will be developed and used: people
> >>>>>>>> may use a generic application like flash player, and an
> >>>>>>>> specialized application provided by the manufacturer. Users can
> >>>>>>>> even develop their own applications generic apps using V4L2
> >>>>>>>> directly, at the devices that allow that.
> >>>>>>>> 
> >>>>>>>> As I said before: both application types are welcome. We just need
> >>>>>>>> to warrant that a pure V4L application will work reasonably well.
> >>>>>>> 
> >>>>>>> That's why we have libv4l. The driver simply doesn't receive enough
> >>>>>>> information to configure the hardware correctly from the VIDIOC_*
> >>>>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
> >>>>>>> V4L2 applications, need to be implemented in userspace anyway.
> >>>>>> 
> >>>>>> It is OK to improve users experience via libv4l. What I'm saying is
> >>>>>> that it is NOT OK to remove V4L2 API support from the driver,
> >>>>>> forcing users to use some hardware plugin at libv4l.
> >>>>> 
> >>>>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API
> >>>>> support from any driver (well, on the drivers I can currently think
> >>>>> of, if you show me a wifi driver that implements a V4L2 interface I
> >>>>> might change my mind :-)).
> >>>> 
> >>>> This thread is all about a patch series partially removing V4L2 API
> >>>> support.
> >>> 
> >>> Because that specific part of the API doesn't make sense for this use
> >>> case. You wouldn't object to removing S_INPUT support from a video
> >>> output driver, as it wouldn't make sense either.
> >> 
> >> A device with two sensors input where just one node can be switched to
> >> use either input is a typical case where S_INPUT needs to be provided.
> > 
> > No. S_INPUT shouldn't be use to select between sensors. The hardware
> > pipeline is more complex than just that. We can't make it all fit in the
> > S_INPUT API.
> > 
> > For instance, when switching between a b&w and a color sensor you will
> > need to reconfigure the whole pipeline to select the right gamma table,
> > white balance parameters, color conversion matrix, ... That's not
> > something we want to hardcode in the kernel. This needs to be done from
> > userspace.
> 
> This is something that, if it is not written somehwere, no userspace
> applications not developed by the hardware vendor will ever work.
> 
> I don't see any code for that any at the kernel or at libv4l. Am I missing
> something?

Code for that needs to be written in libv4l. It's not there yet as I don't 
think we have any hardware for this particular example at the moment :-)

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-17  7:57                           ` Laurent Pinchart
@ 2011-08-17 12:25                             ` Mauro Carvalho Chehab
  2011-08-17 12:37                               ` Ivan T. Ivanov
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-17 12:25 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, Sylwester Nawrocki, linux-media, Sakari Ailus

Em 17-08-2011 00:57, Laurent Pinchart escreveu:
> Hi Mauro,
> 
> On Wednesday 17 August 2011 00:36:44 Mauro Carvalho Chehab wrote:
>> Em 16-08-2011 08:44, Laurent Pinchart escreveu:
>>> On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
>>>> Em 16-08-2011 01:57, Laurent Pinchart escreveu:
>>>>>>> My point is that the ISP driver developer can't know in advance which
>>>>>>> sensor will be used systems that don't exist yet.
>>>>>>
>>>>>> As far as such hardware is projected, someone will know. It is a
>>>>>> simple trivial patch to associate a new hardware with a hardware
>>>>>> profile at the platform data.
>>>>>
>>>>> Platform data must contain hardware descriptions only, not policies.
>>>>> This is even clearer now that ARM is moving away from board code to
>>>>> the Device Tree.
>>>>
>>>> Again, a cell phone with one frontal camera and one hear camera has two
>>>> sensor inputs only. This is not a "policy". It is a hardware constraint.
>>>> The driver should allow setting the pipeline for both sensors via
>>>> S_INPUT, otherwise a V4L2 only userspace application won't work.
>>>>
>>>> It is as simple as that.
>>>
>>> When capturing from the main sensor on the OMAP3 ISP you need to capture
>>> raw data to memory on a video node, feed it back to the hardware through
>>> another video node, and finally capture it on a third video node. A
>>> V4L2-only userspace application won't work. That's how the hardware is,
>>> we can't do much about that.
>>
>> The raw data conversion is one of the functions that libv4l should do. So,
>> if you've already submitted the patches for libv4l to do the hardware
>> loopback trick, or add a function there to convert the raw data into a
>> common format, that should be ok. Otherwise, we have a problem, that needs
>> to be fixed.
> 
> That's not what this is about. Bayer to YUV conversion needs to happen in 
> hardware in this case for performance reasons. The hardware will also perform 
> scaling on YUV, as well as many image processing tasks (white balance, defect 
> pixel correction, gamma correction, noise filtering, ...).
> 
>>>>>> Also, on most cases, probing a sensor is as trivial as reading a
>>>>>> sensor ID during device probe. This applies, for example, for all
>>>>>> Omnivision sensors.
>>>>>>
>>>>>> We do things like that all the times for PC world, as nobody knows
>>>>>> what webcam someone would plug on his PC.
>>>>>
>>>>> Sorry, but that's not related. You simply can't decide in an embedded
>>>>> ISP driver how to deal with sensor controls, as the system will be
>>>>> used in a too wide variety of applications and hardware
>>>>> configurations. All controls need to be exposed, period.
>>>>
>>>> We're not talking about controls. We're talking about providing the
>>>> needed V4L2 support to allow an userspace application to access the
>>>> hardware sensor.
>>>
>>> OK, so we're discussing S_INPUT. Let's discuss controls later :-)
>>>
>>>>>>>> I never saw an embedded hardware that allows physically changing the
>>>>>>>> sensor.
>>>>>>>
>>>>>>> Beagleboard + pluggable sensor board.
>>>>>>
>>>>>> Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
>>>>>> aren't embeeded hardware. They're development kits.
>>>>>
>>>>> People create end-user products based on those kits. That make them
>>>>> first- class embedded hardware like any other.
>>>>
>>>> No doubt they should be supported, but it doesn't make sense to create
>>>> tons of input pipelines to be used for S_INPUT for each different type
>>>> of possible sensor. Somehow, userspace needs to tell what's the sensor
>>>> that he attached to the hardware, or the driver should suport
>>>> auto-detecting it.
>>>
>>> We're not creating tons of input pipelines. Look at
>>> http://www.ideasonboard.org/media/omap3isp.ps , every video node (in
>>> yellow) has its purpose.
>>
>> Not sure if I it understood well. The subdevs 8-11 are the sensors, right?
> 
> et8ek8 and vs6555 are the sensors. ad5820 is the lens controller and adp1653 
> the flash controller. All other subdevs (green blocks) are part of the ISP.
> 
>>>> In other words, I see 2 options for that:
>>>> 	1) add hardware auto-detection at the sensor logic. At driver probe,
>>>>
>>>> try to probe all sensors, if it is a hardware development kit;
>>>
>>> We've worked quite hard to remove I2C device probing from the kernel,
>>> let's not add it back.
>>
>> We do I2C probing on several drivers. It is there for devices where
>> the cards entry is not enough to identify the hardware. For example,
>> two different devices with the same USB ID generally uses that.
>> If the hardware information is not enough, there's nothing wrong
>> on doing that.
> 
> Except that probing might destroy the hardware in the general case. We can 
> only probe I2C devices on a bus that we know will not contain any other 
> sensitive devices.
> 
>>>> 	2) add one new parameter at the driver: "sensors". If the hardware
>>>>
>>>> is one of those kits, this parameter will allow the developer to specify
>>>> the used sensors. It is the same logic as we do with userspace TV and
>>>> grabber cards without eeprom or any other way to auto-detect the
>>>> hardware.
>>>
>>> This will likely be done through the Device Tree.
>>
>> I don't mind much about the way it is implemented, but it should be there
>> on a place where people can find it and use it.
>>
>> Devices that requires the user to pass some parameters to the Kernel in
>> order for the driver to find the hardware are economic class devices. A
>> first class device should work as-is, after the driver is loaded.
> 
> I don't think there has ever been any disagreement on this, we can consider 
> the matter settled.
> 
>>>>>> I don't mind if, for those kits the developer that is playing with it
>>>>>> has to pass a mode parameter and/or run some open harware-aware small
>>>>>> application that makes the driver to select the sensor type he is
>>>>>> using, but, if the hardware is, instead, a N9 or a Galaxy Tab (or
>>>>>> whatever embedded hardware), the driver should expose just the sensors
>>>>>> that exists on such hardware. It shouldn't be ever allowed to change
>>>>>> it on userspace, using whatever API on those hardware.
>>>>>
>>>>> Who talked about changing sensors from userspace on those systems ?
>>>>> Platform data (regardless of whether it comes from board code, device
>>>>> tree, or something else) will contain a hardware description, and the
>>>>> kernel will create the right devices and load the right drivers. The
>>>>> issue we're discussing is how to expose controls for those devices to
>>>>> userspace, and that needs to be done through subdev nodes.
>>>>
>>>> The issue that is under discussion is the removal of S_INPUT from the
>>>> samsung driver, and the comments at the patches that talks about
>>>> removing V4L2 API support in favor of using a MC-only API for some
>>>> fundamental things.
>>>>
>>>> For example, with a patch like this one, only one sensor will be
>>>> supported without the MC API (either the front or back sensor on a
>>>> multi-sensor camera):
>>>> http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a
>>>> 32 2a241927f9238b8ab1441389c9c41
>>>>
>>>> Look at the comment on this patch also:
>>>> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c
>>>> 	38b
>>>>
>>>> e60a45d16a29a9e56c886ee0aa08c
>>>>
>>>> What is called "API compatibility mode" is not clear, but it transmitted
>>>> me that the idea is to expose the controls only via subnodes.
>>>>
>>>> Those are the rationale for those discussions: V4L2 API is not being
>>>> deprecated in favor of MC API, e. g. controls shouldn't be hidden from
>>>> the V4L2 API without a good reason.
>>>
>>> Controls need to move to subdev nodes for embedded devices because
>>> there's simply no way to expose multiple identical controls through a
>>> video node. Please also have a look at the diagram I linked to above,
>>> and tell me though which video node sensor controls should be exposed.
>>> There's no simple answer to that.
>>
>> Again, not sure if I understood well your diagram. What device will be
>> controlling and receiving the video streaming from the sensor? /dev/video0?
>>
>> If so, this is the one that should be exposing the controls for the
>> selected input sensor.
> 
> When capturing data from the main sensor (et8ek8), a common pipeline is to 
> capture data from /dev/video1, feed it back through /dev/video0 and capture 
> the final result from /dev/video6. Another common alternative is to capture it 
> from /dev/video2, feed it back to /dev/video3 and capture the final result 
> from /dev/video6. Regardless of which above configuration is used, 
> applications can also need to capture images on /dev/video4 and /dev/video6 
> concurrently. This all depends on what kind of image the application wants and 
> what hardware processing it wants to apply.
> 
> This isn't even a complex case, there are much more complex hardware out there 
> that we want to support.
> 
>>>>>>>>> Even if you did, fine image quality tuning requires accessing
>>>>>>>>> pretty much all controls individually anyway.
>>>>>>>>
>>>>>>>> The same is also true for non-embedded hardware. The only situation
>>>>>>>> where V4L2 API is not enough is when there are two controls of the
>>>>>>>> same type active. For example, 2 active volume controls, one at the
>>>>>>>> audio demod, and another at the bridge. There may have some cases
>>>>>>>> where you can do the same thing at the sensor or at a DSP block.
>>>>>>>> This is where MC API gives an improvement, by allowing changing
>>>>>>>> both, instead of just one of the controls.
>>>>>>>
>>>>>>> To be precise it's the V4L2 subdev userspace API that allows that,
>>>>>>> not the MC API.
>>>>>>>
>>>>>>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
>>>>>>>>>>> situation with the userspace subdevs is a bit different. Because
>>>>>>>>>>> with one API we directly expose some functionality for
>>>>>>>>>>> applications, with other we code it in the kernel, to make the
>>>>>>>>>>> devices appear uniform at user space.
>>>>>>>>>>
>>>>>>>>>> Not sure if I understood you. V4L2 export drivers functionality to
>>>>>>>>>> userspace in an uniform way. MC api is for special applications
>>>>>>>>>> that might need to access some internal functions on embedded
>>>>>>>>>> devices.
>>>>>>>>>>
>>>>>>>>>> Of course, there are some cases where it doesn't make sense to
>>>>>>>>>> export a subdev control via V4L2 API.
>>>>>>>>>>
>>>>>>>>>>>>> Also, the sensor subdev can be configured in the video node
>>>>>>>>>>>>> driver as well as through the subdev device node. Both APIs can
>>>>>>>>>>>>> do the same thing but in order to let the subdev API work as
>>>>>>>>>>>>> expected the video node driver must be forbidden to configure
>>>>>>>>>>>>> the subdev.
>>>>>>>>>>>>
>>>>>>>>>>>> Why? For the sensor, a V4L2 API call will look just like a
>>>>>>>>>>>> bridge driver call. The subdev will need a mutex anyway, as two
>>>>>>>>>>>> MC applications may be opening it simultaneously. I can't see
>>>>>>>>>>>> why it should forbid changing the control from the bridge
>>>>>>>>>>>> driver call.
>>>>>>>>>>>
>>>>>>>>>>> Please do not forget there might be more than one subdev to
>>>>>>>>>>> configure and that the bridge itself is also a subdev (which
>>>>>>>>>>> exposes a scaler interface, for instance). A situation pretty
>>>>>>>>>>> much like in Figure 4.4 [1] (after the scaler there is also a
>>>>>>>>>>> video node to configure, but we may assume that pixel resolution
>>>>>>>>>>> at the scaler pad 1 is same as at the video node). Assuming the
>>>>>>>>>>> format and crop configuration flow is from sensor to host scaler
>>>>>>>>>>> direction, if we have tried to configure _all_ subdevs when the
>>>>>>>>>>> last stage of the pipeline is configured (i.e. video node) the
>>>>>>>>>>> whole scaler and crop/composition
>>>>>>>>>>> configuration we have been destroyed at that time. And there is
>>>>>>>>>>> more to configure than
>>>>>>>>>>> VIDIOC_S_FMT can do.
>>>>>>>>>>
>>>>>>>>>> Think from users perspective: all user wants is to see a video of
>>>>>>>>>> a given resolution. S_FMT (and a few other VIDIOC_* calls) have
>>>>>>>>>> everything that the user wants: the desired resolution, framerate
>>>>>>>>>> and format.
>>>>>>>>>>
>>>>>>>>>> Specialized applications indeed need more, in order to get the
>>>>>>>>>> best images for certain types of usages. So, MC is there.
>>>>>>>>>>
>>>>>>>>>> Such applications will probably need to know exactly what's the
>>>>>>>>>> sensor, what are their bugs, how it is connected, what are the DSP
>>>>>>>>>> blocks in the patch, how the DSP algorithms are implemented, etc,
>>>>>>>>>> in order to obtain the the perfect image.
>>>>>>>>>>
>>>>>>>>>> Even on embedded devices like smartphones and tablets, I predict
>>>>>>>>>> that both types of applications will be developed and used: people
>>>>>>>>>> may use a generic application like flash player, and an
>>>>>>>>>> specialized application provided by the manufacturer. Users can
>>>>>>>>>> even develop their own applications generic apps using V4L2
>>>>>>>>>> directly, at the devices that allow that.
>>>>>>>>>>
>>>>>>>>>> As I said before: both application types are welcome. We just need
>>>>>>>>>> to warrant that a pure V4L application will work reasonably well.
>>>>>>>>>
>>>>>>>>> That's why we have libv4l. The driver simply doesn't receive enough
>>>>>>>>> information to configure the hardware correctly from the VIDIOC_*
>>>>>>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
>>>>>>>>> V4L2 applications, need to be implemented in userspace anyway.
>>>>>>>>
>>>>>>>> It is OK to improve users experience via libv4l. What I'm saying is
>>>>>>>> that it is NOT OK to remove V4L2 API support from the driver,
>>>>>>>> forcing users to use some hardware plugin at libv4l.
>>>>>>>
>>>>>>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API
>>>>>>> support from any driver (well, on the drivers I can currently think
>>>>>>> of, if you show me a wifi driver that implements a V4L2 interface I
>>>>>>> might change my mind :-)).
>>>>>>
>>>>>> This thread is all about a patch series partially removing V4L2 API
>>>>>> support.
>>>>>
>>>>> Because that specific part of the API doesn't make sense for this use
>>>>> case. You wouldn't object to removing S_INPUT support from a video
>>>>> output driver, as it wouldn't make sense either.
>>>>
>>>> A device with two sensors input where just one node can be switched to
>>>> use either input is a typical case where S_INPUT needs to be provided.
>>>
>>> No. S_INPUT shouldn't be use to select between sensors. The hardware
>>> pipeline is more complex than just that. We can't make it all fit in the
>>> S_INPUT API.
>>>
>>> For instance, when switching between a b&w and a color sensor you will
>>> need to reconfigure the whole pipeline to select the right gamma table,
>>> white balance parameters, color conversion matrix, ... That's not
>>> something we want to hardcode in the kernel. This needs to be done from
>>> userspace.
>>
>> This is something that, if it is not written somehwere, no userspace
>> applications not developed by the hardware vendor will ever work.
>>
>> I don't see any code for that any at the kernel or at libv4l. Am I missing
>> something?
> 
> Code for that needs to be written in libv4l. It's not there yet as I don't 
> think we have any hardware for this particular example at the moment :-)
> 
As no pure V4L2 application would set the pipelines as you've described, and
no libv4l code exists yet, that means that either:
	1) pure V4L2 application support is broken; or
	2) it is possible to select between the main sensor and the
secondary sensor via S_INPUT on /dev/video1, and to control the sensor
via this node, in order to set brightness, contrast, aperture time and
exposure (see the sketch below). Performance will be sacrificed, as the
bayer->YUV conversion and 3A algorithms will be done unaccelerated in
libv4l. The picture will also not look as good, since no noise reduction,
etc. will be done, but the pure V4L2 application requirement is satisfied.
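
Just to illustrate what 2) means from the application side, a minimal
sketch (device path, input index and control are examples only; whether
the driver actually supports this is exactly the point under discussion):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Case 2) as seen by a pure V4L2 application: select the secondary
 * sensor with S_INPUT and set a sensor control through the very same
 * video node, without touching the MC or subdev APIs. */
static int select_secondary_sensor(void)
{
	int fd = open("/dev/video1", O_RDWR);
	int input = 1;		/* 0 = main sensor, 1 = secondary sensor */
	struct v4l2_control ctrl = {
		.id = V4L2_CID_BRIGHTNESS,
		.value = 128,
	};

	if (fd < 0)
		return -1;
	if (ioctl(fd, VIDIOC_S_INPUT, &input) < 0 ||
	    ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
		return -1;

	return fd;	/* ready for S_FMT, REQBUFS, STREAMON, ... */
}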

Note also that even a generic MC-aware application will not work, as
there's nothing in the MC pipeline information that shows how to properly
configure the pipelines to get the expected result. Such an application
needs to have an internal database that associates each pipeline seen
via the MC API with the processor type, and has to have the pipeline
policy configurations coded internally.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-16 22:36                         ` Mauro Carvalho Chehab
  2011-08-17  7:57                           ` Laurent Pinchart
@ 2011-08-17 12:33                           ` Sakari Ailus
  1 sibling, 0 replies; 44+ messages in thread
From: Sakari Ailus @ 2011-08-17 12:33 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sylwester Nawrocki, linux-media

Hi Mauro,

On Tue, Aug 16, 2011 at 03:36:44PM -0700, Mauro Carvalho Chehab wrote:
[clip]
> > For instance, when switching between a b&w and a color sensor you will need to 
> > reconfigure the whole pipeline to select the right gamma table, white balance 
> > parameters, color conversion matrix, ... That's not something we want to 
> > hardcode in the kernel. This needs to be done from userspace.
> 
> This is something that, if it is not written somehwere, no userspace
> applications not developed by the hardware vendor will ever work.
> 
> I don't see any code for that any at the kernel or at libv4l. Am I missing
> something?

There actually is. The plugin interface patches went into libv4l 0.9.0.
That's just an interface, though, and it doesn't yet have support for any
embedded devices.

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-17 12:25                             ` Mauro Carvalho Chehab
@ 2011-08-17 12:37                               ` Ivan T. Ivanov
  2011-08-17 13:27                                 ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Ivan T. Ivanov @ 2011-08-17 12:37 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, Sakari Ailus


Hi everybody, 

On Wed, 2011-08-17 at 05:25 -0700, Mauro Carvalho Chehab wrote:
> Em 17-08-2011 00:57, Laurent Pinchart escreveu:
> > Hi Mauro,
> > 
> > On Wednesday 17 August 2011 00:36:44 Mauro Carvalho Chehab wrote:
> >> Em 16-08-2011 08:44, Laurent Pinchart escreveu:
> >>> On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
> >>>> Em 16-08-2011 01:57, Laurent Pinchart escreveu:
> >>>>>>> My point is that the ISP driver developer can't know in advance which
> >>>>>>> sensor will be used systems that don't exist yet.
> >>>>>>
> >>>>>> As far as such hardware is projected, someone will know. It is a
> >>>>>> simple trivial patch to associate a new hardware with a hardware
> >>>>>> profile at the platform data.
> >>>>>
> >>>>> Platform data must contain hardware descriptions only, not policies.
> >>>>> This is even clearer now that ARM is moving away from board code to
> >>>>> the Device Tree.
> >>>>
> >>>> Again, a cell phone with one frontal camera and one hear camera has two
> >>>> sensor inputs only. This is not a "policy". It is a hardware constraint.
> >>>> The driver should allow setting the pipeline for both sensors via
> >>>> S_INPUT, otherwise a V4L2 only userspace application won't work.
> >>>>
> >>>> It is as simple as that.
> >>>
> >>> When capturing from the main sensor on the OMAP3 ISP you need to capture
> >>> raw data to memory on a video node, feed it back to the hardware through
> >>> another video node, and finally capture it on a third video node. A
> >>> V4L2-only userspace application won't work. That's how the hardware is,
> >>> we can't do much about that.
> >>
> >> The raw data conversion is one of the functions that libv4l should do. So,
> >> if you've already submitted the patches for libv4l to do the hardware
> >> loopback trick, or add a function there to convert the raw data into a
> >> common format, that should be ok. Otherwise, we have a problem, that needs
> >> to be fixed.
> > 
> > That's not what this is about. Bayer to YUV conversion needs to happen in 
> > hardware in this case for performance reasons. The hardware will also perform 
> > scaling on YUV, as well as many image processing tasks (white balance, defect 
> > pixel correction, gamma correction, noise filtering, ...).
> > 
> >>>>>> Also, on most cases, probing a sensor is as trivial as reading a
> >>>>>> sensor ID during device probe. This applies, for example, for all
> >>>>>> Omnivision sensors.
> >>>>>>
> >>>>>> We do things like that all the times for PC world, as nobody knows
> >>>>>> what webcam someone would plug on his PC.
> >>>>>
> >>>>> Sorry, but that's not related. You simply can't decide in an embedded
> >>>>> ISP driver how to deal with sensor controls, as the system will be
> >>>>> used in a too wide variety of applications and hardware
> >>>>> configurations. All controls need to be exposed, period.
> >>>>
> >>>> We're not talking about controls. We're talking about providing the
> >>>> needed V4L2 support to allow an userspace application to access the
> >>>> hardware sensor.
> >>>
> >>> OK, so we're discussing S_INPUT. Let's discuss controls later :-)
> >>>
> >>>>>>>> I never saw an embedded hardware that allows physically changing the
> >>>>>>>> sensor.
> >>>>>>>
> >>>>>>> Beagleboard + pluggable sensor board.
> >>>>>>
> >>>>>> Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
> >>>>>> aren't embeeded hardware. They're development kits.
> >>>>>
> >>>>> People create end-user products based on those kits. That make them
> >>>>> first- class embedded hardware like any other.
> >>>>
> >>>> No doubt they should be supported, but it doesn't make sense to create
> >>>> tons of input pipelines to be used for S_INPUT for each different type
> >>>> of possible sensor. Somehow, userspace needs to tell what's the sensor
> >>>> that he attached to the hardware, or the driver should suport
> >>>> auto-detecting it.
> >>>
> >>> We're not creating tons of input pipelines. Look at
> >>> http://www.ideasonboard.org/media/omap3isp.ps , every video node (in
> >>> yellow) has its purpose.
> >>
> >> Not sure if I it understood well. The subdevs 8-11 are the sensors, right?
> > 
> > et8ek8 and vs6555 are the sensors. ad5820 is the lens controller and adp1653 
> > the flash controller. All other subdevs (green blocks) are part of the ISP.
> > 
> >>>> In other words, I see 2 options for that:
> >>>> 	1) add hardware auto-detection at the sensor logic. At driver probe,
> >>>>
> >>>> try to probe all sensors, if it is a hardware development kit;
> >>>
> >>> We've worked quite hard to remove I2C device probing from the kernel,
> >>> let's not add it back.
> >>
> >> We do I2C probing on several drivers. It is there for devices where
> >> the cards entry is not enough to identify the hardware. For example,
> >> two different devices with the same USB ID generally uses that.
> >> If the hardware information is not enough, there's nothing wrong
> >> on doing that.
> > 
> > Except that probing might destroy the hardware in the general case. We can 
> > only probe I2C devices on a bus that we know will not contain any other 
> > sensitive devices.
> > 
> >>>> 	2) add one new parameter at the driver: "sensors". If the hardware
> >>>>
> >>>> is one of those kits, this parameter will allow the developer to specify
> >>>> the used sensors. It is the same logic as we do with userspace TV and
> >>>> grabber cards without eeprom or any other way to auto-detect the
> >>>> hardware.
> >>>
> >>> This will likely be done through the Device Tree.
> >>
> >> I don't mind much about the way it is implemented, but it should be there
> >> on a place where people can find it and use it.
> >>
> >> Devices that requires the user to pass some parameters to the Kernel in
> >> order for the driver to find the hardware are economic class devices. A
> >> first class device should work as-is, after the driver is loaded.
> > 
> > I don't think there has ever been any disagreement on this, we can consider 
> > the matter settled.
> > 
> >>>>>> I don't mind if, for those kits the developer that is playing with it
> >>>>>> has to pass a mode parameter and/or run some open harware-aware small
> >>>>>> application that makes the driver to select the sensor type he is
> >>>>>> using, but, if the hardware is, instead, a N9 or a Galaxy Tab (or
> >>>>>> whatever embedded hardware), the driver should expose just the sensors
> >>>>>> that exists on such hardware. It shouldn't be ever allowed to change
> >>>>>> it on userspace, using whatever API on those hardware.
> >>>>>
> >>>>> Who talked about changing sensors from userspace on those systems ?
> >>>>> Platform data (regardless of whether it comes from board code, device
> >>>>> tree, or something else) will contain a hardware description, and the
> >>>>> kernel will create the right devices and load the right drivers. The
> >>>>> issue we're discussing is how to expose controls for those devices to
> >>>>> userspace, and that needs to be done through subdev nodes.
> >>>>
> >>>> The issue that is under discussion is the removal of S_INPUT from the
> >>>> samsung driver, and the comments at the patches that talks about
> >>>> removing V4L2 API support in favor of using a MC-only API for some
> >>>> fundamental things.
> >>>>
> >>>> For example, with a patch like this one, only one sensor will be
> >>>> supported without the MC API (either the front or back sensor on a
> >>>> multi-sensor camera):
> >>>> http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a
> >>>> 32 2a241927f9238b8ab1441389c9c41
> >>>>
> >>>> Look at the comment on this patch also:
> >>>> 	http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c
> >>>> 	38b
> >>>>
> >>>> e60a45d16a29a9e56c886ee0aa08c
> >>>>
> >>>> What is called "API compatibility mode" is not clear, but it transmitted
> >>>> me that the idea is to expose the controls only via subnodes.
> >>>>
> >>>> Those are the rationale for those discussions: V4L2 API is not being
> >>>> deprecated in favor of MC API, e. g. controls shouldn't be hidden from
> >>>> the V4L2 API without a good reason.
> >>>
> >>> Controls need to move to subdev nodes for embedded devices because
> >>> there's simply no way to expose multiple identical controls through a
> >>> video node. Please also have a look at the diagram I linked to above,
> >>> and tell me though which video node sensor controls should be exposed.
> >>> There's no simple answer to that.
> >>
> >> Again, not sure if I understood well your diagram. What device will be
> >> controlling and receiving the video streaming from the sensor? /dev/video0?
> >>
> >> If so, this is the one that should be exposing the controls for the
> >> selected input sensor.
> > 
> > When capturing data from the main sensor (et8ek8), a common pipeline is to 
> > capture data from /dev/video1, feed it back through /dev/video0 and capture 
> > the final result from /dev/video6. Another common alternative is to capture it 
> > from /dev/video2, feed it back to /dev/video3 and capture the final result 
> > from /dev/video6. Regardless of which above configuration is used, 
> > applications can also need to capture images on /dev/video4 and /dev/video6 
> > concurrently. This all depends on what kind of image the application wants and 
> > what hardware processing it wants to apply.
> > 
> > This isn't even a complex case, there are much more complex hardware out there 
> > that we want to support.
> > 
> >>>>>>>>> Even if you did, fine image quality tuning requires accessing
> >>>>>>>>> pretty much all controls individually anyway.
> >>>>>>>>
> >>>>>>>> The same is also true for non-embedded hardware. The only situation
> >>>>>>>> where V4L2 API is not enough is when there are two controls of the
> >>>>>>>> same type active. For example, 2 active volume controls, one at the
> >>>>>>>> audio demod, and another at the bridge. There may have some cases
> >>>>>>>> where you can do the same thing at the sensor or at a DSP block.
> >>>>>>>> This is where MC API gives an improvement, by allowing changing
> >>>>>>>> both, instead of just one of the controls.
> >>>>>>>
> >>>>>>> To be precise it's the V4L2 subdev userspace API that allows that,
> >>>>>>> not the MC API.
> >>>>>>>
> >>>>>>>>>>> This is a hack...sorry, just joking ;-) Seriously, I think the
> >>>>>>>>>>> situation with the userspace subdevs is a bit different. Because
> >>>>>>>>>>> with one API we directly expose some functionality for
> >>>>>>>>>>> applications, with other we code it in the kernel, to make the
> >>>>>>>>>>> devices appear uniform at user space.
> >>>>>>>>>>
> >>>>>>>>>> Not sure if I understood you. V4L2 export drivers functionality to
> >>>>>>>>>> userspace in an uniform way. MC api is for special applications
> >>>>>>>>>> that might need to access some internal functions on embedded
> >>>>>>>>>> devices.
> >>>>>>>>>>
> >>>>>>>>>> Of course, there are some cases where it doesn't make sense to
> >>>>>>>>>> export a subdev control via V4L2 API.
> >>>>>>>>>>
> >>>>>>>>>>>>> Also, the sensor subdev can be configured in the video node
> >>>>>>>>>>>>> driver as well as through the subdev device node. Both APIs can
> >>>>>>>>>>>>> do the same thing but in order to let the subdev API work as
> >>>>>>>>>>>>> expected the video node driver must be forbidden to configure
> >>>>>>>>>>>>> the subdev.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Why? For the sensor, a V4L2 API call will look just like a
> >>>>>>>>>>>> bridge driver call. The subdev will need a mutex anyway, as two
> >>>>>>>>>>>> MC applications may be opening it simultaneously. I can't see
> >>>>>>>>>>>> why it should forbid changing the control from the bridge
> >>>>>>>>>>>> driver call.
> >>>>>>>>>>>
> >>>>>>>>>>> Please do not forget there might be more than one subdev to
> >>>>>>>>>>> configure and that the bridge itself is also a subdev (which
> >>>>>>>>>>> exposes a scaler interface, for instance). A situation pretty
> >>>>>>>>>>> much like in Figure 4.4 [1] (after the scaler there is also a
> >>>>>>>>>>> video node to configure, but we may assume that pixel resolution
> >>>>>>>>>>> at the scaler pad 1 is same as at the video node). Assuming the
> >>>>>>>>>>> format and crop configuration flow is from sensor to host scaler
> >>>>>>>>>>> direction, if we have tried to configure _all_ subdevs when the
> >>>>>>>>>>> last stage of the pipeline is configured (i.e. video node) the
> >>>>>>>>>>> whole scaler and crop/composition
> >>>>>>>>>>> configuration we have been destroyed at that time. And there is
> >>>>>>>>>>> more to configure than
> >>>>>>>>>>> VIDIOC_S_FMT can do.
> >>>>>>>>>>
> >>>>>>>>>> Think from users perspective: all user wants is to see a video of
> >>>>>>>>>> a given resolution. S_FMT (and a few other VIDIOC_* calls) have
> >>>>>>>>>> everything that the user wants: the desired resolution, framerate
> >>>>>>>>>> and format.
> >>>>>>>>>>
> >>>>>>>>>> Specialized applications indeed need more, in order to get the
> >>>>>>>>>> best images for certain types of usages. So, MC is there.
> >>>>>>>>>>
> >>>>>>>>>> Such applications will probably need to know exactly what's the
> >>>>>>>>>> sensor, what are their bugs, how it is connected, what are the DSP
> >>>>>>>>>> blocks in the patch, how the DSP algorithms are implemented, etc,
> >>>>>>>>>> in order to obtain the the perfect image.
> >>>>>>>>>>
> >>>>>>>>>> Even on embedded devices like smartphones and tablets, I predict
> >>>>>>>>>> that both types of applications will be developed and used: people
> >>>>>>>>>> may use a generic application like flash player, and an
> >>>>>>>>>> specialized application provided by the manufacturer. Users can
> >>>>>>>>>> even develop their own applications generic apps using V4L2
> >>>>>>>>>> directly, at the devices that allow that.
> >>>>>>>>>>
> >>>>>>>>>> As I said before: both application types are welcome. We just need
> >>>>>>>>>> to warrant that a pure V4L application will work reasonably well.
> >>>>>>>>>
> >>>>>>>>> That's why we have libv4l. The driver simply doesn't receive enough
> >>>>>>>>> information to configure the hardware correctly from the VIDIOC_*
> >>>>>>>>> calls. And as mentioned above, 3A algorithms, required by "simple"
> >>>>>>>>> V4L2 applications, need to be implemented in userspace anyway.
> >>>>>>>>
> >>>>>>>> It is OK to improve users experience via libv4l. What I'm saying is
> >>>>>>>> that it is NOT OK to remove V4L2 API support from the driver,
> >>>>>>>> forcing users to use some hardware plugin at libv4l.
> >>>>>>>
> >>>>>>> Let me be clear on this. I'm *NOT* advocating removing V4L2 API
> >>>>>>> support from any driver (well, on the drivers I can currently think
> >>>>>>> of, if you show me a wifi driver that implements a V4L2 interface I
> >>>>>>> might change my mind :-)).
> >>>>>>
> >>>>>> This thread is all about a patch series partially removing V4L2 API
> >>>>>> support.
> >>>>>
> >>>>> Because that specific part of the API doesn't make sense for this use
> >>>>> case. You wouldn't object to removing S_INPUT support from a video
> >>>>> output driver, as it wouldn't make sense either.
> >>>>
> >>>> A device with two sensors input where just one node can be switched to
> >>>> use either input is a typical case where S_INPUT needs to be provided.
> >>>
> >>> No. S_INPUT shouldn't be use to select between sensors. The hardware
> >>> pipeline is more complex than just that. We can't make it all fit in the
> >>> S_INPUT API.
> >>>
> >>> For instance, when switching between a b&w and a color sensor you will
> >>> need to reconfigure the whole pipeline to select the right gamma table,
> >>> white balance parameters, color conversion matrix, ... That's not
> >>> something we want to hardcode in the kernel. This needs to be done from
> >>> userspace.
> >>
> >> This is something that, if it is not written somehwere, no userspace
> >> applications not developed by the hardware vendor will ever work.
> >>
> >> I don't see any code for that any at the kernel or at libv4l. Am I missing
> >> something?
> > 
> > Code for that needs to be written in libv4l. It's not there yet as I don't 
> > think we have any hardware for this particular example at the moment :-)
> > 
> As no pure V4L2 application would set the pipelines as you've said, and
> no libv4l code exists yet, 


Actually there is such code for the OMAP3 ISP driver. Plug-in support in
libv4l has been extended a little bit [1], and a plug-in which handles
requests for "regular" video device nodes /dev/video0 and /dev/video1
and translates them to the MC and sub-device APIs has been posted here [2],
but it is still not merged.

Regards, 
iivanov

[1] http://www.spinics.net/lists/linux-media/msg35570.html
[2] http://www.spinics.net/lists/linux-media/msg32539.html




> that means that either:
> 	1) pure V4L application support is broken;
> 	2) it is possible to select between the main sensor and the
> secondary sensor via S_INPUT on /dev/video1, and to control the sensor
> via this node, in order to set bright, contrast, apperture time and
> exposition. Performance will be sacrificed, as the bayer->YUV conversion
> and 3A algorithms will be done unaccelerated in libv4l. The picture will
> also not look fine, as no noise reduction, etc will be done, but support
> for a pure V4L application requirement is satisfied.
> 
> Note also that even a generic MC-aware application will not work, as
> there's nothing at the MC pipeline information that shows how to proper
> configure the pipelines to get the expected result. Such application
> needs to have an internal database that associates each pipeline seen
> via the MC API to the processor type, and have the pipeline policy
> configurations coded internally.
> 
> Regards,
> Mauro
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-17 12:37                               ` Ivan T. Ivanov
@ 2011-08-17 13:27                                 ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-17 13:27 UTC (permalink / raw)
  To: Ivan T. Ivanov
  Cc: Laurent Pinchart, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, Sakari Ailus, Hans de Goede

Em 17-08-2011 05:37, Ivan T. Ivanov escreveu:
> 
> Hi everybody, 
> 
> On Wed, 2011-08-17 at 05:25 -0700, Mauro Carvalho Chehab wrote:
>> Em 17-08-2011 00:57, Laurent Pinchart escreveu:
>>> Hi Mauro,
>>>
>>> On Wednesday 17 August 2011 00:36:44 Mauro Carvalho Chehab wrote:
>>>> Em 16-08-2011 08:44, Laurent Pinchart escreveu:
>>>>> On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
>>>>>> Em 16-08-2011 01:57, Laurent Pinchart escreveu:
>>>>> No. S_INPUT shouldn't be use to select between sensors. The hardware
>>>>> pipeline is more complex than just that. We can't make it all fit in the
>>>>> S_INPUT API.
>>>>>
>>>>> For instance, when switching between a b&w and a color sensor you will
>>>>> need to reconfigure the whole pipeline to select the right gamma table,
>>>>> white balance parameters, color conversion matrix, ... That's not
>>>>> something we want to hardcode in the kernel. This needs to be done from
>>>>> userspace.
>>>>
>>>> This is something that, if it is not written somehwere, no userspace
>>>> applications not developed by the hardware vendor will ever work.
>>>>
>>>> I don't see any code for that any at the kernel or at libv4l. Am I missing
>>>> something?
>>>
>>> Code for that needs to be written in libv4l. It's not there yet as I don't 
>>> think we have any hardware for this particular example at the moment :-)
>>>
>> As no pure V4L2 application would set the pipelines as you've said, and
>> no libv4l code exists yet, 
> 
> 
> Actually there is such code for OMAP3 ISP driver. Plug-in support in
> libv4l have been extended a little bit [1] and plugin-in which handle
> request for "regular" video device nodes /dev/video0 and /dev/video1
> and translate them to MC and sub-device API have been posted here [2],
> but is still not merged.
> 
> Regards, 
> iivanov
> 
> [1] http://www.spinics.net/lists/linux-media/msg35570.html
> [2] http://www.spinics.net/lists/linux-media/msg32539.html

Ah, ok. So, it is just the usual delay of having some features merged.

Hans G.,

FYI. Please review the OMAP3 MC-aware patches for libv4l when you have
some time for that.

Thanks!
Mauro



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-17  6:13                         ` Embedded device and the V4L2 API support - Was: " Mauro Carvalho Chehab
@ 2011-08-20 11:27                           ` Sylwester Nawrocki
  2011-08-20 12:12                             ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Sylwester Nawrocki @ 2011-08-20 11:27 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Laurent Pinchart, Sylwester Nawrocki, linux-media, Sakari Ailus

Hi Mauro,

On 08/17/2011 08:13 AM, Mauro Carvalho Chehab wrote:
> It seems that there are too many miss understandings or maybe we're just
> talking the same thing on different ways.
> 
> So, instead of answering again, let's re-start this discussion on a
> different way.
> 
> One of the requirements that it was discussed a lot on both mailing
> lists and on the Media Controllers meetings that we had (or, at least
> in the ones where I've participated) is that:
> 
> 	"A pure V4L2 userspace application, knowing about video device
> 	 nodes only, can still use the driver. Not all advanced features
> 	 will be available."

What does the term "a pure V4L2 userspace application" mean here?
Does it also cover an application which is linked against libv4l2 and uses
calls specific to particular hardware which are included there?

> 
> This is easily said than done. Also, different understandings can be
> obtained by a simple phrase like that.

--
Regards,
Sylwester

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-20 11:27                           ` Sylwester Nawrocki
@ 2011-08-20 12:12                             ` Mauro Carvalho Chehab
  2011-08-24 22:29                               ` Sakari Ailus
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-20 12:12 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, Sylwester Nawrocki, linux-media, Sakari Ailus,
	Hans de Goede

Em 20-08-2011 04:27, Sylwester Nawrocki escreveu:
> Hi Mauro,
> 
> On 08/17/2011 08:13 AM, Mauro Carvalho Chehab wrote:
>> It seems that there are too many miss understandings or maybe we're just
>> talking the same thing on different ways.
>>
>> So, instead of answering again, let's re-start this discussion on a
>> different way.
>>
>> One of the requirements that it was discussed a lot on both mailing
>> lists and on the Media Controllers meetings that we had (or, at least
>> in the ones where I've participated) is that:
>>
>> 	"A pure V4L2 userspace application, knowing about video device
>> 	 nodes only, can still use the driver. Not all advanced features
>> 	 will be available."
> 
> What does a term "a pure V4L2 userspace application" mean here ?

The above quotation is exactly Laurent's words, which I took from one
of his replies.

> Does it also account an application which is linked to libv4l2 and uses
> calls specific to a particular hardware which are included there?

That's a good question. We need to properly define what it means, to avoid
abuses of libv4l.

In other words, it seems ok to use libv4l to set pipelines via the MC API
at open() (a rough sketch follows below), but it isn't ok to have a binary-only
libv4l plugin that will hook open() and do the complete device initialization
in userspace (I remember that one vendor once proposed a driver like that).
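
For illustration, the kind of MC call that seems fine at open() time is
roughly the following; the media node path and entity/pad numbers are
examples only, a real plugin would discover them with
MEDIA_IOC_ENUM_ENTITIES / MEDIA_IOC_ENUM_LINKS:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enable the sensor -> bridge link on the media device so that the
 * video node captures from the wanted sensor. */
static int enable_sensor_link(__u32 sensor_ent, __u32 bridge_ent)
{
	struct media_link_desc link = {
		.source = { .entity = sensor_ent, .index = 0 },
		.sink   = { .entity = bridge_ent, .index = 0 },
		.flags  = MEDIA_LNK_FL_ENABLED,
	};
	int fd = open("/dev/media0", O_RDWR);
	int ret;

	if (fd < 0)
		return -1;
	ret = ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
	close(fd);
	return ret;
}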

Also, from my side, I'd like to see the libv4l and kernel driver changes
being submitted together, if the new driver depends on special libv4l support
in order to work.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-20 12:12                             ` Mauro Carvalho Chehab
@ 2011-08-24 22:29                               ` Sakari Ailus
  2011-08-25 12:43                                 ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Sakari Ailus @ 2011-08-24 22:29 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Laurent Pinchart, Sylwester Nawrocki,
	linux-media, Hans de Goede

Hi Mauro,

On Sat, Aug 20, 2011 at 05:12:56AM -0700, Mauro Carvalho Chehab wrote:
> Em 20-08-2011 04:27, Sylwester Nawrocki escreveu:
> > Hi Mauro,
> > 
> > On 08/17/2011 08:13 AM, Mauro Carvalho Chehab wrote:
> >> It seems that there are too many miss understandings or maybe we're just
> >> talking the same thing on different ways.
> >>
> >> So, instead of answering again, let's re-start this discussion on a
> >> different way.
> >>
> >> One of the requirements that it was discussed a lot on both mailing
> >> lists and on the Media Controllers meetings that we had (or, at least
> >> in the ones where I've participated) is that:
> >>
> >> 	"A pure V4L2 userspace application, knowing about video device
> >> 	 nodes only, can still use the driver. Not all advanced features
> >> 	 will be available."
> > 
> > What does a term "a pure V4L2 userspace application" mean here ?
> 
> The above quotation are exactly the Laurent's words that I took from one 
> of his replies.

I would define this as an application which uses V4L2 but does not use the
Media controller or V4L2 subdev interfaces, nor is aware of any particular
hardware device.

> > Does it also account an application which is linked to libv4l2 and uses
> > calls specific to a particular hardware which are included there?
> 
> That's a good question. We need to properly define what it means, to avoid
> having libv4l abuses.
> 
> In other words, it seems ok to use libv4l to set pipelines via the MC API
> at open(), but it isn't ok to have an open() binary only libv4l plugin that
> will hook open and do the complete device initialization on userspace
> (I remember that one vendor once proposed a driver like that).
> 
> Also, from my side, I'd like to see both libv4l and kernel drivers being
> submitted together, if the new driver depends on a special libv4l support
> for it to work.

I agree with the above.

I do favour using libv4l to do the pipeline setup using the MC and V4L2 subdev
interfaces. This has the benefit that the driver provides just one interface
to access each of its different aspects, be it pipeline setup (Media controller),
a control to change exposure time (V4L2 subdev) or queueing video buffers
(V4L2). This means simpler and more maintainable drivers and fewer bugs
in general.

Apart from what the drivers already provide on video nodes, to support a
general purpose V4L2 application, what a libv4l plugin can do is (non-exhaustive
list):

- Perform pipeline setup using MC interface, possibly based on input
  selected using S_INPUT so that e.g. multiple sensors can be supported and
- implement {S,G,TRY}_EXT_CTRLS and QUERYCTRL using V4L2 subdev nodes as
  backend.

As the Media controller and V4L2 interfaces are standardised, I see no
reason why this plugin could not be fully generic: only the configuration is
device specific.
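
To illustrate the second point, the plugin could simply open the right
subdev node and forward the control ioctls to it; which node to use is
device specific and would come from the configuration discussed below.
A rough sketch:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Forward a control-related ioctl received on the video node to the
 * sensor subdev node (e.g. /dev/v4l-subdev8 in the OMAP3 ISP example). */
static int forward_ctrl_ioctl(const char *subdev_node,
			      unsigned long cmd, void *arg)
{
	int fd = open(subdev_node, O_RDWR);
	int ret;

	if (fd < 0)
		return -1;
	/* cmd is e.g. VIDIOC_QUERYCTRL or VIDIOC_{G,S,TRY}_EXT_CTRLS */
	ret = ioctl(fd, cmd, arg);
	close(fd);
	return ret;
}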

This configuration could be stored in a configuration file which is
selected based on the system type. On embedded systems (ARM at least, but
the vast majority is ARM based anyway) the board type is easily available
to user space applications in /proc/cpuinfo --- this example is from
the Nokia N900:

---
Processor       : ARMv7 Processor rev 3 (v7l)
BogoMIPS        : 249.96
Features        : swp half fastmult vfp edsp neon vfpv3 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x1
CPU part        : 0xc08
CPU revision    : 3

Hardware        : Nokia RX-51 board
Revision        : 2101
Serial          : 0000000000000000
---

I think this would be a first step towards supporting general purpose
applications on embedded systems.
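
For illustration, a plugin could derive the configuration file name from
that "Hardware" string; the /etc/libv4l path and the naming scheme below
are only assumptions:

#include <stdio.h>
#include <string.h>

/* Build a per-board configuration file path from the "Hardware" line
 * in /proc/cpuinfo. */
static int board_config_path(char *path, size_t len)
{
	char line[256];
	FILE *f = fopen("/proc/cpuinfo", "r");
	int ret = -1;

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		char *name = NULL;

		if (strncmp(line, "Hardware", 8) == 0)
			name = strchr(line, ':');
		if (!name)
			continue;

		name++;
		while (*name == ' ' || *name == '\t')
			name++;
		name[strcspn(name, "\n")] = '\0';

		snprintf(path, len, "/etc/libv4l/%s.conf", name);
		ret = 0;
		break;
	}

	fclose(f);
	return ret;
}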

The question I still have on this is how the user should know which
video node to access on an embedded system with a camera: the OMAP 3 ISP,
for example, contains some eight video nodes which have different ISP blocks
connected to them. Likely only two of these nodes are useful for a general
purpose application, based on which image format it requests. It would make
sense to provide generic applications with information only on those devices
they may meaningfully use.

Later on, more functionality could be added to better support hardware which
supports e.g. different pixel formats, image sizes, scaling and crop. I'm
not entirely certain whether all of this is fully generic --- but I think the
vast majority of it is --- at least converting from a v4l2_mbus_framefmt
pixelcode to v4l2_format.fmt is often quite hardware specific, which must be
taken into account by the generic plugin.
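
To give an idea of that last point, a generic plugin would at least need
a table mapping media bus codes to pixel formats, and some entries of such
a table may well be device dependent. A trimmed-down sketch (enum/macro
names as found in the current 3.x kernel headers):

#include <linux/videodev2.h>
#include <linux/v4l2-mediabus.h>

/* Media bus code -> pixel format mapping used when translating subdev
 * formats to v4l2_format; only a couple of example entries are shown. */
static const struct {
	__u32 mbus_code;	/* v4l2_mbus_framefmt.code */
	__u32 pixelformat;	/* v4l2_pix_format.pixelformat */
} plugin_fmt_map[] = {
	{ V4L2_MBUS_FMT_YUYV8_2X8,    V4L2_PIX_FMT_YUYV },
	{ V4L2_MBUS_FMT_SGRBG10_1X10, V4L2_PIX_FMT_SGRBG10 },
};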

At that point many policy decisions must be made on how to use the hardware
the best way, and that must be also present in the configuration file.

But perhaps I'm going too far with this now; we don't yet have a generic
plugin providing basic functionality. We have the OMAP 3 ISP libv4l plugin
which might be a good starting point for this work.

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-24 22:29                               ` Sakari Ailus
@ 2011-08-25 12:43                                 ` Mauro Carvalho Chehab
  2011-08-26 13:45                                   ` Laurent Pinchart
  2011-08-30 20:34                                   ` Sakari Ailus
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-25 12:43 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Sylwester Nawrocki, Laurent Pinchart, Sylwester Nawrocki,
	linux-media, Hans de Goede

Em 24-08-2011 19:29, Sakari Ailus escreveu:
> Hi Mauro,
> 
> On Sat, Aug 20, 2011 at 05:12:56AM -0700, Mauro Carvalho Chehab wrote:
>> Em 20-08-2011 04:27, Sylwester Nawrocki escreveu:
>>> Hi Mauro,
>>>
>>> On 08/17/2011 08:13 AM, Mauro Carvalho Chehab wrote:
>>>> It seems that there are too many miss understandings or maybe we're just
>>>> talking the same thing on different ways.
>>>>
>>>> So, instead of answering again, let's re-start this discussion on a
>>>> different way.
>>>>
>>>> One of the requirements that it was discussed a lot on both mailing
>>>> lists and on the Media Controllers meetings that we had (or, at least
>>>> in the ones where I've participated) is that:
>>>>
>>>> 	"A pure V4L2 userspace application, knowing about video device
>>>> 	 nodes only, can still use the driver. Not all advanced features
>>>> 	 will be available."
>>>
>>> What does a term "a pure V4L2 userspace application" mean here ?
>>
>> The above quotation are exactly the Laurent's words that I took from one 
>> of his replies.
> 
> I would define this as an application which uses V4L2 but does not use Media
> controller or the V4L2 subdev interfaces nor is aware of any particular
> hardware device.

As a general rule, applications should not be aware of any particular hardware.
That's why we provide standard ways of doing things. Experience shows that
hardware-aware applications become obsolete very fast. If you search the net,
you'll find hardware-specific tools for bttv, zoran, etc. I doubt that they
would still work today, as they depended on some particular special-case
driver behavior that has changed over time. Also, if you look at the update
timeline for those tools, you'll see that they were kept maintained only for
a short period of time.

So, I think that all hardware-aware dependencies should live in the driver and
in libv4l only. As the proper usage of the MC API requires hardware-aware
knowledge, I don't think that a generic application should bother
to implement the MC API at all.

It should be noted, however, that having a hardware/driver-aware libv4l
also implies that libv4l becomes dependent on a specific kernel version
in the distributions.

This already happens to some extent, but, currently, a new version of libv4l
can work with an older kernel (as all we currently have there is support for
new FOURCC formats and quirk lists of sensors mounted upside down).

So, a version made to work with kernel 3.0 will for sure support all webcams
found in kernel 2.6.39.

>>> Does it also account an application which is linked to libv4l2 and uses
>>> calls specific to a particular hardware which are included there?
>>
>> That's a good question. We need to properly define what it means, to avoid
>> having libv4l abuses.
>>
>> In other words, it seems ok to use libv4l to set pipelines via the MC API
>> at open(), but it isn't ok to have an open() binary only libv4l plugin that
>> will hook open and do the complete device initialization on userspace
>> (I remember that one vendor once proposed a driver like that).
>>
>> Also, from my side, I'd like to see both libv4l and kernel drivers being
>> submitted together, if the new driver depends on a special libv4l support
>> for it to work.
> 
> I agree with the above.
> 
> I do favour using libv4l to do the pipeline setup using MC and V4L2 subdev
> interfaces. This has the benefit that the driver provides just one interface
> to access different aspects of it, be it pipeline setup (Media controller),
> a control to change exposure time (V4L2 subdev) or queueing video buffer
> (V4L2). This means more simple and more maintainable drivers and less bugs
> in general.

I agree.

> Apart from what the drivers already provide on video nodea, to support a
> general purpose V4L2 application, libv4l plugin can do is (not exhaustive
> list):

IMO, we need to write a full list of what should be done in libv4l, as it seems
that there are different opinions about what should be in the driver, and what
should be outside it.
 
> - Perform pipeline setup using MC interface, possibly based on input
>   selected using S_INPUT so that e.g. multiple sensors can be supported and
> - implement {S,G,TRY}_EXT_CTRLS and QUERYCTRL using V4L2 subdev nodes as
>   backend.
> 
> As the Media controller and V4L2 interfaces are standardised, I see no
> reason why this plugin could not be fully generic: only the configuration is
> device specific.

I don't think you can do everything that is needed in a fully generic plugin.
3A algorithm implementations, for example, seem to be device specific.

Of course, the more we can cover with a generic implementation that fits
most cases, the better.

> This configuration could be stored into a configuration file which is
> selected based on the system type. On embedded systems (ARMs at least, but
> anyway the vast majority is based on ARM) the board type is easily available
> for the user space applications in /proc/cpuinfo --- this example is from
> the Nokia N900:
> 
> ---
> Processor       : ARMv7 Processor rev 3 (v7l)
> BogoMIPS        : 249.96
> Features        : swp half fastmult vfp edsp neon vfpv3 
> CPU implementer : 0x41
> CPU architecture: 7
> CPU variant     : 0x1
> CPU part        : 0xc08
> CPU revision    : 3
> 
> Hardware        : Nokia RX-51 board
> Revision        : 2101
> Serial          : 0000000000000000
> ---
> 
> I think this would be a first step to support general purpose application on
> embedded systems.

Agreed.
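
A minimal, untested sketch of that board detection (the /etc/libv4l/<board>.conf
naming scheme below is just an example):

/* Untested sketch: derive a board-specific configuration file name from
 * the "Hardware" line in /proc/cpuinfo. The /etc/libv4l/<board>.conf
 * naming is invented for illustration. */
#include <stdio.h>

static int board_config_path(char *path, size_t len)
{
	char line[256], board[128] = "generic";
	FILE *f = fopen("/proc/cpuinfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "Hardware : %127[^\n]", board) == 1)
			break;
	fclose(f);
	snprintf(path, len, "/etc/libv4l/%s.conf", board);
	return 0;
}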

> The question I still have on this is that how should the user know which
> video node to access on an embedded system with a camera: the OMAP 3 ISP,
> for example, contains some eight video nodes which have different ISP blocks
> connected to them. Likely two of these nodes are useful for a general
> purpose application based on which image format it requests. It would make
> sense to provide generic applications information only on those devices they
> may meaningfully use.

IMO, we should create a namespace device mapping for video devices. What I mean
is that we should keep the "raw" V4L2 devices as:
	/dev/video??
But also recommend the creation of a new userspace map, like:
	/dev/webcam??
	/dev/tv??
	...
which is an alias for the actual device.

Something similar to dvd/cdrom aliases that already happen on most distros:

lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
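
Fwiw, such aliases could already be created today with a local udev rule;
the name attribute patterns below are only examples and would have to be
adapted per driver:

# example only -- the name attribute patterns depend on the driver
SUBSYSTEM=="video4linux", ATTR{name}=="UVC Camera*", SYMLINK+="webcam%n"
SUBSYSTEM=="video4linux", ATTR{name}=="*encoder MPG*", SYMLINK+="tv%n"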
 
> Later on, more functionality could be added to better support hardware which
> supports e.g. different pixel formats, image sizes, scaling and crop. I'm
> not entirely certain if all of this is fully generic --- but I think the
> vast majority of it is --- at least converting from v4l2_mbus_framefmt
> pixelcode to v4l2_format.fmt is often quite hardware specific which must be
> taken into account by the generic plugin.
> 
> At that point many policy decisions must be made on how to use the hardware
> the best way, and that must be also present in the configuration file.
> 
> But perhaps I'm going too far with this now; we don't yet have a generic
> plugin providing basic functionality. We have the OMAP 3 ISP libv4l plugin
> which might be a good starting point for this work.
> 

We can start with that plugin, making it more generic, or fork it into two
plugins: a generic one and an OMAP3-specific implementation for the things
that are hardware-specific.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-25 12:43                                 ` Mauro Carvalho Chehab
@ 2011-08-26 13:45                                   ` Laurent Pinchart
  2011-08-26 14:16                                     ` Hans Verkuil
  2011-08-30 20:34                                   ` Sakari Ailus
  1 sibling, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-26 13:45 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Sylwester Nawrocki, Sylwester Nawrocki,
	linux-media, Hans de Goede

Hi Mauro,

On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
> Em 24-08-2011 19:29, Sakari Ailus escreveu:

[snip]

> > The question I still have on this is that how should the user know which
> > video node to access on an embedded system with a camera: the OMAP 3 ISP,
> > for example, contains some eight video nodes which have different ISP
> > blocks connected to them. Likely two of these nodes are useful for a
> > general purpose application based on which image format it requests. It
> > would make sense to provide generic applications information only on
> > those devices they may meaningfully use.
> 
> IMO, we should create a namespace device mapping for video devices. What I
> mean is that we should keep the "raw" V4L2 devices as:
> 	/dev/video??
> But also recommend the creation of a new userspace map, like:
> 	/dev/webcam??
> 	/dev/tv??
> 	...
> with is an alias for the actual device.
> 
> Something similar to dvd/cdrom aliases that already happen on most distros:
> 
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0

I've been toying with a similar idea. libv4l currently wraps /dev/video* 
device nodes and assumes a 1:1 relationship between a video device node and a 
video device. Should this assumption be somehow removed, replaced by a video 
device concept that wouldn't be tied to a single video device node ?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-26 13:45                                   ` Laurent Pinchart
@ 2011-08-26 14:16                                     ` Hans Verkuil
  2011-08-26 15:09                                       ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Hans Verkuil @ 2011-08-26 14:16 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
> Hi Mauro,
> 
> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
> > Em 24-08-2011 19:29, Sakari Ailus escreveu:
> 
> [snip]
> 
> > > The question I still have on this is that how should the user know which
> > > video node to access on an embedded system with a camera: the OMAP 3 ISP,
> > > for example, contains some eight video nodes which have different ISP
> > > blocks connected to them. Likely two of these nodes are useful for a
> > > general purpose application based on which image format it requests. It
> > > would make sense to provide generic applications information only on
> > > those devices they may meaningfully use.
> > 
> > IMO, we should create a namespace device mapping for video devices. What I
> > mean is that we should keep the "raw" V4L2 devices as:
> > 	/dev/video??
> > But also recommend the creation of a new userspace map, like:
> > 	/dev/webcam??
> > 	/dev/tv??
> > 	...
> > with is an alias for the actual device.
> > 
> > Something similar to dvd/cdrom aliases that already happen on most distros:
> > 
> > lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
> > lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
> > lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
> > lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
> 
> I've been toying with a similar idea. libv4l currently wraps /dev/video* 
> device nodes and assumes a 1:1 relationship between a video device node and a 
> video device. Should this assumption be somehow removed, replaced by a video 
> device concept that wouldn't be tied to a single video device node ?

Just as background information: the original idea was always that all v4l
drivers would have a MC and that libv4l would use the information contained
there as a helper (such as deciding which nodes would be the 'default' nodes
for generic applications).

Since there is only one MC device node for each piece of video hardware that
would make it much easier to discover what hardware there is and what video
nodes to use.

I always liked that idea, although I know Mauro is opposed to having a MC
for all v4l drivers.

While I am not opposed to creating such userspace maps I also think it is
a bit of a poor-man's solution. In particular I am worried that we get a
lot of those mappings (just think of ivtv with its 8 or 9 devices).

I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out, mem2mem.

But a 'tv' node might also be able to handle compressed video (depending
on how the hardware is organized), so how do you handle that? It can all
be solved, I'm sure, but I'm not sure if such userspace mappings will scale
that well with the increasing hardware complexity.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-26 14:16                                     ` Hans Verkuil
@ 2011-08-26 15:09                                       ` Mauro Carvalho Chehab
  2011-08-26 15:29                                         ` Hans Verkuil
  2011-08-29  9:12                                         ` Laurent Pinchart
  0 siblings, 2 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-26 15:09 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

Em 26-08-2011 11:16, Hans Verkuil escreveu:
> On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
>> Hi Mauro,
>>
>> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
>>> Em 24-08-2011 19:29, Sakari Ailus escreveu:
>>
>> [snip]
>>
>>>> The question I still have on this is that how should the user know which
>>>> video node to access on an embedded system with a camera: the OMAP 3 ISP,
>>>> for example, contains some eight video nodes which have different ISP
>>>> blocks connected to them. Likely two of these nodes are useful for a
>>>> general purpose application based on which image format it requests. It
>>>> would make sense to provide generic applications information only on
>>>> those devices they may meaningfully use.
>>>
>>> IMO, we should create a namespace device mapping for video devices. What I
>>> mean is that we should keep the "raw" V4L2 devices as:
>>> 	/dev/video??
>>> But also recommend the creation of a new userspace map, like:
>>> 	/dev/webcam??
>>> 	/dev/tv??
>>> 	...
>>> with is an alias for the actual device.
>>>
>>> Something similar to dvd/cdrom aliases that already happen on most distros:
>>>
>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
>>
>> I've been toying with a similar idea. libv4l currently wraps /dev/video* 
>> device nodes and assumes a 1:1 relationship between a video device node and a 
>> video device. Should this assumption be somehow removed, replaced by a video 
>> device concept that wouldn't be tied to a single video device node ?
> 
> Just as background information: the original idea was always that all v4l
> drivers would have a MC and that libv4l would use the information contained
> there as a helper (such as deciding which nodes would be the 'default' nodes
> for generic applications).

This is something that libv4l won't do: it is up to the userspace application
to choose the device node to open. Ok, libv4l can have helper APIs for
that, like the one I wrote, but even adding MC support on it may not solve
the issues.

> Since there is only one MC device node for each piece of video hardware that
> would make it much easier to discover what hardware there is and what video
> nodes to use.
> 
> I always liked that idea, although I know Mauro is opposed to having a MC
> for all v4l drivers.

It doesn't make sense to add MC for all V4L drivers. Not all devices are like
ivtv with lots of device nodes. As a matter of fact, most supported devices
create just one video node. Adding MC support for those devices will just
increase the drivers' complexity without _any_ reason, as those devices are
fully configurable using the existing ioctl's.

Also, as I said before, and as implemented at xawtv and at a v4l-utils library,
the code may use sysfs for simpler devices. It shouldn't be hard to implement
MC-aware code there, although I don't think that the MC API is useful to discover
what nodes are meant to be used for TV, encoder, decoder, webcams, etc.
The only type information it currently provides is:

#define MEDIA_ENT_T_DEVNODE_V4L		(MEDIA_ENT_T_DEVNODE + 1)
#define MEDIA_ENT_T_DEVNODE_FB		(MEDIA_ENT_T_DEVNODE + 2)
#define MEDIA_ENT_T_DEVNODE_ALSA	(MEDIA_ENT_T_DEVNODE + 3)
#define MEDIA_ENT_T_DEVNODE_DVB		(MEDIA_ENT_T_DEVNODE + 4)

So, a MC aware application also needs to be a hardware-dependent application,
as it will need to use something else, like the media entity name, to discover
for what purpose a media node is meant to be used.

> While I am not opposed to creating such userspace maps I also think it is
> a bit of a poor-man's solution.

The creation of per-type devices is part of the current API: radio
and vbi nodes are examples of that (except that they aren't aliases but
real devices; still, the idea is the same: different names for different
types of usage).

> In particular I am worried that we get a
> lot of those mappings (just think of ivtv with its 8 or 9 devices).
> 
> I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out, mem2mem.
> 
> But a 'tv' node might also be able to handle compressed video (depending
> on how the hardware is organized), so how do you handle that? 

Well, what you've called "compressed" is, IMO, "encoder". It probably makes
sense to also have "decoder". I'm in doubt about "webcam", as there are some
grabber devices with analog camera inputs for video surveillance. Maybe "camera"
is a better name for it.

> It can all
> be solved, I'm sure, but I'm not sure if such userspace mappings will scale
> that well with the increasing hardware complexity.

Not all video nodes would need an alias. Just the ones where it makes sense for
an application to open it.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-26 15:09                                       ` Mauro Carvalho Chehab
@ 2011-08-26 15:29                                         ` Hans Verkuil
  2011-08-26 17:32                                           ` Mauro Carvalho Chehab
  2011-08-29  9:12                                         ` Laurent Pinchart
  1 sibling, 1 reply; 44+ messages in thread
From: Hans Verkuil @ 2011-08-26 15:29 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Laurent Pinchart, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

On Friday, August 26, 2011 17:09:02 Mauro Carvalho Chehab wrote:
> Em 26-08-2011 11:16, Hans Verkuil escreveu:
> > On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
> >> Hi Mauro,
> >>
> >> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
> >>> Em 24-08-2011 19:29, Sakari Ailus escreveu:
> >>
> >> [snip]
> >>
> >>>> The question I still have on this is that how should the user know which
> >>>> video node to access on an embedded system with a camera: the OMAP 3 ISP,
> >>>> for example, contains some eight video nodes which have different ISP
> >>>> blocks connected to them. Likely two of these nodes are useful for a
> >>>> general purpose application based on which image format it requests. It
> >>>> would make sense to provide generic applications information only on
> >>>> those devices they may meaningfully use.
> >>>
> >>> IMO, we should create a namespace device mapping for video devices. What I
> >>> mean is that we should keep the "raw" V4L2 devices as:
> >>> 	/dev/video??
> >>> But also recommend the creation of a new userspace map, like:
> >>> 	/dev/webcam??
> >>> 	/dev/tv??
> >>> 	...
> >>> with is an alias for the actual device.
> >>>
> >>> Something similar to dvd/cdrom aliases that already happen on most distros:
> >>>
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
> >>
> >> I've been toying with a similar idea. libv4l currently wraps /dev/video* 
> >> device nodes and assumes a 1:1 relationship between a video device node and a 
> >> video device. Should this assumption be somehow removed, replaced by a video 
> >> device concept that wouldn't be tied to a single video device node ?
> > 
> > Just as background information: the original idea was always that all v4l
> > drivers would have a MC and that libv4l would use the information contained
> > there as a helper (such as deciding which nodes would be the 'default' nodes
> > for generic applications).
> 
> This is something that libv4l won't do: it is up to the userspace application
> to choose the device node to open. Ok, libv4l can have helper APIs for
> that, like the one I wrote, but even adding MC support on it may not solve
> the issues.
> 
> > Since there is only one MC device node for each piece of video hardware that
> > would make it much easier to discover what hardware there is and what video
> > nodes to use.
> > 
> > I always liked that idea, although I know Mauro is opposed to having a MC
> > for all v4l drivers.
> 
> It doesn't make sense to add MC for all V4L drivers. Not all devices are like
> ivtv with lots of device drivers. In a matter of fact, most supported devices
> create just one video node. Adding MC support for those devices will just 
> increase the drivers complexity without _any_ reason, as those devices are
> fully configurable using the existing ioctl's.

It's for consistency so applications know what to expect. For all the simple
drivers you'd just need some simple core support to add a MC. What I always
thought would be handy is for applications to just iterate over all MCs and
show which video/dvb/audio hardware the user has in its system.
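
Something as small as the untested sketch below would already give that
overview for the MC-aware part of the system:

/* Untested sketch: iterate over media controller nodes and print what
 * hardware each one represents. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_device_info info;
	char node[32];
	int i, fd;

	for (i = 0; i < 16; i++) {
		snprintf(node, sizeof(node), "/dev/media%d", i);
		fd = open(node, O_RDONLY);
		if (fd < 0)
			continue;
		memset(&info, 0, sizeof(info));
		if (!ioctl(fd, MEDIA_IOC_DEVICE_INFO, &info))
			printf("%s: %s (%s) on %s\n", node, info.model,
			       info.driver, info.bus_info);
		close(fd);
	}
	return 0;
}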

> Also, as I said before, and implemented at xawtv and at a v4l-utils library, 
> the code may use sysfs for simpler devices. It shouldn't be hard to implement
> a mc aware code there, although I don't think that MC API is useful to discover
> what nodes are meant to be used for TV, encoder, decoder, webcams, etc.
> The only type information it currently provides is:
> 
> #define MEDIA_ENT_T_DEVNODE_V4L		(MEDIA_ENT_T_DEVNODE + 1)
> #define MEDIA_ENT_T_DEVNODE_FB		(MEDIA_ENT_T_DEVNODE + 2)
> #define MEDIA_ENT_T_DEVNODE_ALSA	(MEDIA_ENT_T_DEVNODE + 3)
> #define MEDIA_ENT_T_DEVNODE_DVB		(MEDIA_ENT_T_DEVNODE + 4)

That's because we never added meta information like that. As long as the MC
is only used for SoC/complex drivers there is no point in adding such info.

It would be trivial to add precisely this type of information, though.

> So, a MC aware application also needs to be a hardware-dependent application,
> as it will need to use something else, like the media entity name, to discover
> for what purpose a media node is meant to be used.
> 
> > While I am not opposed to creating such userspace maps I also think it is
> > a bit of a poor-man's solution.
> 
> The creation of per-type devices is part of the current API: radio
> and vbi nodes are examples of that (except that they aren't aliases, but
> real devices, but the idea is the same: different names for different
> types of usage).

That's why I'm not opposed to it. I'm just not sure how detailed/extensive
that mapping should be.

> > In particular I am worried that we get a
> > lot of those mappings (just think of ivtv with its 8 or 9 devices).
> > 
> > I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out, mem2mem.
> > 
> > But a 'tv' node might also be able to handle compressed video (depending
> > on how the hardware is organized), so how do you handle that? 
> 
> Well, What you've called as "compressed" is, in IMO, "encoder". It probably makes
> sense to have, also "decoder".

I couldn't remember the name :-)

> I'm in doubt about "webcam", as there are some
> grabber devices with analog camera inputs for video surveillance. Maybe "camera"
> is a better name for it.

Hmm. 'webcam' or 'camera' implies settings like exposure, etc. Many video
surveillance devices are just frame grabbers to which you can attach a camera,
but you can just as easily attach any composite/S-video input.

> 
> > It can all
> > be solved, I'm sure, but I'm not sure if such userspace mappings will scale
> > that well with the increasing hardware complexity.
> 
> Not all video nodes would need an alias. Just the ones where it makes sense for
> an application to open it.

I'm not certain you will solve that much with this. Most people (i.e. the average
Linux users) have only one or two video devices, most likely a webcam and perhaps
some DVB/V4L USB stick. Generic apps that need to enumerate all devices will still
need to use sysfs or go through all video nodes.

It's why I like the MC: just one device node per hardware unit. Easy to enumerate,
easy to present to the user.

I'm tempted to see if I can make a proof-of-concept... Time is a problem for me,
though.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-26 15:29                                         ` Hans Verkuil
@ 2011-08-26 17:32                                           ` Mauro Carvalho Chehab
  2011-08-29  9:24                                             ` Laurent Pinchart
  0 siblings, 1 reply; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-26 17:32 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

Em 26-08-2011 12:29, Hans Verkuil escreveu:
> On Friday, August 26, 2011 17:09:02 Mauro Carvalho Chehab wrote:
>> Em 26-08-2011 11:16, Hans Verkuil escreveu:
>>> On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
>>>> Hi Mauro,
>>>>
>>>> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
>>>>> Em 24-08-2011 19:29, Sakari Ailus escreveu:
>>>>
>>>> [snip]
>>>>
>>>>>> The question I still have on this is that how should the user know which
>>>>>> video node to access on an embedded system with a camera: the OMAP 3 ISP,
>>>>>> for example, contains some eight video nodes which have different ISP
>>>>>> blocks connected to them. Likely two of these nodes are useful for a
>>>>>> general purpose application based on which image format it requests. It
>>>>>> would make sense to provide generic applications information only on
>>>>>> those devices they may meaningfully use.
>>>>>
>>>>> IMO, we should create a namespace device mapping for video devices. What I
>>>>> mean is that we should keep the "raw" V4L2 devices as:
>>>>> 	/dev/video??
>>>>> But also recommend the creation of a new userspace map, like:
>>>>> 	/dev/webcam??
>>>>> 	/dev/tv??
>>>>> 	...
>>>>> with is an alias for the actual device.
>>>>>
>>>>> Something similar to dvd/cdrom aliases that already happen on most distros:
>>>>>
>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
>>>>
>>>> I've been toying with a similar idea. libv4l currently wraps /dev/video* 
>>>> device nodes and assumes a 1:1 relationship between a video device node and a 
>>>> video device. Should this assumption be somehow removed, replaced by a video 
>>>> device concept that wouldn't be tied to a single video device node ?
>>>
>>> Just as background information: the original idea was always that all v4l
>>> drivers would have a MC and that libv4l would use the information contained
>>> there as a helper (such as deciding which nodes would be the 'default' nodes
>>> for generic applications).
>>
>> This is something that libv4l won't do: it is up to the userspace application
>> to choose the device node to open. Ok, libv4l can have helper APIs for
>> that, like the one I wrote, but even adding MC support on it may not solve
>> the issues.
>>
>>> Since there is only one MC device node for each piece of video hardware that
>>> would make it much easier to discover what hardware there is and what video
>>> nodes to use.
>>>
>>> I always liked that idea, although I know Mauro is opposed to having a MC
>>> for all v4l drivers.
>>
>> It doesn't make sense to add MC for all V4L drivers. Not all devices are like
>> ivtv with lots of device drivers. In a matter of fact, most supported devices
>> create just one video node. Adding MC support for those devices will just 
>> increase the drivers complexity without _any_ reason, as those devices are
>> fully configurable using the existing ioctl's.
> 
> It's for consistency so applications know what to expect.

I've said it before: we won't be implementing an API for a device just
for "consistency" without any real reason.

> For all the simple
> drivers you'd just need some simple core support to add a MC. What I always
> thought would be handy is for applications to just iterate over all MCs and
> show which video/dvb/audio hardware the user has in its system.

MC doesn't work for audio yet, as snd-usb-audio doesn't use it. So, it will fail
for a large number of devices whose audio is implemented using the standard USB
Audio Class. Adding MC support for it doesn't sound trivial, and won't offer
any gain over the sysfs equivalent.

> 
>> Also, as I said before, and implemented at xawtv and at a v4l-utils library, 
>> the code may use sysfs for simpler devices. It shouldn't be hard to implement
>> a mc aware code there, although I don't think that MC API is useful to discover
>> what nodes are meant to be used for TV, encoder, decoder, webcams, etc.
>> The only type information it currently provides is:
>>
>> #define MEDIA_ENT_T_DEVNODE_V4L		(MEDIA_ENT_T_DEVNODE + 1)
>> #define MEDIA_ENT_T_DEVNODE_FB		(MEDIA_ENT_T_DEVNODE + 2)
>> #define MEDIA_ENT_T_DEVNODE_ALSA	(MEDIA_ENT_T_DEVNODE + 3)
>> #define MEDIA_ENT_T_DEVNODE_DVB		(MEDIA_ENT_T_DEVNODE + 4)
> 
> That's because we never added meta information like that. As long as the MC
> is only used for SoC/complex drivers there is no point in adding such info.

Even for SoC, such info would probably be useful. As I said before, with the
current state of things, I can't see how a generic MC-aware application would do
the right thing for the devices that currently implement the MC API, simply because
there's no way for an application to know what pipelines need to be configured, as
no entity at the MC (or elsewhere, in some userspace library) describes what
pipelines are meant to be used for the common use case.

> It would be trivial to add precisely this type of information, though.

Yes, adding a new field to indicate what type of V4L devnode is there won't
be hard, but it would be better to replace "MEDIA_ENT_T_DEVNODE_V4L" with
something that actually describes the device type. The same kind of logic
might also apply to other device types. For example, does ALSA mean a
playback, a capture or a mixer device? Or does it mean the complete audio
hardware? (probably not, as that would hide the ALSA internal pipelines).

>> So, a MC aware application also needs to be a hardware-dependent application,
>> as it will need to use something else, like the media entity name, to discover
>> for what purpose a media node is meant to be used.
>>
>>> While I am not opposed to creating such userspace maps I also think it is
>>> a bit of a poor-man's solution.
>>
>> The creation of per-type devices is part of the current API: radio
>> and vbi nodes are examples of that (except that they aren't aliases, but
>> real devices, but the idea is the same: different names for different
>> types of usage).
> 
> That's why I'm not opposed to it. I'm just not sure how detailed/extensive
> that mapping should be.
> 
>>> In particular I am worried that we get a
>>> lot of those mappings (just think of ivtv with its 8 or 9 devices).
>>>
>>> I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out, mem2mem.
>>>
>>> But a 'tv' node might also be able to handle compressed video (depending
>>> on how the hardware is organized), so how do you handle that? 
>>
>> Well, What you've called as "compressed" is, in IMO, "encoder". It probably makes
>> sense to have, also "decoder".
> 
> I couldn't remember the name :-)
> 
>> I'm in doubt about "webcam", as there are some
>> grabber devices with analog camera inputs for video surveillance. Maybe "camera"
>> is a better name for it.
> 
> Hmm. 'webcam' or 'camera' implies settings like exposure, etc. Many video
> surveillance devices are just frame grabbers to which you can attach a camera,
> but you can just as easily attach any composite/S-video input.

"grabber" is for sure another type.

The thing with putting the device type in the name is that S_INPUT may actually
change the type of the device.

Btw, a proper implementation of the MC API for a device that has audio/video muxes
internally would require mapping the internal pipelines via MC, so it is something
that just adding a V4L core "glue" won't solve. Even a "simple" device like em28xx
or bttv would require lots of internal changes, as those things are currently not
fully implemented as sub-devices. It would also create a conflict at userspace, as
the same pipeline could be changed via two different APIs. As I said before, too
many changes for no benefit.

>>> It can all
>>> be solved, I'm sure, but I'm not sure if such userspace mappings will scale
>>> that well with the increasing hardware complexity.
>>
>> Not all video nodes would need an alias. Just the ones where it makes sense for
>> an application to open it.
> 
> I'm not certain you will solve that much with this. Most people (i.e. the average
> linux users) have only one or two video devices, most likely a webcam and perhaps
> some DVB/V4L USB stick. Generic apps that needs to enumerate all devices will still
> need to use sysfs or go through all video nodes.
> 
> It's why I like the MC: just one device node per hardware unit. Easy to enumerate,
> easy to present to the user.

It is not one device node per hardware unit. Sysfs is needed to match snd-usb-audio
with the corresponding video device.

Implementing MC to cover the non-SoC cases may work after lots of effort. It is like
building a nuclear bomb to kill a bug: it will work, but a bug spray will produce
the same effect at a very small implementation cost.

> I'm tempted to see if I can make a proof-of-concept... Time is a problem for me,
> though.
> 
> Regards,
> 
> 	Hans


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-26 15:09                                       ` Mauro Carvalho Chehab
  2011-08-26 15:29                                         ` Hans Verkuil
@ 2011-08-29  9:12                                         ` Laurent Pinchart
  1 sibling, 0 replies; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-29  9:12 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

Hi Mauro,

On Friday 26 August 2011 17:09:02 Mauro Carvalho Chehab wrote:
> Em 26-08-2011 11:16, Hans Verkuil escreveu:
> > On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
> >> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
> >>> Em 24-08-2011 19:29, Sakari Ailus escreveu:
> >> [snip]
> >> 
> >>>> The question I still have on this is that how should the user know
> >>>> which video node to access on an embedded system with a camera: the
> >>>> OMAP 3 ISP, for example, contains some eight video nodes which have
> >>>> different ISP blocks connected to them. Likely two of these nodes are
> >>>> useful for a general purpose application based on which image format
> >>>> it requests. It would make sense to provide generic applications
> >>>> information only on those devices they may meaningfully use.
> >>> 
> >>> IMO, we should create a namespace device mapping for video devices.
> >>> What I
> >>> 
> >>> mean is that we should keep the "raw" V4L2 devices as:
> >>> 	/dev/video??
> >>> 
> >>> But also recommend the creation of a new userspace map, like:
> >>> 	/dev/webcam??
> >>> 	/dev/tv??
> >>> 	...
> >>> 
> >>> with is an alias for the actual device.
> >>> 
> >>> Something similar to dvd/cdrom aliases that already happen on most
> >>> distros:
> >>> 
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
> >>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
> >> 
> >> I've been toying with a similar idea. libv4l currently wraps /dev/video*
> >> device nodes and assumes a 1:1 relationship between a video device node
> >> and a video device. Should this assumption be somehow removed, replaced
> >> by a video device concept that wouldn't be tied to a single video
> >> device node ?
> > 
> > Just as background information: the original idea was always that all v4l
> > drivers would have a MC and that libv4l would use the information
> > contained there as a helper (such as deciding which nodes would be the
> > 'default' nodes for generic applications).
> 
> This is something that libv4l won't do: it is up to the userspace
> application to choose the device node to open.

I think this is one of our fundamental issues. Most applications are actually
not interested in video nodes at all. What they want is a video device.
Shouldn't libv4l allow applications to enumerate video devices (as opposed to
video nodes) and open them without caring about video nodes?

> Ok, libv4l can have helper APIs for that, like the one I wrote, but even
> adding MC support on it may not solve the issues.
> 
> > Since there is only one MC device node for each piece of video hardware
> > that would make it much easier to discover what hardware there is and
> > what video nodes to use.
> > 
> > I always liked that idea, although I know Mauro is opposed to having a MC
> > for all v4l drivers.
> 
> It doesn't make sense to add MC for all V4L drivers. Not all devices are
> like ivtv with lots of device drivers. In a matter of fact, most supported
> devices create just one video node. Adding MC support for those devices
> will just increase the drivers complexity without _any_ reason, as those
> devices are fully configurable using the existing ioctl's.

Hans' proposal is to handle this in the V4L2 core for most drivers, so those 
drivers won't become more complex as they won't be modified at all. The MC API 
for those devices will only offer read-only enumeration, not link 
configuration.

> Also, as I said before, and implemented at xawtv and at a v4l-utils library,
> the code may use sysfs for simpler devices. It shouldn't be hard to
> implement a mc aware code there, although I don't think that MC API is
> useful to discover what nodes are meant to be used for TV, encoder, decoder,
> webcams, etc. The only type information it currently provides is:
> 
> #define MEDIA_ENT_T_DEVNODE_V4L		(MEDIA_ENT_T_DEVNODE + 1)
> #define MEDIA_ENT_T_DEVNODE_FB		(MEDIA_ENT_T_DEVNODE + 2)
> #define MEDIA_ENT_T_DEVNODE_ALSA	(MEDIA_ENT_T_DEVNODE + 3)
> #define MEDIA_ENT_T_DEVNODE_DVB		(MEDIA_ENT_T_DEVNODE + 4)
> 
> So, a MC aware application also needs to be a hardware-dependent
> application, as it will need to use something else, like the media entity
> name, to discover for what purpose a media node is meant to be used.

As Hans pointed out, this is because we haven't implemented more detailed 
information *yet*. It has always been a goal to provide more details through 
the MC API.

> > While I am not opposed to creating such userspace maps I also think it is
> > a bit of a poor-man's solution.
> 
> The creation of per-type devices is part of the current API: radio
> and vbi nodes are examples of that (except that they aren't aliases, but
> real devices, but the idea is the same: different names for different
> types of usage).

This would only work in a black-and-white world. Devices are often not just 
webcams or tv tuners.

> > In particular I am worried that we get a lot of those mappings (just think
> > of ivtv with its 8 or 9 devices).
> > 
> > I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out,
> > mem2mem.
> > 
> > But a 'tv' node might also be able to handle compressed video (depending
> > on how the hardware is organized), so how do you handle that?
> 
> Well, What you've called as "compressed" is, in IMO, "encoder". It probably
> makes sense to have, also "decoder". I'm in doubt about "webcam", as there
> are some grabber devices with analog camera inputs for video surveillance.
> Maybe "camera" is a better name for it.
> 
> > It can all be solved, I'm sure, but I'm not sure if such userspace
> > mappings will scale that well with the increasing hardware complexity.
> 
> Not all video nodes would need an alias. Just the ones where it makes sense
> for an application to open it.

If it doesn't make sense for an application to open a video node, you can 
remove it completely :-)

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-26 17:32                                           ` Mauro Carvalho Chehab
@ 2011-08-29  9:24                                             ` Laurent Pinchart
  2011-08-29 14:53                                               ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 44+ messages in thread
From: Laurent Pinchart @ 2011-08-29  9:24 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

Hi Mauro,

On Friday 26 August 2011 19:32:30 Mauro Carvalho Chehab wrote:
> Em 26-08-2011 12:29, Hans Verkuil escreveu:
> > On Friday, August 26, 2011 17:09:02 Mauro Carvalho Chehab wrote:
> >> Em 26-08-2011 11:16, Hans Verkuil escreveu:
> >>> On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
> >>>> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
> >>>>> Em 24-08-2011 19:29, Sakari Ailus escreveu:
> >>>> [snip]
> >>>> 
> >>>>>> The question I still have on this is that how should the user know
> >>>>>> which video node to access on an embedded system with a camera: the
> >>>>>> OMAP 3 ISP, for example, contains some eight video nodes which have
> >>>>>> different ISP blocks connected to them. Likely two of these nodes
> >>>>>> are useful for a general purpose application based on which image
> >>>>>> format it requests. It would make sense to provide generic
> >>>>>> applications information only on those devices they may
> >>>>>> meaningfully use.
> >>>>> 
> >>>>> IMO, we should create a namespace device mapping for video devices.
> >>>>> What I
> >>>>> 
> >>>>> mean is that we should keep the "raw" V4L2 devices as:
> >>>>> 	/dev/video??
> >>>>> 
> >>>>> But also recommend the creation of a new userspace map, like:
> >>>>> 	/dev/webcam??
> >>>>> 	/dev/tv??
> >>>>> 	...
> >>>>> 
> >>>>> with is an alias for the actual device.
> >>>>> 
> >>>>> Something similar to dvd/cdrom aliases that already happen on most
> >>>>> distros:
> >>>>> 
> >>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
> >>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
> >>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
> >>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
> >>>> 
> >>>> I've been toying with a similar idea. libv4l currently wraps
> >>>> /dev/video* device nodes and assumes a 1:1 relationship between a
> >>>> video device node and a video device. Should this assumption be
> >>>> somehow removed, replaced by a video device concept that wouldn't be
> >>>> tied to a single video device node ?
> >>> 
> >>> Just as background information: the original idea was always that all
> >>> v4l drivers would have a MC and that libv4l would use the information
> >>> contained there as a helper (such as deciding which nodes would be the
> >>> 'default' nodes for generic applications).
> >> 
> >> This is something that libv4l won't do: it is up to the userspace
> >> application to choose the device node to open. Ok, libv4l can have
> >> helper APIs for that, like the one I wrote, but even adding MC support
> >> on it may not solve the issues.
> >> 
> >>> Since there is only one MC device node for each piece of video hardware
> >>> that would make it much easier to discover what hardware there is and
> >>> what video nodes to use.
> >>> 
> >>> I always liked that idea, although I know Mauro is opposed to having a
> >>> MC for all v4l drivers.
> >> 
> >> It doesn't make sense to add MC for all V4L drivers. Not all devices are
> >> like ivtv with lots of device drivers. In a matter of fact, most
> >> supported devices create just one video node. Adding MC support for
> >> those devices will just increase the drivers complexity without _any_
> >> reason, as those devices are fully configurable using the existing
> >> ioctl's.
> > 
> > It's for consistency so applications know what to expect.
> 
> I've already said it before: We won't be implementing an API for a device
> just for "consistency" without any real reason.

We have a real reason: providing a single API through which applications can 
enumerate devices and retrieve information such as device capabilities and 
relationships between devices.

> > For all the simple drivers you'd just need some simple core support to add
> > a MC. What I always thought would be handy is for applications to just
> > iterate over all MCs and show which video/dvb/audio hardware the user has
> > in its system.
> 
> MC doesn't work for audio yet,

*yet*

> as snd-usb-audio doesn't use it. So, it will fail for a large amount of
> devices whose audio is implemented using standard USB Audio Class. Adding MC
> support for it doesn't sound trivial, and won't offer any gain over the
> sysfs equivalent.
>
> >> Also, as I said before, and implemented at xawtv and at a v4l-utils
> >> library, the code may use sysfs for simpler devices. It shouldn't be
> >> hard to implement a mc aware code there, although I don't think that MC
> >> API is useful to discover what nodes are meant to be used for TV,
> >> encoder, decoder, webcams, etc. The only type information it currently
> >> provides is:
> >> 
> >> #define MEDIA_ENT_T_DEVNODE_V4L		(MEDIA_ENT_T_DEVNODE + 1)
> >> #define MEDIA_ENT_T_DEVNODE_FB		(MEDIA_ENT_T_DEVNODE + 2)
> >> #define MEDIA_ENT_T_DEVNODE_ALSA	(MEDIA_ENT_T_DEVNODE + 3)
> >> #define MEDIA_ENT_T_DEVNODE_DVB		(MEDIA_ENT_T_DEVNODE + 4)
> > 
> > That's because we never added meta information like that. As long as the
> > MC is only used for SoC/complex drivers there is no point in adding such
> > info.
> 
> Even For SoC, such info would probably be useful.

I agree, that's something we need. Most MC-related development has been driven by
products so far; this is just something we haven't implemented yet because
there was no real need. It has always been planned, though.

> As I said before, with the current way, I can't see how a generic MC aware
> application would do the right thing for the devices that currently
> implement the MC api, simply because there's no way for an application to
> know what pipelines need to be configured, as no entity at the MC (or
> elsewhere on some userspace library) describes what pipelines are meant to
> be used by the common usecase.
> 
> > It would be trivial to add precisely this type of information, though.
> 
> Yes, adding a new field to indicate what type of V4L devnode is there won't
> be hard, but it would be better to replace the "MEDIA_ENT_T_DEVNODE_V4L" by
> something that actually describes the device type.

I don't agree with that. For complex devices a video node is just a DMA 
engine, it's not tied to a device description as it can be connected to 
different pipelines with different capabilities. What we need is a way to tag 
an entity with several type information. For a device with a single video node 
and no other entity, the video node entity will be tagged with 
MEDIA_ENT_T_DEVNODE_V4L as well as "camera" or "webcam" for instance (this is 
just an example, we need to think about the names). For a device such as the 
OMAP3 ISP, video nodes will be tagged with MEDIA_ENT_T_DEVNODE_V4L only and 
other entities will be tagged with "camera sensor", "resizer", ...
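
Purely as an illustration of the idea (none of these names exist in media.h
today, they are hypothetical):

/* Hypothetical, for illustration only -- not part of linux/media.h */
#define MEDIA_ENT_FL_ROLE_CAMERA	(1 << 8)
#define MEDIA_ENT_FL_ROLE_TV		(1 << 9)
#define MEDIA_ENT_FL_ROLE_ENCODER	(1 << 10)
#define MEDIA_ENT_FL_ROLE_DECODER	(1 << 11)
#define MEDIA_ENT_FL_ROLE_MEM2MEM	(1 << 12)

/* A simple webcam's single video node would then report
 * type == MEDIA_ENT_T_DEVNODE_V4L with the "camera" role set, while
 * the OMAP3 ISP video nodes would carry no role flag at all. */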

> The same kind of logic might also applies also to other device types. For
> example, ALSA means a playback, a capture or a mixer device? Or does it mean
> the complete audio hardware? (probably not, as it would hide alsa internal
> pipelines).
> 
> >> So, a MC aware application also needs to be a hardware-dependent
> >> application, as it will need to use something else, like the media
> >> entity name, to discover for what purpose a media node is meant to be
> >> used.
> >> 
> >>> While I am not opposed to creating such userspace maps I also think it
> >>> is a bit of a poor-man's solution.
> >> 
> >> The creation of per-type devices is part of the current API: radio
> >> and vbi nodes are examples of that (except that they aren't aliases, but
> >> real devices, but the idea is the same: different names for different
> >> types of usage).
> > 
> > That's why I'm not opposed to it. I'm just not sure how
> > detailed/extensive that mapping should be.
> > 
> >>> In particular I am worried that we get a
> >>> lot of those mappings (just think of ivtv with its 8 or 9 devices).
> >>> 
> >>> I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out,
> >>> mem2mem.
> >>> 
> >>> But a 'tv' node might also be able to handle compressed video
> >>> (depending on how the hardware is organized), so how do you handle
> >>> that?
> >> 
> >> Well, What you've called as "compressed" is, in IMO, "encoder". It
> >> probably makes sense to have, also "decoder".
> > 
> > I couldn't remember the name :-)
> > 
> >> I'm in doubt about "webcam", as there are some
> >> grabber devices with analog camera inputs for video surveillance. Maybe
> >> "camera" is a better name for it.
> > 
> > Hmm. 'webcam' or 'camera' implies settings like exposure, etc. Many video
> > surveillance devices are just frame grabbers to which you can attach a
> > camera, but you can just as easily attach any composite/S-video input.
> 
> "grabber" is for sure another type.
> 
> The thing with the device type at the name is that S_INPUT may actually
> change the type of device.
> 
> Btw, a proper implementation for the MC API for a device that has
> audio/video muxes internally would require to map the internal pipelines
> via MC. So, it is something that just adding a V4L core "glue" won't work.
> So, even a "simple" device like em28xx or bttv would require lots of
> changes internally, as those things are currently not implemented fully as
> sub-devices. It will also create a conflict at userspace, as the same
> pipeline could be changed via two different API's. As I said before, too
> much changes for no benefit.

You don't have to do that. You can keep the current API unchanged, and add MC 
support in V4L2 core to only enumerate devices and query their capabilities.

> >>> It can all be solved, I'm sure, but I'm not sure if such userspace
> >>> mappings will scale that well with the increasing hardware complexity.
> >> 
> >> Not all video nodes would need an alias. Just the ones where it makes
> >> sense for an application to open it.
> > 
> > I'm not certain you will solve that much with this. Most people (i.e. the
> > average linux users) have only one or two video devices, most likely a
> > webcam and perhaps some DVB/V4L USB stick. Generic apps that needs to
> > enumerate all devices will still need to use sysfs or go through all
> > video nodes.
> > 
> > It's why I like the MC: just one device node per hardware unit. Easy to
> > enumerate, easy to present to the user.
> 
> It is not one device node per hardware unit. Sysfs is needed to match
> snd-usb-audio with the corresponding video device.

No, the goal is to have a single /dev/media* node for both the audio and video 
devices in the webcam.

> Implementing MC to cover the non-SoC cases may work after lots of efforts.
> It is like implementing a nuclear bomb to kill a bug. It will work, but a
> bug spray insecticide will produce the same effect with a very small cost
> to implement.
> 
> > I'm tempted to see if I can make a proof-of-concept... Time is a problem
> > for me, though.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-29  9:24                                             ` Laurent Pinchart
@ 2011-08-29 14:53                                               ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 44+ messages in thread
From: Mauro Carvalho Chehab @ 2011-08-29 14:53 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Sakari Ailus, Sylwester Nawrocki,
	Sylwester Nawrocki, linux-media, Hans de Goede

Em 29-08-2011 06:24, Laurent Pinchart escreveu:
> Hi Mauro,
> 
> On Friday 26 August 2011 19:32:30 Mauro Carvalho Chehab wrote:
>> Em 26-08-2011 12:29, Hans Verkuil escreveu:
>>> On Friday, August 26, 2011 17:09:02 Mauro Carvalho Chehab wrote:
>>>> Em 26-08-2011 11:16, Hans Verkuil escreveu:
>>>>> On Friday, August 26, 2011 15:45:30 Laurent Pinchart wrote:
>>>>>> On Thursday 25 August 2011 14:43:56 Mauro Carvalho Chehab wrote:
>>>>>>> Em 24-08-2011 19:29, Sakari Ailus escreveu:
>>>>>> [snip]
>>>>>>
>>>>>>>> The question I still have on this is that how should the user know
>>>>>>>> which video node to access on an embedded system with a camera: the
>>>>>>>> OMAP 3 ISP, for example, contains some eight video nodes which have
>>>>>>>> different ISP blocks connected to them. Likely two of these nodes
>>>>>>>> are useful for a general purpose application based on which image
>>>>>>>> format it requests. It would make sense to provide generic
>>>>>>>> applications information only on those devices they may
>>>>>>>> meaningfully use.
>>>>>>>
>>>>>>> IMO, we should create a namespace device mapping for video devices.
>>>>>>> What I
>>>>>>>
>>>>>>> mean is that we should keep the "raw" V4L2 devices as:
>>>>>>> 	/dev/video??
>>>>>>>
>>>>>>> But also recommend the creation of a new userspace map, like:
>>>>>>> 	/dev/webcam??
>>>>>>> 	/dev/tv??
>>>>>>> 	...
>>>>>>>
>>>>>>> with is an alias for the actual device.
>>>>>>>
>>>>>>> Something similar to dvd/cdrom aliases that already happen on most
>>>>>>> distros:
>>>>>>>
>>>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
>>>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
>>>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
>>>>>>> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
>>>>>>
>>>>>> I've been toying with a similar idea. libv4l currently wraps
>>>>>> /dev/video* device nodes and assumes a 1:1 relationship between a
>>>>>> video device node and a video device. Should this assumption be
>>>>>> somehow removed, replaced by a video device concept that wouldn't be
>>>>>> tied to a single video device node ?
>>>>>
>>>>> Just as background information: the original idea was always that all
>>>>> v4l drivers would have a MC and that libv4l would use the information
>>>>> contained there as a helper (such as deciding which nodes would be the
>>>>> 'default' nodes for generic applications).
>>>>
>>>> This is something that libv4l won't do: it is up to the userspace
>>>> application to choose the device node to open. Ok, libv4l can have
>>>> helper APIs for that, like the one I wrote, but even adding MC support
>>>> on it may not solve the issues.
>>>>
>>>>> Since there is only one MC device node for each piece of video hardware
>>>>> that would make it much easier to discover what hardware there is and
>>>>> what video nodes to use.
>>>>>
>>>>> I always liked that idea, although I know Mauro is opposed to having a
>>>>> MC for all v4l drivers.
>>>>
>>>> It doesn't make sense to add MC for all V4L drivers. Not all devices are
>>>> like ivtv with lots of device drivers. In a matter of fact, most
>>>> supported devices create just one video node. Adding MC support for
>>>> those devices will just increase the drivers complexity without _any_
>>>> reason, as those devices are fully configurable using the existing
>>>> ioctl's.
>>>
>>> It's for consistency so applications know what to expect.
>>
>> I've already said it before: We won't be implementing an API for a device
>> just for "consistency" without any real reason.
> 
> We have a real reason: providing a single API through which applications can 
> enumerate devices and retrieve information such as device capabilities and 
> relationships between devices.

There's no need to enumerate the relationships for a device with just one video
node, where everything is configured via the V4L2 API. Adding MC support will
only make things confusing.

Also, see below.

> 
>>> For all the simple drivers you'd just need some simple core support to add
>>> a MC. What I always thought would be handy is for applications to just
>>> iterate over all MCs and show which video/dvb/audio hardware the user has
>>> in its system.
>>
>> MC doesn't work for audio yet,
> 
> *yet*
> 
>> as snd-usb-audio doesn't use it. So, it will fail for a large amount of
>> devices whose audio is implemented using standard USB Audio Class. Adding MC
>> support for it doesn't sound trivial, and won't offer any gain over the
>> sysfs equivalent.
>>
>>>> Also, as I said before, and implemented at xawtv and at a v4l-utils
>>>> library, the code may use sysfs for simpler devices. It shouldn't be
>>>> hard to implement a mc aware code there, although I don't think that MC
>>>> API is useful to discover what nodes are meant to be used for TV,
>>>> encoder, decoder, webcams, etc. The only type information it currently
>>>> provides is:
>>>>
>>>> #define MEDIA_ENT_T_DEVNODE_V4L		(MEDIA_ENT_T_DEVNODE + 1)
>>>> #define MEDIA_ENT_T_DEVNODE_FB		(MEDIA_ENT_T_DEVNODE + 2)
>>>> #define MEDIA_ENT_T_DEVNODE_ALSA	(MEDIA_ENT_T_DEVNODE + 3)
>>>> #define MEDIA_ENT_T_DEVNODE_DVB		(MEDIA_ENT_T_DEVNODE + 4)
>>>
>>> That's because we never added meta information like that. As long as the
>>> MC is only used for SoC/complex drivers there is no point in adding such
>>> info.
>>
>> Even For SoC, such info would probably be useful.
> 
> I agree, that's something we need. Most MC-related develop have been driven by 
> products so far, this is just something we haven't implemented yet because 
> there was no real need. It has always been planned though.

No. The reason is that SoC people are creating their own hardware-specific
applications. As I said before, no generic MC-aware application would work
currently, as there's nothing in the MC structures that allows a userspace
application (or library) to detect for what purpose a V4L node should be used.

>> As I said before, with the current way, I can't see how a generic MC aware
>> application would do the right thing for the devices that currently
>> implement the MC api, simply because there's no way for an application to
>> know what pipelines need to be configured, as no entity at the MC (or
>> elsewhere on some userspace library) describes what pipelines are meant to
>> be used by the common usecase.
>>
>>> It would be trivial to add precisely this type of information, though.
>>
>> Yes, adding a new field to indicate what type of V4L devnode is there won't
>> be hard, but it would be better to replace the "MEDIA_ENT_T_DEVNODE_V4L" by
>> something that actually describes the device type.
> 
> I don't agree with that. For complex devices a video node is just a DMA 
> engine, it's not tied to a device description as it can be connected to 
> different pipelines with different capabilities. What we need is a way to tag 
> an entity with several type information. For a device with a single video node 
> and no other entity, the video node entity will be tagged with 
> MEDIA_ENT_T_DEVNODE_V4L as well as "camera" or "webcam" for instance (this is 
> just an example, we need to think about the names). For a device such as the 
> OMAP3 ISP, video nodes will be tagged with MEDIA_ENT_T_DEVNODE_V4L only and 
> other entities will be tagged with "camera sensor", "resizer", ...

Ok, that sounds like it would work.

>> The same kind of logic might also apply to other device types. For
>> example, does ALSA mean a playback, a capture or a mixer device? Or does it mean
>> the complete audio hardware? (Probably not, as it would hide ALSA's internal
>> pipelines.)
>>
>>>> So, a MC aware application also needs to be a hardware-dependent
>>>> application, as it will need to use something else, like the media
>>>> entity name, to discover for what purpose a media node is meant to be
>>>> used.
>>>>
>>>>> While I am not opposed to creating such userspace maps I also think it
>>>>> is a bit of a poor-man's solution.
>>>>
>>>> The creation of per-type devices is part of the current API: radio
>>>> and vbi nodes are examples of that (except that they aren't aliases, but
>>>> real devices, but the idea is the same: different names for different
>>>> types of usage).
>>>
>>> That's why I'm not opposed to it. I'm just not sure how
>>> detailed/extensive that mapping should be.
>>>
>>>>> In particular I am worried that we get a
>>>>> lot of those mappings (just think of ivtv with its 8 or 9 devices).
>>>>>
>>>>> I can think of: webcam, tv, compressed (mpeg), tv-out, compressed-out,
>>>>> mem2mem.
>>>>>
>>>>> But a 'tv' node might also be able to handle compressed video
>>>>> (depending on how the hardware is organized), so how do you handle
>>>>> that?
>>>>
>>>> Well, what you've called "compressed" is, IMO, an "encoder". It
>>>> probably makes sense to also have a "decoder".
>>>
>>> I couldn't remember the name :-)
>>>
>>>> I'm in doubt about "webcam", as there are some
>>>> grabber devices with analog camera inputs for video surveillance. Maybe
>>>> "camera" is a better name for it.
>>>
>>> Hmm. 'webcam' or 'camera' implies settings like exposure, etc. Many video
>>> surveillance devices are just frame grabbers to which you can attach a
>>> camera, but you can just as easily attach any composite/S-video input.
>>
>> "grabber" is for sure another type.
>>
>> The thing with putting the device type in the name is that S_INPUT may
>> actually change the type of the device.
>>
>> Btw, a proper implementation of the MC API for a device that has
>> audio/video muxes internally would require mapping the internal pipelines
>> via MC. So, it is not something that just adding a V4L core "glue" can
>> solve. Even a "simple" device like em28xx or bttv would require lots of
>> changes internally, as those things are currently not implemented fully as
>> sub-devices. It would also create a conflict in userspace, as the same
>> pipeline could be changed via two different APIs. As I said before, too
>> many changes for no benefit.
> 
> You don't have to do that. You can keep the current API unchanged, and add MC 
> support in V4L2 core to only enumerate devices and query their capabilities.

Not doing that would mean a broken implementation of the MC API, as the
device pipelines wouldn't be shown there.

The point is: the MC API should either not be implemented at all or be properly
implemented, describing the device video (and audio) pipelines.

A simple core patch won't implement it properly. Changes to _all_ V4L drivers
will be required, in order to convert the audio and video pipeline code used
by the [G|S]_AUDIO and [G|S]_INPUT ioctls into MC code. Those changes are not
trivial, and will increase the drivers' complexity without a good reason.

It is way easier and more direct to, instead, add a layer in libv4l that
converts [G|S]_AUDIO and [G|S]_INPUT into MC pipeline setups when the device
implements the MC API, or just uses the V4L2 API otherwise.
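
Just to sketch the idea (the entity and pad numbers are hypothetical here and
would come from a per-board configuration, not from the code itself), such a
libv4l layer would basically turn S_INPUT into a MEDIA_IOC_SETUP_LINK call:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

static int enable_input_link(int media_fd, __u32 src_entity, __u16 src_pad,
                             __u32 sink_entity, __u16 sink_pad)
{
        struct media_link_desc link;

        memset(&link, 0, sizeof(link));
        link.source.entity = src_entity;
        link.source.index = src_pad;
        link.sink.entity = sink_entity;
        link.sink.index = sink_pad;
        link.flags = MEDIA_LNK_FL_ENABLED;

        return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}

The plugin's ioctl hook would call something like this when it intercepts
VIDIOC_S_INPUT (looking the entities up in its configuration), and pass
everything else straight through to the plain V4L2 ioctl.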

>>>>> It can all be solved, I'm sure, but I'm not sure if such userspace
>>>>> mappings will scale that well with the increasing hardware complexity.
>>>>
>>>> Not all video nodes would need an alias. Just the ones where it makes
>>>> sense for an application to open it.
>>>
>>> I'm not certain you will solve that much with this. Most people (i.e. the
>>> average linux users) have only one or two video devices, most likely a
>>> webcam and perhaps some DVB/V4L USB stick. Generic apps that needs to
>>> enumerate all devices will still need to use sysfs or go through all
>>> video nodes.
>>>
>>> It's why I like the MC: just one device node per hardware unit. Easy to
>>> enumerate, easy to present to the user.
>>
>> It is not one device node per hardware unit. Sysfs is needed to match
>> snd-usb-audio with the corresponding video device.
> 
> No, the goal is to have a single /dev/media* node for both the audio and video 
> devices in the webcam.

Maybe this is the goal, but it is not the current implementation.
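
For the record, the sysfs matching mentioned above boils down to resolving the
"device" links of both class devices and checking that they land under the
same parent. A rough sketch (the node names are just examples, and error
handling is minimal):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

static char *resolve_parent(const char *class_dev, char *buf)
{
        char link[PATH_MAX];

        /* e.g. /sys/class/video4linux/video0/device */
        snprintf(link, sizeof(link), "%s/device", class_dev);
        return realpath(link, buf);     /* buf must hold PATH_MAX bytes */
}

int main(void)
{
        char video[PATH_MAX], audio[PATH_MAX];

        if (!resolve_parent("/sys/class/video4linux/video0", video) ||
            !resolve_parent("/sys/class/sound/card1", audio))
                return 1;

        /* For a USB webcam both paths end up under the same USB device,
         * e.g. .../1-2/1-2:1.0 and .../1-2/1-2:1.1, so comparing them up
         * to the interface suffix tells whether they belong together. */
        printf("video: %s\naudio: %s\n", video, audio);
        return 0;
}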

>> Implementing MC to cover the non-SoC cases may work after a lot of effort.
>> It is like building a nuclear bomb to kill a bug. It will work, but a
>> bug spray will produce the same effect at a much smaller implementation
>> cost.
>>
>>> I'm tempted to see if I can make a proof-of-concept... Time is a problem
>>> for me, though.
> 

Regards,
Mauro

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: Embedded device and the  V4L2 API support - Was: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates
  2011-08-25 12:43                                 ` Mauro Carvalho Chehab
  2011-08-26 13:45                                   ` Laurent Pinchart
@ 2011-08-30 20:34                                   ` Sakari Ailus
  1 sibling, 0 replies; 44+ messages in thread
From: Sakari Ailus @ 2011-08-30 20:34 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sylwester Nawrocki, Laurent Pinchart, Sylwester Nawrocki,
	linux-media, Hans de Goede

Hi Mauro,

On Thu, Aug 25, 2011 at 09:43:56AM -0300, Mauro Carvalho Chehab wrote:
> Em 24-08-2011 19:29, Sakari Ailus escreveu:
> > Hi Mauro,
> > 
> > On Sat, Aug 20, 2011 at 05:12:56AM -0700, Mauro Carvalho Chehab wrote:
> >> Em 20-08-2011 04:27, Sylwester Nawrocki escreveu:
> >>> Hi Mauro,
> >>>
> >>> On 08/17/2011 08:13 AM, Mauro Carvalho Chehab wrote:
> >>>> It seems that there are too many misunderstandings, or maybe we're just
> >>>> saying the same thing in different ways.
> >>>>
> >>>> So, instead of answering again, let's restart this discussion in a
> >>>> different way.
> >>>>
> >>>> One of the requirements that it was discussed a lot on both mailing
> >>>> lists and on the Media Controllers meetings that we had (or, at least
> >>>> in the ones where I've participated) is that:
> >>>>
> >>>> 	"A pure V4L2 userspace application, knowing about video device
> >>>> 	 nodes only, can still use the driver. Not all advanced features
> >>>> 	 will be available."
> >>>
> >>> What does a term "a pure V4L2 userspace application" mean here ?
> >>
> >> The above quotation are exactly the Laurent's words that I took from one 
> >> of his replies.
> > 
> > I would define this as an application which uses V4L2 but does not use Media
> > controller or the V4L2 subdev interfaces nor is aware of any particular
> > hardware device.
> 
> As a general rule, applications should not be aware of any particular hardware.
> That's why we provide standard ways of doing things. Experience shows that
> hardware-aware applications become obsolete very fast. If you search the net,
> you'll see hardware-specific tools for bttv, zoran, etc. I doubt that they would
> still work today, as they depended on some particular special-case driver
> behavior that has changed over time. Also, if you look at the update timelines
> for those, you'll see that they were maintained only for a short
> period of time.

I agree.

> So, I think that all hardware-aware dependencies should be in the driver and
> in libv4l only. As proper usage of the MC API requires hardware-specific
> knowledge, I don't think that a generic application should bother
> to implement the MC API at all.

At least for link configuration, no. But for device discovery, I'd say
"perhaps", as this was one of the MC API's original goals. But that's a
separate discussion from this one. I think the question is how the user
should discover devices, e.g. which audio source is connected to a
video source.

> It should be noted, however, that having a hardware/driver-aware libv4l
> also implies that libv4l becomes dependent on a specific kernel version
> at the distributions.

I think a difference needs to be made between libv4l and libv4l plugins.

The plugin interface was added with plugins performing MC link setup etc. in
mind, but whether libv4l proper should do something specific on embedded
systems other than loading a plugin hasn't been discussed yet.

I could imagine this might be part of the libv4l core eventually.

> This already happens to some extent, but, currently, a new version of libv4l
> can work with an older kernel (as all we currently have there is support for
> new FOURCC formats and quirk lists of sensors mounted upside down).
> 
> So, a version made to work with kernel 3.0 will for sure support all webcams
> found in kernel 2.6.39.

As features are standardised this is easy. However, often standardising
something requires creating a few implementations privately. If one writes a
libv4l plugin using such private interfaces, it will break once the feature
is standardised.

A good, simple example of this could be the FRAME_SYNC event:

<URL:http://git.linuxtv.org/sailus/media_tree.git/shortlog/refs/heads/media-for-3.2-frame-sync-event-1>

I don't think this is an actual problem right now since there haven't been
many generic users of these interfaces but I think this is worth keeping in
mind.
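
To make the example concrete, a user of that proposed event would look roughly
like the sketch below (assuming the interface is merged in the form it has in
the branch above); if the event type or payload changed before merging, this
code would silently break:

#include <poll.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int wait_frame_sync(int video_fd)
{
        struct v4l2_event_subscription sub;
        struct v4l2_event ev;
        struct pollfd pfd = { .fd = video_fd, .events = POLLPRI };

        memset(&sub, 0, sizeof(sub));
        sub.type = V4L2_EVENT_FRAME_SYNC;
        if (ioctl(video_fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
                return -1;

        /* events are signalled as an exceptional condition on the fd */
        if (poll(&pfd, 1, -1) <= 0)
                return -1;

        memset(&ev, 0, sizeof(ev));
        if (ioctl(video_fd, VIDIOC_DQEVENT, &ev) < 0)
                return -1;

        return ev.u.frame_sync.frame_sequence;
}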

> >>> Does it also cover an application which is linked to libv4l2 and uses
> >>> calls specific to a particular hardware which are included there?
> >>
> >> That's a good question. We need to properly define what it means, to avoid
> >> having libv4l abuses.
> >>
> >> In other words, it seems ok to use libv4l to set pipelines via the MC API
> >> at open(), but it isn't ok to have an open() binary only libv4l plugin that
> >> will hook open and do the complete device initialization on userspace
> >> (I remember that one vendor once proposed a driver like that).
> >>
> >> Also, from my side, I'd like to see both libv4l and kernel drivers being
> >> submitted together, if the new driver depends on a special libv4l support
> >> for it to work.
> > 
> > I agree with the above.
> > 
> > I do favour using libv4l to do the pipeline setup using MC and V4L2 subdev
> > interfaces. This has the benefit that the driver provides just one interface
> > to access different aspects of it, be it pipeline setup (Media controller),
> > a control to change exposure time (V4L2 subdev) or queueing video buffers
> > (V4L2). This means simpler and more maintainable drivers and fewer bugs
> > in general.
> 
> I agree.
> 
> > Apart from what the drivers already provide on video nodes, to support a
> > general purpose V4L2 application, what a libv4l plugin can do is (not an
> > exhaustive list):
> 
> IMO, we need to write a full list of what should be done in libv4l, as it seems
> that there are different opinions about what should be in the driver, and what
> should be outside it.

In general we can describe this in terms of the capabilities of the hardware,
as those capabilities vary. I think a general rule should be
that the drivers should provide access to the capabilities of the hardware,
no more.

For example, some sensors run their own software automatic white balance
algorithms. A Fujitsu image sensor (M5-MOLS, if I remember correctly) does
this. But when the hardware doesn't do it, it often depends on user space what
kind of white balance algorithm is wanted. One example of this which I often
mention is Fcam, which is also open source. It requires quite low-level
access to hardware features such as those offered by the V4L2 subdevice
nodes.

> > - Perform pipeline setup using MC interface, possibly based on input
> >   selected using S_INPUT so that e.g. multiple sensors can be supported and
> > - implement {S,G,TRY}_EXT_CTRLS and QUERYCTRL using V4L2 subdev nodes as
> >   backend.
> > 
> > As the Media controller and V4L2 interfaces are standardised, I see no
> > reason why this plugin could not be fully generic: only the configuration is
> > device specific.
> 
> I don't think you can do everything that is needed in a fully generic plugin.
> 3A algorithm implementations, for example, seem to be device specific.

I hadn't gotten that far yet. :)

It depends on how you want to do it. Libv4l contains algorithms which use image
data and not statistics. Using the image data only depends on the
format of the data, so it can be considered hardware independent. It's less
efficient than using the statistics: doing the same computations on the
CPU takes CPU time and requires extra memory accesses.
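
A trivial example of such an image-data-based algorithm (a toy sketch, not the
actual libv4l code): a gray-world white balance only needs to know that the
buffer is RGB24, nothing about the sensor behind it:

#include <stddef.h>
#include <stdint.h>

static void grayworld_gains(const uint8_t *rgb, size_t pixels,
                            double *gain_r, double *gain_b)
{
        uint64_t sum_r = 0, sum_g = 0, sum_b = 0;
        size_t i;

        for (i = 0; i < pixels; i++) {
                sum_r += rgb[3 * i + 0];
                sum_g += rgb[3 * i + 1];
                sum_b += rgb[3 * i + 2];
        }

        /* scale red and blue so that their averages match green */
        *gain_r = sum_r ? (double)sum_g / sum_r : 1.0;
        *gain_b = sum_b ? (double)sum_g / sum_b : 1.0;
}

Doing the same from hardware statistics would avoid walking the whole frame on
the CPU, which is exactly the efficiency trade-off mentioned above.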

> Of course, the more we can do to have a generic implementation that fits
> most cases, the better.
> 
> > This configuration could be stored in a configuration file which is
> > selected based on the system type. On embedded systems (ARMs at least, but
> > anyway the vast majority is based on ARM) the board type is easily available
> > to user space applications in /proc/cpuinfo --- this example is from
> > the Nokia N900:
> > 
> > ---
> > Processor       : ARMv7 Processor rev 3 (v7l)
> > BogoMIPS        : 249.96
> > Features        : swp half fastmult vfp edsp neon vfpv3 
> > CPU implementer : 0x41
> > CPU architecture: 7
> > CPU variant     : 0x1
> > CPU part        : 0xc08
> > CPU revision    : 3
> > 
> > Hardware        : Nokia RX-51 board
> > Revision        : 2101
> > Serial          : 0000000000000000
> > ---
> > 
> > I think this would be a first step to support general purpose applications on
> > embedded systems.
> 
> Agreed.
> 
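
Following up on the /proc/cpuinfo idea above: picking the configuration file
could be as small as the sketch below (the "Hardware" key is what ARM kernels
print; the /etc/libv4l path scheme is just an assumption for illustration):

#include <stdio.h>

static int board_config_path(char *path, size_t len)
{
        char line[256], board[128] = "generic";
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (!f)
                return -1;

        /* grab the value of the "Hardware" line, if there is one */
        while (fgets(line, sizeof(line), f))
                if (sscanf(line, "Hardware : %127[^\n]", board) == 1)
                        break;
        fclose(f);

        return snprintf(path, len, "/etc/libv4l/%s.conf", board) < (int)len ? 0 : -1;
}
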
> > The question I still have on this is how the user should know which
> > video node to access on an embedded system with a camera: the OMAP 3 ISP,
> > for example, contains some eight video nodes which have different ISP blocks
> > connected to them. Likely two of these nodes are useful for a general
> > purpose application based on which image format it requests. It would make
> > sense to provide generic applications information only on those devices they
> > may meaningfully use.
> 
> IMO, we should create a namespace device mapping for video devices. What I mean
> is that we should keep the "raw" V4L2 devices as:
> 	/dev/video??
> But also recommend the creation of a new userspace map, like:
> 	/dev/webcam??
> 	/dev/tv??
> 	...
> which is an alias for the actual device.
> 
> Something similar to dvd/cdrom aliases that already happen on most distros:
> 
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrom -> sr0
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 cdrw -> sr0
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvd -> sr0
> lrwxrwxrwx   1 root root           3 Ago 24 12:14 dvdrw -> sr0
>  
> > Later on, more functionality could be added to better support hardware which
> > supports e.g. different pixel formats, image sizes, scaling and crop. I'm
> > not entirely certain if all of this is fully generic --- but I think the
> > vast majority of it is --- at least converting from v4l2_mbus_framefmt
> > pixelcode to v4l2_format.fmt is often quite hardware specific, which must be
> > taken into account by the generic plugin.
> > 
> > At that point many policy decisions must be made on how to use the hardware
> > the best way, and that must be also present in the configuration file.
> > 
> > But perhaps I'm going too far with this now; we don't yet have a generic
> > plugin providing basic functionality. We have the OMAP 3 ISP libv4l plugin
> > which might be a good starting point for this work.
> > 
> 
> We can start with that plugin, making it more generic, or fork it into two
> plugins: a generic one, and an OMAP3-specific implementation for the things
> that are hardware-specific.

I think at this point I would like to focus all the efforts towards a
generic plugin. We can later see what really cannot be done in that one at
all, and decide if we need that, whatever it turns out to be.

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2011-08-30 20:34 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-07-27 16:35 [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates Sylwester Nawrocki
2011-07-28  2:55 ` Mauro Carvalho Chehab
2011-07-28 10:09   ` Sylwester Nawrocki
2011-07-28 13:20     ` Mauro Carvalho Chehab
2011-07-28 22:57       ` Sylwester Nawrocki
2011-07-29  4:02         ` Mauro Carvalho Chehab
2011-07-29  8:36           ` Laurent Pinchart
2011-08-09 20:05             ` Mauro Carvalho Chehab
2011-08-09 23:18               ` Sakari Ailus
2011-08-10  0:22                 ` Mauro Carvalho Chehab
2011-08-10  8:41                   ` Sylwester Nawrocki
2011-08-10 12:52                     ` Mauro Carvalho Chehab
2011-08-15 12:45                   ` Laurent Pinchart
2011-08-16  0:21                     ` Mauro Carvalho Chehab
2011-08-16  8:59                       ` Laurent Pinchart
2011-08-16 15:07                         ` Mauro Carvalho Chehab
2011-08-15 12:30               ` Laurent Pinchart
2011-08-16  0:13                 ` Mauro Carvalho Chehab
2011-08-16  8:57                   ` Laurent Pinchart
2011-08-16 15:30                     ` Mauro Carvalho Chehab
2011-08-16 15:44                       ` Laurent Pinchart
2011-08-16 22:36                         ` Mauro Carvalho Chehab
2011-08-17  7:57                           ` Laurent Pinchart
2011-08-17 12:25                             ` Mauro Carvalho Chehab
2011-08-17 12:37                               ` Ivan T. Ivanov
2011-08-17 13:27                                 ` Mauro Carvalho Chehab
2011-08-17 12:33                           ` Sakari Ailus
2011-08-16 21:47                       ` Sylwester Nawrocki
2011-08-17  6:13                         ` Embedded device and the V4L2 API support - Was: " Mauro Carvalho Chehab
2011-08-20 11:27                           ` Sylwester Nawrocki
2011-08-20 12:12                             ` Mauro Carvalho Chehab
2011-08-24 22:29                               ` Sakari Ailus
2011-08-25 12:43                                 ` Mauro Carvalho Chehab
2011-08-26 13:45                                   ` Laurent Pinchart
2011-08-26 14:16                                     ` Hans Verkuil
2011-08-26 15:09                                       ` Mauro Carvalho Chehab
2011-08-26 15:29                                         ` Hans Verkuil
2011-08-26 17:32                                           ` Mauro Carvalho Chehab
2011-08-29  9:24                                             ` Laurent Pinchart
2011-08-29 14:53                                               ` Mauro Carvalho Chehab
2011-08-29  9:12                                         ` Laurent Pinchart
2011-08-30 20:34                                   ` Sakari Ailus
2011-08-03 14:28           ` Sylwester Nawrocki
2011-07-29  8:17   ` Sakari Ailus
