* [RFC] snapshot mode, flash capabilities and control
@ 2011-02-24 12:18 Guennadi Liakhovetski
  2011-02-24 12:40 ` Hans Verkuil
  2011-03-03  7:09 ` Kim, HeungJun
  0 siblings, 2 replies; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-24 12:18 UTC (permalink / raw)
  To: Linux Media Mailing List

Agenda.
=======

In a recent RFC [1] I proposed V4L2 API extensions to support fast switching
between multiple capture modes or data formats. However, that alone is not
sufficient to efficiently leverage the snapshot capabilities of existing
hardware - sensors and SoCs - and to satisfy user-space needs a few more
functions have to be implemented.

Snapshot and strobe / flash capabilities vary significantly between sensors.
Some only capture a single image upon trigger activation, some can capture
several; readout and exposure capabilities vary too. Not all sensors support a
strobe signal, and those that do offer very different options for selecting
strobe onset and duration. This proposal tries to define a minimal API that can
reasonably be supported by many systems while still providing a useful set of
functionality to the user.

Proposed implementation.
========================

1. Switch the interface into snapshot mode. This is required, in addition to
simply configuring the interface with a different format, to activate hardware-
specific support for triggered single-image capture. It is proposed to use the
VIDIOC_S_PARM ioctl() with a new V4L2_MODE_SNAPSHOT value for the
struct v4l2_captureparm::capturemode and ::capability fields. Further
hardware-specific details can be passed in ::extendedmode; ::readbuffers can be
used to specify the exact number of frames to be captured. Similarly,
VIDIOC_G_PARM shall return the supported and current capture modes.

Many sensors provide the ability to trigger snapshot capture either from an
external source or from a control register. Usually, however, there is no way
to select the trigger source; either of them can be used at any time.
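
To illustrate, here is a minimal user-space sketch of the proposed mode
switch (V4L2_MODE_SNAPSHOT is only proposed here and exists in no header
yet):

	struct v4l2_streamparm parm;

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	parm.parm.capture.capturemode = V4L2_MODE_SNAPSHOT;
	parm.parm.capture.readbuffers = 1;	/* capture a single frame */
	if (ioctl(fd, VIDIOC_S_PARM, &parm) < 0)
		perror("VIDIOC_S_PARM");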

2. Specify a flash mode. Define new capture capabilities to be used with
struct v4l2_captureparm::capturemode:

V4L2_MODE_FLASH_SYNC	/* synchronise flash with image capture */
V4L2_MODE_FLASH_ON	/* turn on - "torch-mode" */
V4L2_MODE_FLASH_OFF	/* turn off */

Obviously, the above synchronous operation does not define the exact beginning
and duration of the strobe signal. It is proposed to leave the specific flash
timing configuration to the driver itself and, possibly, to driver-specific
extended mode flags.
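
As a sketch, again using the proposed (not yet existing) flag values, and
assuming the mode values may be OR'ed together, synchronising the flash with
snapshot capture could then look like:

	parm.parm.capture.capturemode = V4L2_MODE_SNAPSHOT |
					V4L2_MODE_FLASH_SYNC;
	ioctl(fd, VIDIOC_S_PARM, &parm);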

3. Add a sensor-subdev operation

	int (*snapshot_trigger)(struct v4l2_subdev *sd)

to start capturing the next frame in the snapshot mode.
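
For illustration, a driver-side sketch of how an I2C sensor driver could
implement this operation (the REG_SNAPSHOT_TRIGGER register is made up):

	static int mysensor_snapshot_trigger(struct v4l2_subdev *sd)
	{
		struct i2c_client *client = v4l2_get_subdevdata(sd);

		/* write the sensor's (hypothetical) soft-trigger register */
		return i2c_smbus_write_byte_data(client, REG_SNAPSHOT_TRIGGER, 1);
	}

	static const struct v4l2_subdev_sensor_ops mysensor_sensor_ops = {
		.snapshot_trigger = mysensor_snapshot_trigger,
	};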

References.
===========

[1] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/29357

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-24 12:18 [RFC] snapshot mode, flash capabilities and control Guennadi Liakhovetski
@ 2011-02-24 12:40 ` Hans Verkuil
  2011-02-24 16:07   ` Guennadi Liakhovetski
  2011-03-03  7:09 ` Kim, HeungJun
  1 sibling, 1 reply; 50+ messages in thread
From: Hans Verkuil @ 2011-02-24 12:40 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Linux Media Mailing List

On Thursday, February 24, 2011 13:18:39 Guennadi Liakhovetski wrote:
> Agenda.
> =======
> 
> In a recent RFC [1] I proposed V4L2 API extensions to support fast switching
> between multiple capture modes or data formats. However, this is not sufficient
> to efficiently leverage snapshot capabilities of existing hardware - sensors and
> SoCs, and to satisfy user-space needs, a few more functions have to be
> implemented.
> 
> Snapshot and strobe / flash capabilities vary significantly between sensors.
> Some of them only capture a single image upon trigger activation, some can
> capture several images, readout and exposure capabilities vary too. Not all
> sensors support a strobe signal, and those, that support it, also offer very
> different options to select strobe beginning and duration. This proposal is
> trying to select a minimum API, that can be reasonably supported by many
> systems and provide a reasonable functionality set to the user.
> 
> Proposed implementation.
> ========================
> 
> 1. Switch the interface into the snapshot mode. This is required in addition to
> simply configuring the interface with a different format to activate hardware-
> specific support for triggered single image capture. It is proposed to use the
> VIDIOC_S_PARM ioctl() with a new V4L2_MODE_SNAPSHOT value for the
> struct v4l2_captureparm::capturemode and ::capability fields. Further
> hardware-specific details can be passed in ::extendedmode, ::readbuffers can be
> used to specify the exact number of frames to be captured. Similarly,
> VIDIOC_G_PARM shall return supported and current capture modes.
> 
> Many sensors provide the ability to trigger snapshot capture either from an
> external source or from a control register. Usually, however, there is no
> possibility to select the trigger source, either of them can be used at any
> time.

I'd rather see a new VIDIOC_G/S_SNAPSHOT ioctl than adding stuff to G/S_PARM.
Those G/S_PARM ioctls should never have been added to V4L2 in their current form.

AFAIK the only usable field is timeperframe; all others are either not used at
all or driver specific.

I am very much in favor of freezing the G/S_PARM ioctls.

> 2. Specify a flash mode. Define new capture capabilities to be used with
> struct v4l2_captureparm::capturemode:
> 
> V4L2_MODE_FLASH_SYNC	/* synchronise flash with image capture */
> V4L2_MODE_FLASH_ON	/* turn on - "torch-mode" */
> V4L2_MODE_FLASH_OFF	/* turn off */
> 
> Obviously, the above synchronous operation does not exactly define beginning and
> duration of the strobe signal. It is proposed to leave the specific flash timing
> configuration to the driver itself and, possibly, to driver-specific extended
> mode flags.

Isn't this something that can be done quite well with controls?
 
> 3. Add a sensor-subdev operation
> 
> 	int (*snapshot_trigger)(struct v4l2_subdev *sd)
> 
> to start capturing the next frame in the snapshot mode.

You might need a 'count' argument if you want to have multiple frames in snapshot
mode.
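
E.g. something like:

	int (*snapshot_trigger)(struct v4l2_subdev *sd, unsigned int count);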

Regards,

	Hans

> 
> References.
> ===========
> 
> [1] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/29357
> 
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco


* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-24 12:40 ` Hans Verkuil
@ 2011-02-24 16:07   ` Guennadi Liakhovetski
  2011-02-24 17:57     ` Kim HeungJun
  0 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-24 16:07 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Linux Media Mailing List

Hi Hans

Thanks for the review. Perhaps I should have mentioned in the original 
post that I've written this RFC as a basis for discussion. The 
specific API proposals in it are nothing solid, of course, so we can 
freely discuss it here now, or maybe we'll get a chance to discuss it 
together with the earlier one, concerning multiple queues, if and when we 
next meet in person ;) Concerning your specific comments:

On Thu, 24 Feb 2011, Hans Verkuil wrote:

> On Thursday, February 24, 2011 13:18:39 Guennadi Liakhovetski wrote:
> > Agenda.
> > =======
> > 
> > In a recent RFC [1] I proposed V4L2 API extensions to support fast switching
> > between multiple capture modes or data formats. However, this is not sufficient
> > to efficiently leverage snapshot capabilities of existing hardware - sensors and
> > SoCs, and to satisfy user-space needs, a few more functions have to be
> > implemented.
> > 
> > Snapshot and strobe / flash capabilities vary significantly between sensors.
> > Some of them only capture a single image upon trigger activation, some can
> > capture several images, readout and exposure capabilities vary too. Not all
> > sensors support a strobe signal, and those, that support it, also offer very
> > different options to select strobe beginning and duration. This proposal is
> > trying to select a minimum API, that can be reasonably supported by many
> > systems and provide a reasonable functionality set to the user.
> > 
> > Proposed implementation.
> > ========================
> > 
> > 1. Switch the interface into the snapshot mode. This is required in addition to
> > simply configuring the interface with a different format to activate hardware-
> > specific support for triggered single image capture. It is proposed to use the
> > VIDIOC_S_PARM ioctl() with a new V4L2_MODE_SNAPSHOT value for the
> > struct v4l2_captureparm::capturemode and ::capability fields. Further
> > hardware-specific details can be passed in ::extendedmode, ::readbuffers can be
> > used to specify the exact number of frames to be captured. Similarly,
> > VIDIOC_G_PARM shall return supported and current capture modes.
> > 
> > Many sensors provide the ability to trigger snapshot capture either from an
> > external source or from a control register. Usually, however, there is no
> > possibility to select the trigger source, either of them can be used at any
> > time.
> 
> I'd rather see a new VIDIOC_G/S_SNAPSHOT ioctl then adding stuff to G/S_PARM.
> Those G/S_PARM ioctls should never have been added to V4L2 in the current form.
> 
> AFAIK the only usable field is timeperframe, all others are either not used at
> all or driver specific.
> 
> I am very much in favor of freezing the G/S_PARM ioctls.

I see - of course, I knew nothing about this status of G/S_PARM. As it stands, 
it seemed a pretty good fit for what I want to do, but yes, I agree that 
this ioctl() as such is not very pretty, so sure, we can use a new one. 
But first, in principle you agree that adding this functionality to V4L2 
makes sense, instead of relying exclusively on STREAMON / STREAMOFF for 
still images? In fact, streaming vs. snapshot does fit in the same 
category as the fps setting, IMHO, doesn't it? So I think it would make sense 
to group them together if possible. One could even say: 0 fps means manual 
trigger, i.e., snapshot mode... But anyway, if we want to freeze / kill 
G/S_PARM, I will create a new ioctl().

Now, my proposal above puts the snapshot mode parallel to the continuous 
streaming mode. In this regard, maybe it would make sense to add a new 
capability V4L2_CAP_VIDEO_SNAPSHOT? Next, in addition to the standard 
streamon / qbuf / dqbuf / streamoff flow, we want to add an operation to 
switch to the snapshot mode and another one to query the current mode and 
its parameters. I agree that there is currently no suitable ioctl() that 
could be extended to manage the snapshot mode, apart from G/S_PARM. My 
question then is: shall we just add G/S_SNAPSHOT, or something more 
generic, like G/S_CAPMODE with STREAMING and SNAPSHOT as possible values, 
STREAMING being the default? That would make sense if there were other 
modes that could eventually be added - though I cannot think of any 
personally atm.

So, if we go for G/S_SNAPSHOT, shall they have on / off and number of 
frames per trigger as two parameters?
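
Just to make the question concrete, a hypothetical sketch (the names, layout
and ioctl numbers are all picked arbitrarily):

	struct v4l2_snapshot {
		__u32 enable;	/* 0: streaming (default), 1: snapshot mode */
		__u32 frames;	/* number of frames to capture per trigger */
		__u32 reserved[6];
	};

	#define VIDIOC_G_SNAPSHOT	_IOR('V', 91, struct v4l2_snapshot)
	#define VIDIOC_S_SNAPSHOT	_IOWR('V', 92, struct v4l2_snapshot)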

> > 2. Specify a flash mode. Define new capture capabilities to be used with
> > struct v4l2_captureparm::capturemode:
> > 
> > V4L2_MODE_FLASH_SYNC	/* synchronise flash with image capture */
> > V4L2_MODE_FLASH_ON	/* turn on - "torch-mode" */
> > V4L2_MODE_FLASH_OFF	/* turn off */
> > 
> > Obviously, the above synchronous operation does not exactly define beginning and
> > duration of the strobe signal. It is proposed to leave the specific flash timing
> > configuration to the driver itself and, possibly, to driver-specific extended
> > mode flags.
> 
> Isn't this something that can be done quite well with controls?

In principle - yes, I guess. I just wanted to group them together with the 
snapshot mode explicitly, because they are usually bound together. 
Although one could use the torch-mode with preview. So, yes, if we 
prefer them independent of snapshot, then controls will do the job.

> > 3. Add a sensor-subdev operation
> > 
> > 	int (*snapshot_trigger)(struct v4l2_subdev *sd)
> > 
> > to start capturing the next frame in the snapshot mode.
> 
> You might need a 'count' argument if you want to have multiple frames in snapshot
> mode.

Right, I wasn't sure whether we should issue this for each frame, but I 
think - yes - it is better to issue one call per trigger for the whole set.

Thanks
Guennadi

> 
> Regards,
> 
> 	Hans
> 
> > 
> > References.
> > ===========
> > 
> > [1] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/29357
> > 
> > Thanks
> > Guennadi
> > ---
> > Guennadi Liakhovetski, Ph.D.
> > Freelance Open-Source Software Developer
> > http://www.open-technology.de/
> 
> -- 
> Hans Verkuil - video4linux developer - sponsored by Cisco
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-24 16:07   ` Guennadi Liakhovetski
@ 2011-02-24 17:57     ` Kim HeungJun
  2011-02-25 10:05       ` Laurent Pinchart
  0 siblings, 1 reply; 50+ messages in thread
From: Kim HeungJun @ 2011-02-24 17:57 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Kim HeungJun, Hans Verkuil, Linux Media Mailing List

Hi Guennadi,

I think it's maybe a good suggestion for the current trend! (I'm not sure the word *trend* is right :))

But the flash or strobe attached to the sensor may need to be controlled by the user application
directly, not through the sensor subdev. For example, think of a flashlight application whose
only function is to power up the LED. In this case, if the flash can be controlled *only*
through the V4L2 API via S/G_PARM, the mechanism becomes a little complicated, even though
controlling the LED/strobe via S/G_PARM is possible: we would have to check whether the camera
is in the capture state, and other settings (like the FPS) would have to be checked as well.
In other words, just powering up the LED would require extra procedure.

Also, consider another case. Generally the LED/strobe/flash is connected to the camera
sensor directly, but it can also happen that the LED is connected to the processor and
controlled via GPIO. In that case using the LED framework looks better, I guess, and then
the user application can access the LED framework's sysfs node easily.

So, IMHO, the better option for now is probably to define a new CID such as V4L2_CID_STROBE.
When the flash is connected directly to the sensor, the V4L2_CID_STROBE handler controls it
over I2C or whatever command method the sensor uses; when it is connected directly to the
processor, we still use V4L2_CID_STROBE, but the LED framework code sits inside the CID
control functions.
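
To sketch the idea (the control ID value is arbitrary, and this uses the
v4l2_control-based subdev s_ctrl operation):

	/* hypothetical control - not in any header */
	#define V4L2_CID_STROBE		(V4L2_CID_CAMERA_CLASS_BASE + 32)

	static int mycam_s_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
	{
		switch (ctrl->id) {
		case V4L2_CID_STROBE:
			/* sensor-attached flash: program the sensor over I2C;
			 * CPU-attached flash: call into the LED framework here */
			return 0;
		}
		return -EINVAL;
	}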

Actually, I have also been thinking hard about this same issue for a long time, so I would very much like to talk about it.

How about dealing with this?

Regards,
Heungjun Kim


On Feb 25, 2011, at 1:07 AM, Guennadi Liakhovetski wrote:

> Hi Hans
> 
> Thanks for the review. Perhaps, I should have mentioned it in the original 
> post, I've written down this RFC to have a basis for a discussion. The 
> specific API proposals in it are nothing solid, of course, so, we can 
> freely discuss it here now, or, maybe we get a chance to discuss it 
> together with the earlier one, concerning multiple queues, if and when we 
> meet personally next time;) Concerning your specific comments:
> 
> On Thu, 24 Feb 2011, Hans Verkuil wrote:
> 
>> On Thursday, February 24, 2011 13:18:39 Guennadi Liakhovetski wrote:
>>> Agenda.
>>> =======
>>> 
>>> In a recent RFC [1] I proposed V4L2 API extensions to support fast switching
>>> between multiple capture modes or data formats. However, this is not sufficient
>>> to efficiently leverage snapshot capabilities of existing hardware - sensors and
>>> SoCs, and to satisfy user-space needs, a few more functions have to be
>>> implemented.
>>> 
>>> Snapshot and strobe / flash capabilities vary significantly between sensors.
>>> Some of them only capture a single image upon trigger activation, some can
>>> capture several images, readout and exposure capabilities vary too. Not all
>>> sensors support a strobe signal, and those, that support it, also offer very
>>> different options to select strobe beginning and duration. This proposal is
>>> trying to select a minimum API, that can be reasonably supported by many
>>> systems and provide a reasonable functionality set to the user.
>>> 
>>> Proposed implementation.
>>> ========================
>>> 
>>> 1. Switch the interface into the snapshot mode. This is required in addition to
>>> simply configuring the interface with a different format to activate hardware-
>>> specific support for triggered single image capture. It is proposed to use the
>>> VIDIOC_S_PARM ioctl() with a new V4L2_MODE_SNAPSHOT value for the
>>> struct v4l2_captureparm::capturemode and ::capability fields. Further
>>> hardware-specific details can be passed in ::extendedmode, ::readbuffers can be
>>> used to specify the exact number of frames to be captured. Similarly,
>>> VIDIOC_G_PARM shall return supported and current capture modes.
>>> 
>>> Many sensors provide the ability to trigger snapshot capture either from an
>>> external source or from a control register. Usually, however, there is no
>>> possibility to select the trigger source, either of them can be used at any
>>> time.
>> 
>> I'd rather see a new VIDIOC_G/S_SNAPSHOT ioctl then adding stuff to G/S_PARM.
>> Those G/S_PARM ioctls should never have been added to V4L2 in the current form.
>> 
>> AFAIK the only usable field is timeperframe, all others are either not used at
>> all or driver specific.
>> 
>> I am very much in favor of freezing the G/S_PARM ioctls.
> 
> Ic, of course, I knew nothing about this status of G/S_PARM. As it stands, 
> it seemed a pretty good fit for what I want to do, but, yes, I agree, that 
> this ioctl() as such is not very pretty, so, sure, we can use a new one. 
> But first, in principle you agree, that adding this functionality to V4L2 
> makes sense instead of relying exclusively on STREAMON / STREAMOFF for 
> still images? But in fact, streaming vs. snapshot does fit under the same 
> category as fps setting, IMHO, doesn't it? So, I think it would make sense 
> to group them together if possible. One could even say: 0fps means manual 
> trigger, i.e., snapshot mode... But anyway, if we want to freeze / kill it 
> - will create a new ioctl().
> 
> Now, my proposal above put the snapshot mode parallel to the continuous 
> streaming mode. In this regard, maybe it would make sense to add a new 
> capability V4L2_CAP_VIDEO_SNAPSHOT? Next, additionally to the standard 
> streamon / qbuf / dqbuf / streamoff flow we want to add an operation to 
> switch to the snapshot mode and another one to query the current mode and 
> its parameters. I agree, that there's currently no suitable ioctl(), that 
> could be extended to manage the snapshot mode, apart from G/S_PARM. My 
> question then is: shall we just add G/S_SNAPSHOT, or domething more 
> generic, like G/S_CAPMODE with STREAMING and SNAPSHOT as possible values, 
> STREAMING being the default. This would make sense if there were any other 
> modes, that could eventually be added? No, I cannot think of any 
> personally atm.
> 
> So, if we go for G/S_SNAPSHOT, shall they have on / off and number of 
> frames per trigger as two parameters?
> 
>>> 2. Specify a flash mode. Define new capture capabilities to be used with
>>> struct v4l2_captureparm::capturemode:
>>> 
>>> V4L2_MODE_FLASH_SYNC	/* synchronise flash with image capture */
>>> V4L2_MODE_FLASH_ON	/* turn on - "torch-mode" */
>>> V4L2_MODE_FLASH_OFF	/* turn off */
>>> 
>>> Obviously, the above synchronous operation does not exactly define beginning and
>>> duration of the strobe signal. It is proposed to leave the specific flash timing
>>> configuration to the driver itself and, possibly, to driver-specific extended
>>> mode flags.
>> 
>> Isn't this something that can be done quite well with controls?
> 
> In principle - yes, I guess, I just wanted to group them together with the 
> snapshot mode explicitly, because they are usually bound together. 
> Although, one could use the torch-mode with preview. So, yes, if we 
> prefer them independent from snapshot, then controls will do the job.
> 
>>> 3. Add a sensor-subdev operation
>>> 
>>> 	int (*snapshot_trigger)(struct v4l2_subdev *sd)
>>> 
>>> to start capturing the next frame in the snapshot mode.
>> 
>> You might need a 'count' argument if you want to have multiple frames in snapshot
>> mode.
> 
> Right, I wasn't sure whether we should issue this for each frame, but I 
> think - yes - it is better to fire one per trigger set.
> 
> Thanks
> Guennadi
> 
>> 
>> Regards,
>> 
>> 	Hans
>> 
>>> 
>>> References.
>>> ===========
>>> 
>>> [1] http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/29357
>>> 
>>> Thanks
>>> Guennadi
>>> ---
>>> Guennadi Liakhovetski, Ph.D.
>>> Freelance Open-Source Software Developer
>>> http://www.open-technology.de/
>> 
>> -- 
>> Hans Verkuil - video4linux developer - sponsored by Cisco
>> 
> 
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/



* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-24 17:57     ` Kim HeungJun
@ 2011-02-25 10:05       ` Laurent Pinchart
  2011-02-25 13:53         ` Sakari Ailus
  2011-02-25 16:58         ` Guennadi Liakhovetski
  0 siblings, 2 replies; 50+ messages in thread
From: Laurent Pinchart @ 2011-02-25 10:05 UTC (permalink / raw)
  To: Kim HeungJun
  Cc: Guennadi Liakhovetski, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Stanimir Varbanov

Hi,

On Thursday 24 February 2011 18:57:22 Kim HeungJun wrote:
> Hi Guennadi,
> 
> I think, it's maybe a good suggestion for current trend! ( I'm not sure
> this express *trend* is right :))
> 
> But, the flash or strobe control connected with sensor can be controlled by
> user application directly, not in the sensor subdev device. For example,
> let's think that there is a Flash light application. Its function is just
> power-up LED. So, in this case, if the flash control mechanism should be
> used *only* in the V4L2 API belonging through S/G_PARM, this mechanism
> might be a little complicated, nevertheless it's possible to control
> led/strobe using S/G_PARM. I mean that we must check the camera must be in
> the capture state or not, or another setting (like a FPS) should be
> checked. Namely, for powering up LED, the more procedure is needed.
> 
> Also, we can think another case. Generally, the LED/STROBE/FLASH is
> connected with camera sensor directly, but another case can be existed
> that LED connected with Processor by controlling GPIO. So, In this case,
> using LED framework look more better I guess, and then user application
> access LED frame sysfs node eaisly.

The flash is usually handled by a dedicated I2C flash controller (such as the 
ADP1653 used in the N900 - http://www.analog.com/static/imported-
files/data_sheets/ADP1653.pdf), which is in that case mapped to a v4l2_subdev. 
Simpler solutions of course exist, such as GPIO LEDs or LEDs connected 
directly to the sensor. We need an API that can support all those hardware 
options.

Let's also not forget that, in addition to the flash LEDs themselves, devices 
often feature an indicator LED (a small low-power red LED used to indicate 
that video capture is ongoing). The flash LEDs can also be used during video 
capture, and in focus assist mode (pre-flashing also comes to mind).

In addition to the modes, flash controllers can generate strobe signals or 
react to them. Durations are programmable, as well as current limits, and 
sometimes PWM/current source mode selection. The device can usually detect 
faults (such as over-current or over-temperature faults) that need to be 
reported to the user. And we haven't even discussed Xenon flashes.

Given the wide variety of parameters, I think it would make sense to use V4L2 
controls on the sub-device directly. If the flash LEDs are connected directly 
to the sensor, controls on the sensor sub-device can be used to select the 
flash parameters.
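
For instance - purely as an illustration, none of these control IDs exist 
today - one could imagine a control set along the lines of:

	V4L2_CID_FLASH_LED_MODE			/* off / flash / torch */
	V4L2_CID_FLASH_STROBE_SOURCE		/* software / external */
	V4L2_CID_FLASH_TIMEOUT			/* strobe duration limit */
	V4L2_CID_FLASH_INTENSITY		/* flash current limit */
	V4L2_CID_FLASH_TORCH_INTENSITY		/* torch mode current */
	V4L2_CID_FLASH_INDICATOR_INTENSITY	/* indicator LED */
	V4L2_CID_FLASH_FAULT			/* over-current, over-temperature, ... */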

This doesn't solve the flash/capture synchronization problem though. I don't 
think we need a dedicated snapshot capture mode at the V4L2 level. A way to 
configure the sensor to react to an external trigger provided by the flash 
controller is needed, and that could be a control on the flash sub-device. 
What we would probably miss is a way to issue a STREAMON with a number of 
frames to capture. A new ioctl is probably needed there. Maybe that would be 
an opportunity to create a new stream-control ioctl that could replace 
STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
video_ioctl2 internally).
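
As a sketch of that idea (all names and numbers are invented):

	/* hypothetical stream-control ioctl replacing STREAMON/STREAMOFF */
	struct v4l2_stream {
		__u32 type;	/* enum v4l2_buf_type */
		__u32 on;	/* 0: stop, 1: start */
		__u32 frames;	/* 0: unlimited, n: stop after n frames */
		__u32 reserved[5];
	};

	#define VIDIOC_STREAM	_IOWR('V', 93, struct v4l2_stream)

	/* correspondingly extended subdev operation */
	int (*s_stream)(struct v4l2_subdev *sd, int enable, unsigned int frames);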

> So, IMHO, probably it's the better option for now to make another CID like
> V4L2_CID_STROBE clearly, then in the case connected directly with sensor,
> we control using I2C or another command method for sensor in the
> V4L2_CID_STROBE, OR in the case connected directly with process, we also
> using V4L2_CID_STROBE, but the led framework code is in the CID control
> functions.
> 
> Actually, because I also think deeply same issue for a long time, so I
> wanna very talk about this issues.
> 
> How about dealing with this?
> 
> Regards,
> Heungjun Kim
> 
> 2011. 2. 25., 오전 1:07, Guennadi Liakhovetski wrote:
> > Hi Hans
> > 
> > Thanks for the review. Perhaps, I should have mentioned it in the
> > original post, I've written down this RFC to have a basis for a
> > discussion. The specific API proposals in it are nothing solid, of
> > course, so, we can freely discuss it here now, or, maybe we get a chance
> > to discuss it together with the earlier one, concerning multiple queues,
> > if and when we meet personally next time;) Concerning your specific
> > comments:
> > 
> > On Thu, 24 Feb 2011, Hans Verkuil wrote:
> >> On Thursday, February 24, 2011 13:18:39 Guennadi Liakhovetski wrote:
> >>> Agenda.
> >>> =======
> >>> 
> >>> In a recent RFC [1] I proposed V4L2 API extensions to support fast
> >>> switching between multiple capture modes or data formats. However,
> >>> this is not sufficient to efficiently leverage snapshot capabilities
> >>> of existing hardware - sensors and SoCs, and to satisfy user-space
> >>> needs, a few more functions have to be implemented.
> >>> 
> >>> Snapshot and strobe / flash capabilities vary significantly between
> >>> sensors. Some of them only capture a single image upon trigger
> >>> activation, some can capture several images, readout and exposure
> >>> capabilities vary too. Not all sensors support a strobe signal, and
> >>> those, that support it, also offer very different options to select
> >>> strobe beginning and duration. This proposal is trying to select a
> >>> minimum API, that can be reasonably supported by many systems and
> >>> provide a reasonable functionality set to the user.
> >>> 
> >>> Proposed implementation.
> >>> ========================
> >>> 
> >>> 1. Switch the interface into the snapshot mode. This is required in
> >>> addition to simply configuring the interface with a different format
> >>> to activate hardware- specific support for triggered single image
> >>> capture. It is proposed to use the VIDIOC_S_PARM ioctl() with a new
> >>> V4L2_MODE_SNAPSHOT value for the struct v4l2_captureparm::capturemode
> >>> and ::capability fields. Further hardware-specific details can be
> >>> passed in ::extendedmode, ::readbuffers can be used to specify the
> >>> exact number of frames to be captured. Similarly, VIDIOC_G_PARM shall
> >>> return supported and current capture modes.
> >>> 
> >>> Many sensors provide the ability to trigger snapshot capture either
> >>> from an external source or from a control register. Usually, however,
> >>> there is no possibility to select the trigger source, either of them
> >>> can be used at any time.
> >> 
> >> I'd rather see a new VIDIOC_G/S_SNAPSHOT ioctl then adding stuff to
> >> G/S_PARM. Those G/S_PARM ioctls should never have been added to V4L2 in
> >> the current form.
> >> 
> >> AFAIK the only usable field is timeperframe, all others are either not
> >> used at all or driver specific.
> >> 
> >> I am very much in favor of freezing the G/S_PARM ioctls.
> > 
> > Ic, of course, I knew nothing about this status of G/S_PARM. As it
> > stands, it seemed a pretty good fit for what I want to do, but, yes, I
> > agree, that this ioctl() as such is not very pretty, so, sure, we can
> > use a new one. But first, in principle you agree, that adding this
> > functionality to V4L2 makes sense instead of relying exclusively on
> > STREAMON / STREAMOFF for still images? But in fact, streaming vs.
> > snapshot does fit under the same category as fps setting, IMHO, doesn't
> > it? So, I think it would make sense to group them together if possible.
> > One could even say: 0fps means manual trigger, i.e., snapshot mode...
> > But anyway, if we want to freeze / kill it - will create a new ioctl().
> > 
> > Now, my proposal above put the snapshot mode parallel to the continuous
> > streaming mode. In this regard, maybe it would make sense to add a new
> > capability V4L2_CAP_VIDEO_SNAPSHOT? Next, additionally to the standard
> > streamon / qbuf / dqbuf / streamoff flow we want to add an operation to
> > switch to the snapshot mode and another one to query the current mode and
> > its parameters. I agree, that there's currently no suitable ioctl(), that
> > could be extended to manage the snapshot mode, apart from G/S_PARM. My
> > question then is: shall we just add G/S_SNAPSHOT, or domething more
> > generic, like G/S_CAPMODE with STREAMING and SNAPSHOT as possible values,
> > STREAMING being the default. This would make sense if there were any
> > other modes, that could eventually be added? No, I cannot think of any
> > personally atm.
> > 
> > So, if we go for G/S_SNAPSHOT, shall they have on / off and number of
> > frames per trigger as two parameters?
> > 
> >>> 2. Specify a flash mode. Define new capture capabilities to be used
> >>> with struct v4l2_captureparm::capturemode:
> >>> 
> >>> V4L2_MODE_FLASH_SYNC	/* synchronise flash with image capture */
> >>> V4L2_MODE_FLASH_ON	/* turn on - "torch-mode" */
> >>> V4L2_MODE_FLASH_OFF	/* turn off */
> >>> 
> >>> Obviously, the above synchronous operation does not exactly define
> >>> beginning and duration of the strobe signal. It is proposed to leave
> >>> the specific flash timing configuration to the driver itself and,
> >>> possibly, to driver-specific extended mode flags.
> >> 
> >> Isn't this something that can be done quite well with controls?
> > 
> > In principle - yes, I guess, I just wanted to group them together with
> > the snapshot mode explicitly, because they are usually bound together.
> > Although, one could use the torch-mode with preview. So, yes, if we
> > prefer them independent from snapshot, then controls will do the job.
> > 
> >>> 3. Add a sensor-subdev operation
> >>> 
> >>> 	int (*snapshot_trigger)(struct v4l2_subdev *sd)
> >>> 
> >>> to start capturing the next frame in the snapshot mode.
> >> 
> >> You might need a 'count' argument if you want to have multiple frames in
> >> snapshot mode.
> > 
> > Right, I wasn't sure whether we should issue this for each frame, but I
> > think - yes - it is better to fire one per trigger set.
> > 
> > Thanks
> > Guennadi
> > 
> >> Regards,
> >> 
> >> 	Hans
> >> 	
> >>> References.
> >>> ===========
> >>> 
> >>> [1]
> >>> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure
> >>> /29357

-- 
Regards,

Laurent Pinchart


* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 10:05       ` Laurent Pinchart
@ 2011-02-25 13:53         ` Sakari Ailus
  2011-02-25 17:08           ` Guennadi Liakhovetski
  2011-02-25 16:58         ` Guennadi Liakhovetski
  1 sibling, 1 reply; 50+ messages in thread
From: Sakari Ailus @ 2011-02-25 13:53 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Kim HeungJun, Guennadi Liakhovetski, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

On Fri, Feb 25, 2011 at 11:05:05AM +0100, Laurent Pinchart wrote:
> Hi,

Hi,

> On Thursday 24 February 2011 18:57:22 Kim HeungJun wrote:
> > Hi Guennadi,
> > 
> > I think, it's maybe a good suggestion for current trend! ( I'm not sure
> > this express *trend* is right :))
> > 
> > But, the flash or strobe control connected with sensor can be controlled by
> > user application directly, not in the sensor subdev device. For example,
> > let's think that there is a Flash light application. Its function is just
> > power-up LED. So, in this case, if the flash control mechanism should be
> > used *only* in the V4L2 API belonging through S/G_PARM, this mechanism
> > might be a little complicated, nevertheless it's possible to control
> > led/strobe using S/G_PARM. I mean that we must check the camera must be in
> > the capture state or not, or another setting (like a FPS) should be
> > checked. Namely, for powering up LED, the more procedure is needed.
> > 
> > Also, we can think another case. Generally, the LED/STROBE/FLASH is
> > connected with camera sensor directly, but another case can be existed
> > that LED connected with Processor by controlling GPIO. So, In this case,
> > using LED framework look more better I guess, and then user application
> > access LED frame sysfs node eaisly.
> 
> The flash is usually handled by a dedicated I2C flash controller (such as the 
> ADP1653 used in the N900 - http://www.analog.com/static/imported-
> files/data_sheets/ADP1653.pdf), which is in that case mapped to a v4l2_subdev. 
> Simpler solutions of course exist, such as GPIO LEDs or LEDs connected 
> directly to the sensor. We need an API that can support all those hardware 
> options.
> 
> Let's also not forget that, in addition to the flash LEDs itself, devices 
> often feature an indicator LED (a small low-power red LED used to indicate 
> that video capture is ongoing). The flash LEDs can also be used during video 
> capture, and in focus assist mode (pre-flashing also comes to mind).
> 
> In addition to the modes, flash controllers can generate strobe signals or 
> react on them. Durations are programmable, as well as current limits, and 
> sometimes PWM/current source mode selection. The device can usually detect 
> faults (such as over-current or over-temperature faults) that need to be 
> reported to the user. And we haven't even discussed Xenon flashes.
> 
> Given the wide variety of parameters, I think it would make sense to use V4L2 
> controls on the sub-device directly. If the flash LEDs are connected directly 
> to the sensor, controls on the sensor sub-device can be used to select the 
> flash parameters.
> 
> This doesn't solve the flash/capture synchronization problem though. I don't 
> think we need a dedicated snapshot capture mode at the V4L2 level. A way to 

I agree with that. Flash synchronisation is just one of the many parameters
that would benefit from frame-level synchronisation. Exposure time, gain
etc. are also such. Sensors provide varying levels of hardware support
for all these.

Flash and indicator power setting can be included to the list of controls
above.

One more thing that suggests the flash control should be available as a
separate subdev is supporting the use of the flash in torch mode without
any streaming at all. The power management of the camera is
preferably optimised for speed, so that the camera related devices need not
be power cycled when using it. If the flash interface is available on a
separate subdev, the flash can also easily be powered separately without
making this a special case --- the rest of the camera related devices (ISP,
lens and sensor) should stay powered off.

> configure the sensor to react on an external trigger provided by the flash 
> controller is needed, and that could be a control on the flash sub-device. 
> What we would probably miss is a way to issue a STREAMON with a number of 
> frames to capture. A new ioctl is probably needed there. Maybe that would be 
> an opportunity to create a new stream-control ioctl that could replace 
> STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> video_ioctl2 internally).

How would this be different from queueing n frames (in total; count
dequeueing, too) and issuing streamon? --- Except that when the last frame
is processed the pipeline could already be stopped before issuing STREAMOFF.
That does indeed have some benefits. Something else?
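
For reference, the flow I mean, using today's API (assuming n mmap'ed
buffers have already been requested):

	struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
				   .memory = V4L2_MEMORY_MMAP };
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	unsigned int i;

	for (i = 0; i < n; i++) {
		buf.index = i;
		ioctl(fd, VIDIOC_QBUF, &buf);
	}
	ioctl(fd, VIDIOC_STREAMON, &type);
	for (i = 0; i < n; i++)
		ioctl(fd, VIDIOC_DQBUF, &buf);	/* do not re-queue */
	ioctl(fd, VIDIOC_STREAMOFF, &type);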

Regards,

-- 
Sakari Ailus
sakari dot ailus at iki dot fi


* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 10:05       ` Laurent Pinchart
  2011-02-25 13:53         ` Sakari Ailus
@ 2011-02-25 16:58         ` Guennadi Liakhovetski
  2011-02-25 18:49           ` Sakari Ailus
  1 sibling, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-25 16:58 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Kim HeungJun, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Stanimir Varbanov

Hi Laurent

On Fri, 25 Feb 2011, Laurent Pinchart wrote:

> Hi,
> 
> On Thursday 24 February 2011 18:57:22 Kim HeungJun wrote:
> > Hi Guennadi,
> > 
> > I think, it's maybe a good suggestion for current trend! ( I'm not sure
> > this express *trend* is right :))
> > 
> > But, the flash or strobe control connected with sensor can be controlled by
> > user application directly, not in the sensor subdev device. For example,
> > let's think that there is a Flash light application. Its function is just
> > power-up LED. So, in this case, if the flash control mechanism should be
> > used *only* in the V4L2 API belonging through S/G_PARM, this mechanism
> > might be a little complicated, nevertheless it's possible to control
> > led/strobe using S/G_PARM. I mean that we must check the camera must be in
> > the capture state or not, or another setting (like a FPS) should be
> > checked. Namely, for powering up LED, the more procedure is needed.
> > 
> > Also, we can think another case. Generally, the LED/STROBE/FLASH is
> > connected with camera sensor directly, but another case can be existed
> > that LED connected with Processor by controlling GPIO. So, In this case,
> > using LED framework look more better I guess, and then user application
> > access LED frame sysfs node eaisly.
> 
> The flash is usually handled by a dedicated I2C flash controller (such as the 
> ADP1653 used in the N900 - http://www.analog.com/static/imported-
> files/data_sheets/ADP1653.pdf), which is in that case mapped to a v4l2_subdev. 
> Simpler solutions of course exist, such as GPIO LEDs or LEDs connected 
> directly to the sensor. We need an API that can support all those hardware 
> options.

In principle - yes, and yes, I do realise that the couple of controls 
that I've proposed only cover a very minor subset of the whole flash 
function palette. The purposes of my RFC were:

1. get things started in the snapshot / flash direction;)
2. get access to dedicated snapshot / flash registers, present on many 
sensors and SoCs
3. get at least the very basic snapshot / flash functions, common to most 
hardware implementations, but trying to make it future-proof for further 
extensions
4. get a basis for a future detailed discussion

> Let's also not forget that, in addition to the flash LEDs itself, devices 
> often feature an indicator LED (a small low-power red LED used to indicate 
> that video capture is ongoing).

Well, this one doesn't seem too special to me? Wouldn't it suffice to just 
toggle it from user-space on streamon / streamoff?

> The flash LEDs can also be used during video 
> capture, and in focus assist mode (pre-flashing also comes to mind).

Sure, so we have to design an API that addresses the basic uses we see 
immediately in sensors, but also try to make it extensible? Are there 
sensors that do that pre-flashing automatically? If not, then you 
probably use some dedicated controller for that?

> In addition to the modes, flash controllers can generate strobe signals or 
> react on them. Durations are programmable, as well as current limits, and 
> sometimes PWM/current source mode selection. The device can usually detect 
> faults (such as over-current or over-temperature faults) that need to be 
> reported to the user. And we haven't even discussed Xenon flashes.
> 
> Given the wide variety of parameters, I think it would make sense to use V4L2 
> controls on the sub-device directly. If the flash LEDs are connected directly 
> to the sensor, controls on the sensor sub-device can be used to select the 
> flash parameters.

But you're not proposing to handle on-sensor flash controls from a 
separate subdevice? Isn't it possible to have this functionality 
either attached to the sensor subdevice or to a dedicated flash subdevice? 
Maybe we could just add a flash capability indicator to the subdev API? 
And flash is not unique here; the same holds for some other functions, e.g., 
shutter / exposure - you can either use on-chip electronic exposure 
control, or an external mechanical shutter, possibly also controlled by a 
separate IC... Same for lenses, etc.?

> This doesn't solve the flash/capture synchronization problem though. I don't 
> think we need a dedicated snapshot capture mode at the V4L2 level. A way to 
> configure the sensor to react on an external trigger provided by the flash 
> controller is needed, and that could be a control on the flash sub-device. 

Well... Sensors call this a "snapshot mode." I don't care that much what we 
_call_ it, but I do think that we should be able to use it.

Hm, I don't think only the "flash subdevice" has to know about this. First, 
you have to switch the sensor into that mode. Second, the trigger might be 
either an external one from the flash controller, or a programmed trigger and a 
flash strobe from the sensor to the flash (controller). Third, well, not 
quite sure, but doesn't the host have to know about the snapshot mode? 
What would the host have to know about? A different format? That can and shall 
be configured by standard means. Number of frames? Not sure if it needs 
to know about that, maybe not. But at least both the sensor and the flash 
(if separate) do.

> What we would probably miss is a way to issue a STREAMON with a number of 
> frames to capture. A new ioctl is probably needed there. Maybe that would be 
> an opportunity to create a new stream-control ioctl that could replace 
> STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> video_ioctl2 internally).

Not sure if we need this. Could you elaborate on why, in your opinion, it is 
needed?

Thanks
Guennadi

> > So, IMHO, probably it's the better option for now to make another CID like
> > V4L2_CID_STROBE clearly, then in the case connected directly with sensor,
> > we control using I2C or another command method for sensor in the
> > V4L2_CID_STROBE, OR in the case connected directly with process, we also
> > using V4L2_CID_STROBE, but the led framework code is in the CID control
> > functions.
> > 
> > Actually, because I also think deeply same issue for a long time, so I
> > wanna very talk about this issues.
> > 
> > How about dealing with this?
> > 
> > Regards,
> > Heungjun Kim
> > 
> > On Feb 25, 2011, at 1:07 AM, Guennadi Liakhovetski wrote:
> > > Hi Hans
> > > 
> > > Thanks for the review. Perhaps, I should have mentioned it in the
> > > original post, I've written down this RFC to have a basis for a
> > > discussion. The specific API proposals in it are nothing solid, of
> > > course, so, we can freely discuss it here now, or, maybe we get a chance
> > > to discuss it together with the earlier one, concerning multiple queues,
> > > if and when we meet personally next time;) Concerning your specific
> > > comments:
> > > 
> > > On Thu, 24 Feb 2011, Hans Verkuil wrote:
> > >> On Thursday, February 24, 2011 13:18:39 Guennadi Liakhovetski wrote:
> > >>> Agenda.
> > >>> =======
> > >>> 
> > >>> In a recent RFC [1] I proposed V4L2 API extensions to support fast
> > >>> switching between multiple capture modes or data formats. However,
> > >>> this is not sufficient to efficiently leverage snapshot capabilities
> > >>> of existing hardware - sensors and SoCs, and to satisfy user-space
> > >>> needs, a few more functions have to be implemented.
> > >>> 
> > >>> Snapshot and strobe / flash capabilities vary significantly between
> > >>> sensors. Some of them only capture a single image upon trigger
> > >>> activation, some can capture several images, readout and exposure
> > >>> capabilities vary too. Not all sensors support a strobe signal, and
> > >>> those, that support it, also offer very different options to select
> > >>> strobe beginning and duration. This proposal is trying to select a
> > >>> minimum API, that can be reasonably supported by many systems and
> > >>> provide a reasonable functionality set to the user.
> > >>> 
> > >>> Proposed implementation.
> > >>> ========================
> > >>> 
> > >>> 1. Switch the interface into the snapshot mode. This is required in
> > >>> addition to simply configuring the interface with a different format
> > >>> to activate hardware- specific support for triggered single image
> > >>> capture. It is proposed to use the VIDIOC_S_PARM ioctl() with a new
> > >>> V4L2_MODE_SNAPSHOT value for the struct v4l2_captureparm::capturemode
> > >>> and ::capability fields. Further hardware-specific details can be
> > >>> passed in ::extendedmode, ::readbuffers can be used to specify the
> > >>> exact number of frames to be captured. Similarly, VIDIOC_G_PARM shall
> > >>> return supported and current capture modes.
> > >>> 
> > >>> Many sensors provide the ability to trigger snapshot capture either
> > >>> from an external source or from a control register. Usually, however,
> > >>> there is no possibility to select the trigger source, either of them
> > >>> can be used at any time.
> > >> 
> > >> I'd rather see a new VIDIOC_G/S_SNAPSHOT ioctl then adding stuff to
> > >> G/S_PARM. Those G/S_PARM ioctls should never have been added to V4L2 in
> > >> the current form.
> > >> 
> > >> AFAIK the only usable field is timeperframe, all others are either not
> > >> used at all or driver specific.
> > >> 
> > >> I am very much in favor of freezing the G/S_PARM ioctls.
> > > 
> > > Ic, of course, I knew nothing about this status of G/S_PARM. As it
> > > stands, it seemed a pretty good fit for what I want to do, but, yes, I
> > > agree, that this ioctl() as such is not very pretty, so, sure, we can
> > > use a new one. But first, in principle you agree, that adding this
> > > functionality to V4L2 makes sense instead of relying exclusively on
> > > STREAMON / STREAMOFF for still images? But in fact, streaming vs.
> > > snapshot does fit under the same category as fps setting, IMHO, doesn't
> > > it? So, I think it would make sense to group them together if possible.
> > > One could even say: 0fps means manual trigger, i.e., snapshot mode...
> > > But anyway, if we want to freeze / kill it - will create a new ioctl().
> > > 
> > > Now, my proposal above put the snapshot mode parallel to the continuous
> > > streaming mode. In this regard, maybe it would make sense to add a new
> > > capability V4L2_CAP_VIDEO_SNAPSHOT? Next, additionally to the standard
> > > streamon / qbuf / dqbuf / streamoff flow we want to add an operation to
> > > switch to the snapshot mode and another one to query the current mode and
> > > its parameters. I agree, that there's currently no suitable ioctl(), that
> > > could be extended to manage the snapshot mode, apart from G/S_PARM. My
> > > question then is: shall we just add G/S_SNAPSHOT, or domething more
> > > generic, like G/S_CAPMODE with STREAMING and SNAPSHOT as possible values,
> > > STREAMING being the default. This would make sense if there were any
> > > other modes, that could eventually be added? No, I cannot think of any
> > > personally atm.
> > > 
> > > So, if we go for G/S_SNAPSHOT, shall they have on / off and number of
> > > frames per trigger as two parameters?
> > > 
> > >>> 2. Specify a flash mode. Define new capture capabilities to be used
> > >>> with struct v4l2_captureparm::capturemode:
> > >>> 
> > >>> V4L2_MODE_FLASH_SYNC	/* synchronise flash with image capture */
> > >>> V4L2_MODE_FLASH_ON	/* turn on - "torch-mode" */
> > >>> V4L2_MODE_FLASH_OFF	/* turn off */
> > >>> 
> > >>> Obviously, the above synchronous operation does not exactly define
> > >>> beginning and duration of the strobe signal. It is proposed to leave
> > >>> the specific flash timing configuration to the driver itself and,
> > >>> possibly, to driver-specific extended mode flags.
> > >> 
> > >> Isn't this something that can be done quite well with controls?
> > > 
> > > In principle - yes, I guess, I just wanted to group them together with
> > > the snapshot mode explicitly, because they are usually bound together.
> > > Although, one could use the torch-mode with preview. So, yes, if we
> > > prefer them independent from snapshot, then controls will do the job.
> > > 
> > >>> 3. Add a sensor-subdev operation
> > >>> 
> > >>> 	int (*snapshot_trigger)(struct v4l2_subdev *sd)
> > >>> 
> > >>> to start capturing the next frame in the snapshot mode.
> > >> 
> > >> You might need a 'count' argument if you want to have multiple frames in
> > >> snapshot mode.
> > > 
> > > Right, I wasn't sure whether we should issue this for each frame, but I
> > > think - yes - it is better to fire one per trigger set.
> > > 
> > > Thanks
> > > Guennadi
> > > 
> > >> Regards,
> > >> 
> > >> 	Hans
> > >> 	
> > >>> References.
> > >>> ===========
> > >>> 
> > >>> [1]
> > >>> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure
> > >>> /29357
> 
> -- 
> Regards,
> 
> Laurent Pinchart
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 13:53         ` Sakari Ailus
@ 2011-02-25 17:08           ` Guennadi Liakhovetski
  2011-02-25 18:55             ` Sakari Ailus
  2011-02-26 12:31             ` Hans Verkuil
  0 siblings, 2 replies; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-25 17:08 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

Hi Sakari

On Fri, 25 Feb 2011, Sakari Ailus wrote:

> On Fri, Feb 25, 2011 at 11:05:05AM +0100, Laurent Pinchart wrote:
> > Hi,
> 
> Hi,
> 
> > On Thursday 24 February 2011 18:57:22 Kim HeungJun wrote:
> > > Hi Guennadi,
> > > 
> > > I think, it's maybe a good suggestion for current trend! ( I'm not sure
> > > this express *trend* is right :))
> > > 
> > > But, the flash or strobe control connected with sensor can be controlled by
> > > user application directly, not in the sensor subdev device. For example,
> > > let's think that there is a Flash light application. Its function is just
> > > power-up LED. So, in this case, if the flash control mechanism should be
> > > used *only* in the V4L2 API belonging through S/G_PARM, this mechanism
> > > might be a little complicated, nevertheless it's possible to control
> > > led/strobe using S/G_PARM. I mean that we must check the camera must be in
> > > the capture state or not, or another setting (like a FPS) should be
> > > checked. Namely, for powering up LED, the more procedure is needed.
> > > 
> > > Also, we can think another case. Generally, the LED/STROBE/FLASH is
> > > connected with camera sensor directly, but another case can be existed
> > > that LED connected with Processor by controlling GPIO. So, In this case,
> > > using LED framework look more better I guess, and then user application
> > > access LED frame sysfs node eaisly.
> > 
> > The flash is usually handled by a dedicated I2C flash controller (such as the 
> > ADP1653 used in the N900 - http://www.analog.com/static/imported-
> > files/data_sheets/ADP1653.pdf), which is in that case mapped to a v4l2_subdev. 
> > Simpler solutions of course exist, such as GPIO LEDs or LEDs connected 
> > directly to the sensor. We need an API that can support all those hardware 
> > options.
> > 
> > Let's also not forget that, in addition to the flash LEDs itself, devices 
> > often feature an indicator LED (a small low-power red LED used to indicate 
> > that video capture is ongoing). The flash LEDs can also be used during video 
> > capture, and in focus assist mode (pre-flashing also comes to mind).
> > 
> > In addition to the modes, flash controllers can generate strobe signals or 
> > react on them. Durations are programmable, as well as current limits, and 
> > sometimes PWM/current source mode selection. The device can usually detect 
> > faults (such as over-current or over-temperature faults) that need to be 
> > reported to the user. And we haven't even discussed Xenon flashes.
> > 
> > Given the wide variety of parameters, I think it would make sense to use V4L2 
> > controls on the sub-device directly. If the flash LEDs are connected directly 
> > to the sensor, controls on the sensor sub-device can be used to select the 
> > flash parameters.
> > 
> > This doesn't solve the flash/capture synchronization problem though. I don't 
> > think we need a dedicated snapshot capture mode at the V4L2 level. A way to 
> 
> I agree with that. Flash synchronisation is just one of the many parameters
> that would benefit from frame level synchronisation. Exposure time, gain
> etc. are also such. The sensors provide varying level of hardware support
> for all these.

Well, that's true, but... From what I've seen so far, many sensors 
synchronise such sensitive configuration changes with their frame readout 
automatically, i.e., you configure some new parameter in a sensor 
register, but it only becomes valid with the next frame. On other 
sensors you can issue a "hold" command, perform any needed changes, then 
issue a "commit", and all your changes will be applied atomically.
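
E.g., with invented register names (some SMIA-style sensors have a similar
grouped-parameter hold):

	i2c_smbus_write_byte_data(client, REG_GROUP_HOLD, 1);	/* hold */
	i2c_smbus_write_byte_data(client, REG_EXPOSURE, exposure);
	i2c_smbus_write_byte_data(client, REG_ANALOG_GAIN, gain);
	i2c_smbus_write_byte_data(client, REG_GROUP_HOLD, 0);	/* commit */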

Also, we already _can_ configure gain, exposure and many other parameters, 
but we have no way to use sensor snapshot and flash-synchronisation 
capabilities.

What we could also do is add an optional callback to the subdev (core?) 
operations, which, if activated, the host would call on each frame 
completion.
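
E.g. something like (whether it belongs in the core ops is an open question):

	/* called by the host on each frame completion, if non-NULL */
	int (*frame_done)(struct v4l2_subdev *sd);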

> Flash and indicator power setting can be included to the list of controls
> above.

As I replied to Laurent, I'm not sure we need to control the indicator LED 
from V4L2, unless there are sensors that have support for that.

> One more thing that suggests the flash control should be available as a
> separate subdev is to support the use of flash in torch mode without
> performing streaming at all.

Well, yes, that's what my V4L2_MODE_FLASH_ON was supposed to do - 
independently of the streaming status. Maybe placing it in 
v4l2_captureparm::capturemode made it look like it was related to 
streaming status, but it certainly should be possible without any capture 
at all.
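
To make that concrete, a user-space sketch of turning the torch on with the 
proposed mode (V4L2_MODE_FLASH_ON is from this RFC, not yet in 
videodev2.h):

	struct v4l2_streamparm parm;

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	parm.parm.capture.capturemode = V4L2_MODE_FLASH_ON;	/* proposed */
	if (ioctl(fd, VIDIOC_S_PARM, &parm) < 0)
		perror("VIDIOC_S_PARM");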

> The power management of the camera is
> preferably optimised for speed, so that the camera-related devices need not
> be power cycled when using it. If the flash interface is available on a
> separate subdev, the flash can also be easily powered separately without
> making this a special case --- the rest of the camera-related devices (ISP,
> lens and sensor) should stay powered off.
> 
> > configure the sensor to react on an external trigger provided by the flash 
> > controller is needed, and that could be a control on the flash sub-device. 
> > What we would probably miss is a way to issue a STREAMON with a number of 
> > frames to capture. A new ioctl is probably needed there. Maybe that would be 
> > an opportunity to create a new stream-control ioctl that could replace 
> > STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> > operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> > video_ioctl2 internally).
> 
> How would this be different from queueing n frames (in total; count
> dequeueing, too) and issuing streamon? --- Except that when the last frame
> is processed the pipeline could be stopped already before issuing STREAMOFF.
> That does indeed have some benefits. Something else?

Well, you usually see in your host driver that the videobuffer queue is 
empty (no more free buffers are available), so you stop streaming 
immediately too.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 16:58         ` Guennadi Liakhovetski
@ 2011-02-25 18:49           ` Sakari Ailus
  2011-02-25 20:33             ` Guennadi Liakhovetski
  0 siblings, 1 reply; 50+ messages in thread
From: Sakari Ailus @ 2011-02-25 18:49 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Sakari Ailus, Stanimir Varbanov

Hi Guennadi,

Guennadi Liakhovetski wrote:
> In principle - yes, and yes, I do realise that the couple of controls 
> that I've proposed only cover a very minor subset of the whole flash 
> function palette. The purposes of my RFC were:

Why would there be a different interface for controlling the flash in
simple cases and more complex cases?

As far as I see it, the way the flash is accessed should be the same in
both cases --- if more complex functionality is required, that would be
implemented using additional means (controls, for example).

The drivers should use sane defaults for controls like power and length.
I assume things like the mode (strobe/continuous) are fairly standard
functionality.
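
As a sketch of that direction, the flash parameters could look like the 
following --- the control IDs below are made up for illustration and do not 
exist in videodev2.h:

	/* hypothetical flash controls on the flash (or sensor) subdev */
	#define V4L2_CID_FLASH_MODE	(V4L2_CID_CAMERA_CLASS_BASE + 32)
	#define V4L2_CID_FLASH_TIMEOUT	(V4L2_CID_CAMERA_CLASS_BASE + 33)
	#define V4L2_CID_FLASH_INTENSITY (V4L2_CID_CAMERA_CLASS_BASE + 34)

with V4L2_CID_FLASH_MODE selecting off / strobe / torch, the timeout 
bounding the strobe length and the intensity mapping to the LED current.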

> 1. get things started in the snapshot / flash direction;)
> 2. get access to dedicated snapshot / flash registers, present on many 
> sensors and SoCs
> 3. get at least the very basic snapshot / flash functions, common to most 
> hardware implementations, but trying to make it future-proof for further 
> extensions
> 4. get a basis for a future detailed discussion
> 
>> Let's also not forget that, in addition to the flash LEDs themselves, devices 
>> often feature an indicator LED (a small low-power red LED used to indicate 
>> that video capture is ongoing).
> 
> Well, this one doesn't seem too special to me? Wouldn't it suffice to just 
> toggle it from user-space on streamon / streamoff?

And what if you want to use the LED independently of the streaming state? :-)

>> This doesn't solve the flash/capture synchronization problem though. I don't 
>> think we need a dedicated snapshot capture mode at the V4L2 level. A way to 
>> configure the sensor to react on an external trigger provided by the flash 
>> controller is needed, and that could be a control on the flash sub-device. 
> 
> Well... Sensors call this a "snapshot mode." I don't care that much how we 
> _call_ it, but I do think that we should be able to use it.

Some sensors and webcams might have that, but newer camera solutions
tend to contain a raw Bayer sensor and an ISP. There is no concept of
a snapshot mode in these sensors.

> Hm, don't think only the "flash subdevice" has to know about this. First, 
> you have to switch the sensor into that mode. Second, it might be either 
> external trigger from the flash controller, or a programmed trigger and a 
> flash strobe from the sensor to the flash (controller). Third, well, not 
> quite sure, but doesn't the host have to know about the snapshot mode? 

I do not favour adding use-case-type functionality to interfaces that
do not necessarily need it. Would the concept of a snapshot be
parametrisable at the V4L2 level?

Otherwise we may end up adding interfaces for use-case-specific things. The
use cases vary a lot more than the individual features that are required
to implement them, suggesting it's relatively easy to add redundant
functionality to the API.

-- 
Sakari Ailus
sakari.ailus@maxwell.research.nokia.com

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 17:08           ` Guennadi Liakhovetski
@ 2011-02-25 18:55             ` Sakari Ailus
  2011-02-25 20:56               ` Guennadi Liakhovetski
  2011-02-26 12:31             ` Hans Verkuil
  1 sibling, 1 reply; 50+ messages in thread
From: Sakari Ailus @ 2011-02-25 18:55 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

On Fri, Feb 25, 2011 at 06:08:07PM +0100, Guennadi Liakhovetski wrote:
> Hi Sakari

Hi Guennadi,

> On Fri, 25 Feb 2011, Sakari Ailus wrote:
> > > I agree with that. Flash synchronisation is just one of the many parameters
> > > that would benefit from frame-level synchronisation. Exposure time, gain,
> > > etc. are other examples. The sensors provide varying levels of hardware
> > > support for all of these.
> 
> Well, that's true, but... From what I've seen so far, many sensors 
> synchronise such sensitive configuration changes with their frame readout 
> automatically, i.e., you configure some new parameter in a sensor 
> register, but it will only become valid with the next frame. On other 
> > sensors you can issue a "hold" command, perform any needed changes, then 
> > issue a "commit", and all your changes will be applied atomically.

At that level it's automatic, but what I meant is synchronising the
application of the settings to the exposure start of a given frame. This is
very similar to flash synchronisation.

> Also, we already _can_ configure gain, exposure and many other parameters, 
> but we have no way to use sensor snapshot and flash-synchronisation 
> capabilities.

There is a way to configure them, but the interface doesn't allow specifying
when they should take effect.

FCam-type applications require this sort of functionality.

<URL:http://fcam.garage.maemo.org/>

> What we could also do is add an optional callback to the subdev (core?) 
> operations which, if activated, the host would call on each frame 
> completion.

It's not quite that simple. The exposure of the next frame has started a long
time before that. This probably requires much more thought --- in the absence
of hardware support, when the parameters actually need to be given to the
sensor depends somewhat on the sensor, I suppose.

> > > Flash and indicator power settings can be included in the list of controls
> > > above.
> 
> As I replied to Laurent, I'm not sure we need to control the power indicator 
> from V4L2, unless there are sensors that have support for that.

Um, flash controllers, that is. Yes, there are; the ADP1653 is just
one example.

> > The power management of the camera is
> > preferably optimised for speed, so that the camera-related devices need not
> > be power cycled when using it. If the flash interface is available on a
> > separate subdev, the flash can also be easily powered separately without
> > making this a special case --- the rest of the camera-related devices (ISP,
> > lens and sensor) should stay powered off.
> > 
> > > configure the sensor to react on an external trigger provided by the flash 
> > > controller is needed, and that could be a control on the flash sub-device. 
> > > What we would probably miss is a way to issue a STREAMON with a number of 
> > > frames to capture. A new ioctl is probably needed there. Maybe that would be 
> > > an opportunity to create a new stream-control ioctl that could replace 
> > > STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> > > operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> > > video_ioctl2 internally).
> > 
> > How would this be different from queueing n frames (in total; count
> > dequeueing, too) and issuing streamon? --- Except that when the last frame
> > is processed the pipeline could be stopped already before issuing STREAMOFF.
> > That does indeed have some benefits. Something else?
> 
> Well, you usually see in your host driver, that the videobuffer queue is 
> empty (no more free buffers are available), so, you stop streaming 
> immediately too.

That's right. Disabling streaming does save some power but even more is
saved when switching the devices off completely. This is important in
embedded systems that are often battery powered.

The hardware could be switched off when no streaming takes place. However,
this introduces extra power-up delays at times when they are unwanted --- for
example, when switching from viewfinder to still capture.

The alternative to this is to add a timer to the driver: power off if no
streaming has taken place for n seconds, for example. I would consider this
much inferior to just providing a simple subdev for the flash chip and not
involving the ISP at all.

Regards,

-- 
Sakari Ailus
sakari dot ailus at iki dot fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 18:49           ` Sakari Ailus
@ 2011-02-25 20:33             ` Guennadi Liakhovetski
  2011-02-27 21:00               ` Sakari Ailus
  0 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-25 20:33 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Sakari Ailus, Stanimir Varbanov

On Fri, 25 Feb 2011, Sakari Ailus wrote:

> Hi Guennadi,
> 
> Guennadi Liakhovetski wrote:
> > In principle - yes, and yes, I do realise that the couple of controls 
> > that I've proposed only cover a very minor subset of the whole flash 
> > function palette. The purposes of my RFC were:
> 
> Why would there be a different interface for controlling the flash in
> simple cases and more complex cases?

Sorry, not sure what you mean. Do you mean different APIs when the flash 
is controlled directly by the sensor and by an external controller? No, of 
course we need one API, but you either issue those ioctl()s to the sensor 
(sub)device, or to the dedicated flash (sub)device. If you mean my "minor 
subset" above, then I was trying to say, that this is a basis, that has to 
be extended, but not, that we will develop a new API for more complicated 
cases.

> As far as I see it, the way the flash is accessed should be the same in
> both cases --- if more complex functionality is required, that would be
> implemented using additional means (controls, for example).

Of course, that's what we're discussing here - what functions have to be 
implemented.

> The drivers should use sane defaults for controls like power and length.
> I assume things like the mode (strobe/continuous) are fairly standard
> functionality.

Yes, these are all things that we shall discuss...

> > 1. get things started in the snapshot / flash direction;)
> > 2. get access to dedicated snapshot / flash registers, present on many 
> > sensors and SoCs
> > 3. get at least the very basic snapshot / flash functions, common to most 
> > hardware implementations, but trying to make it future-proof for further 
> > extensions
> > 4. get a basis for a future detailed discussion
> > 
> >> Let's also not forget that, in addition to the flash LEDs themselves, devices 
> >> often feature an indicator LED (a small low-power red LED used to indicate 
> >> that video capture is ongoing).
> > 
> > Well, this one doesn't seem too special to me? Wouldn't it suffice to just 
> > toggle it from user-space on streamon / streamoff?
> 
> And what if you want to use the LED independently of the streaming state? :-)

That's even easier, isn't it? Just turn it on and off from your 
application whenever you want.

> >> This doesn't solve the flash/capture synchronization problem though. I don't 
> >> think we need a dedicated snapshot capture mode at the V4L2 level. A way to 
> >> configure the sensor to react on an external trigger provided by the flash 
> >> controller is needed, and that could be a control on the flash sub-device. 
> > 
> > Well... Sensors call this a "snapshot mode." I don't care that much how we 
> > _call_ it, but I do think that we should be able to use it.
> 
> Some sensors and webcams might have that, but newer camera solutions
> tend to contain a raw Bayer sensor and an ISP. There is no concept of
> a snapshot mode in these sensors.

Hm, I am not sure I understand why sensors with DSPs in them should have 
no notion of a snapshot mode. Do they have no strobe / trigger pins? And 
no built-in possibility to synchronize with a flash?

> > Hm, don't think only the "flash subdevice" has to know about this. First, 
> > you have to switch the sensor into that mode. Second, it might be either 
> > external trigger from the flash controller, or a programmed trigger and a 
> > flash strobe from the sensor to the flash (controller). Third, well, not 
> > quite sure, but doesn't the host have to know about the snapshot mode? 
> 
> I do not favour adding use-case-type functionality to interfaces that
> do not necessarily need it. Would the concept of a snapshot be
> parametrisable at the V4L2 level?

I am open to this. I don't have a good idea of whether camera hosts have 
to know about the snapshot mode or not. It's open for discussion.

> Otherwise we may end up adding interfaces for use-case-specific things. The
> use cases vary a lot more than the individual features that are required
> to implement them, suggesting it's relatively easy to add redundant
> functionality to the API.

Sure, completely agree - if we can sufficiently implement all the needed 
functionality for those new use-cases with the existing API, there is no 
need to add anything new.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 18:55             ` Sakari Ailus
@ 2011-02-25 20:56               ` Guennadi Liakhovetski
  2011-02-28 11:57                 ` Guennadi Liakhovetski
  2011-03-06  9:53                 ` Sakari Ailus
  0 siblings, 2 replies; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-25 20:56 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

On Fri, 25 Feb 2011, Sakari Ailus wrote:

> On Fri, Feb 25, 2011 at 06:08:07PM +0100, Guennadi Liakhovetski wrote:
> > Hi Sakari
> 
> Hi Guennadi,
> 
> > On Fri, 25 Feb 2011, Sakari Ailus wrote:
> > > I agree with that. Flash synchronisation is just one of the many parameters
> > > that would benefit from frame-level synchronisation. Exposure time, gain,
> > > etc. are other examples. The sensors provide varying levels of hardware
> > > support for all of these.
> > 
> > Well, that's true, but... From what I've seen so far, many sensors 
> > synchronise such sensitive configuration changes with their frame readout 
> > automatically, i.e., you configure some new parameter in a sensor 
> > register, but it will only become valid with the next frame. On other 
> > sensors you can issue a "hold" command, perform any needed changes, then 
> > issue a "commit", and all your changes will be applied atomically.
> 
> At that level it's automatic, but what I meant is synchronising the
> application of the settings to the exposure start of a given frame. This is
> very similar to flash synchronisation.

Right... But I don't think we can do more than what the sensor supports 
here, can we? Only stop the sensor, apply the parameters, start the 
sensor...

> > Also, we already _can_ configure gain, exposure and many other parameters, 
> > but we have no way to use sensor snapshot and flash-synchronisation 
> > capabilities.
> 
> There is a way to configure them, but the interface doesn't allow specifying
> when they should take effect.

??? There are V4L ioctl()s to control the flash?...

> FCam-type applications require this sort of functionality.
> 
> <URL:http://fcam.garage.maemo.org/>
> 
> What we could also do is add an optional callback to the subdev (core?) 
> operations which, if activated, the host would call on each frame 
> completion.
> 
> It's not quite that simple. The exposure of the next frame has started a long
> time before that. This probably requires much more thought --- in the absence
> of hardware support, when the parameters actually need to be given to the
> sensor depends somewhat on the sensor, I suppose.

Yes, that's right. I seem to remember there was a case for which such a 
callback would have been useful... I don't remember what it was, though.

> > > Flash and indicator power settings can be included in the list of controls
> > > above.
> > 
> As I replied to Laurent, I'm not sure we need to control the power indicator 
> from V4L2, unless there are sensors that have support for that.
> 
> Um, flash controllers, that is. Yes, there are; the ADP1653 is just
> one example.

No, not flash controllers, just an indicator that a capture is running 
(normally a small red LED).

> > > The power management of the camera is
> > > preferably optimised for speed, so that the camera-related devices need not
> > > be power cycled when using it. If the flash interface is available on a
> > > separate subdev, the flash can also be easily powered separately without
> > > making this a special case --- the rest of the camera-related devices (ISP,
> > > lens and sensor) should stay powered off.
> > > 
> > > > configure the sensor to react on an external trigger provided by the flash 
> > > > controller is needed, and that could be a control on the flash sub-device. 
> > > > What we would probably miss is a way to issue a STREAMON with a number of 
> > > > frames to capture. A new ioctl is probably needed there. Maybe that would be 
> > > > an opportunity to create a new stream-control ioctl that could replace 
> > > > STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> > > > operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> > > > video_ioctl2 internally).
> > > 
> > > How would this be different from queueing n frames (in total; count
> > > dequeueing, too) and issuing streamon? --- Except that when the last frame
> > > is processed the pipeline could be stopped already before issuing STREAMOFF.
> > > That does indeed have some benefits. Something else?
> > 
> > Well, you usually see in your host driver, that the videobuffer queue is 
> > empty (no more free buffers are available), so, you stop streaming 
> > immediately too.
> 
> That's right. Disabling streaming does save some power but even more is
> saved when switching the devices off completely. This is important in
> embedded systems that are often battery powered.
> 
> The hardware could be switched off when no streaming takes place. However,
> this introduces extra power-up delays at times when they are unwanted --- for
> example, when switching from viewfinder to still capture.
> 
> The alternative to this is to add a timer to the driver: power off if no
> streaming has taken place for n seconds, for example. I would consider this
> much inferior to just providing a simple subdev for the flash chip and not
> involving the ISP at all.

There's an .s_power() method already in subdev core-ops.
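
So powering the flash subdev alone is already expressible. A sketch from 
the host side, with error handling omitted:

	/* power up only the flash subdev, leaving ISP/sensor/lens off */
	v4l2_subdev_call(flash_sd, core, s_power, 1);
	/* ... torch use ... */
	v4l2_subdev_call(flash_sd, core, s_power, 0);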

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 17:08           ` Guennadi Liakhovetski
  2011-02-25 18:55             ` Sakari Ailus
@ 2011-02-26 12:31             ` Hans Verkuil
  2011-02-26 13:03               ` Guennadi Liakhovetski
  1 sibling, 1 reply; 50+ messages in thread
From: Hans Verkuil @ 2011-02-26 12:31 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Sakari Ailus, Laurent Pinchart, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:

<snip>

> > > configure the sensor to react on an external trigger provided by the flash 
> > > controller is needed, and that could be a control on the flash sub-device. 
> > > What we would probably miss is a way to issue a STREAMON with a number of 
> > > frames to capture. A new ioctl is probably needed there. Maybe that would be 
> > > an opportunity to create a new stream-control ioctl that could replace 
> > > STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> > > operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> > > video_ioctl2 internally).
> > 
> > How would this be different from queueing n frames (in total; count
> > dequeueing, too) and issuing streamon? --- Except that when the last frame
> > is processed the pipeline could be stopped already before issuing STREAMOFF.
> > That does indeed have some benefits. Something else?
> 
> Well, you usually see in your host driver, that the videobuffer queue is 
> empty (no more free buffers are available), so, you stop streaming 
> immediately too.

This probably assumes that the host driver knows that this is a special queue?
Because in general drivers will simply keep capturing in the last buffer and not
release it to userspace until a new buffer is queued.

That said, it wouldn't be hard to add some flag somewhere that puts a queue in
a 'stop streaming on last buffer capture' mode.
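
For example (purely illustrative --- neither the flag nor the use of the 
reserved field exists today):

	/* hypothetical flag requesting 'stop streaming on last buffer' */
	#define V4L2_REQBUFS_STOP_ON_LAST	0x0001

	struct v4l2_requestbuffers req;

	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	req.reserved[0] = V4L2_REQBUFS_STOP_ON_LAST;	/* assumed */
	ioctl(fd, VIDIOC_REQBUFS, &req);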

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-26 12:31             ` Hans Verkuil
@ 2011-02-26 13:03               ` Guennadi Liakhovetski
  2011-02-26 13:39                 ` Sylwester Nawrocki
  2011-02-28 10:24                 ` Laurent Pinchart
  0 siblings, 2 replies; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-26 13:03 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Laurent Pinchart, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Sat, 26 Feb 2011, Hans Verkuil wrote:

> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
> 
> <snip>
> 
> > > > configure the sensor to react on an external trigger provided by the flash 
> > > > controller is needed, and that could be a control on the flash sub-device. 
> > > > What we would probably miss is a way to issue a STREAMON with a number of 
> > > > frames to capture. A new ioctl is probably needed there. Maybe that would be 
> > > > an opportunity to create a new stream-control ioctl that could replace 
> > > > STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream 
> > > > operation, and easily map STREAMON and STREAMOFF to the new ioctl in 
> > > > video_ioctl2 internally).
> > > 
> > > How would this be different from queueing n frames (in total; count
> > > dequeueing, too) and issuing streamon? --- Except that when the last frame
> > > is processed the pipeline could be stopped already before issuing STREAMOFF.
> > > That does indeed have some benefits. Something else?
> > 
> > Well, you usually see in your host driver, that the videobuffer queue is 
> > empty (no more free buffers are available), so, you stop streaming 
> > immediately too.
> 
> This probably assumes that the host driver knows that this is a special queue?
> Because in general drivers will simply keep capturing in the last buffer and not
> release it to userspace until a new buffer is queued.

Yes, I know about this spec requirement, but I also know that not all 
drivers do that, and not everyone is happy about that requirement :)

> That said, it wouldn't be hard to add some flag somewhere that puts a queue in
> a 'stop streaming on last buffer capture' mode.

No, it wouldn't... But TBH this doesn't seem like the most elegant and 
complete solution. Maybe we have to think a bit more about it - what 
consequences switching into the snapshot mode has for the host driver, 
apart from stopping after N frames. So, this is one of the possibilities, 
not sure if it's the best one.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-26 13:03               ` Guennadi Liakhovetski
@ 2011-02-26 13:39                 ` Sylwester Nawrocki
  2011-02-26 13:56                   ` Hans Verkuil
  2011-02-28 10:24                 ` Laurent Pinchart
  1 sibling, 1 reply; 50+ messages in thread
From: Sylwester Nawrocki @ 2011-02-26 13:39 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Hans Verkuil, Sakari Ailus, Laurent Pinchart, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> On Sat, 26 Feb 2011, Hans Verkuil wrote:
> 
>> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
>>
>> <snip>
>>
>>>>> configure the sensor to react on an external trigger provided by the flash
>>>>> controller is needed, and that could be a control on the flash sub-device.
>>>>> What we would probably miss is a way to issue a STREAMON with a number of
>>>>> frames to capture. A new ioctl is probably needed there. Maybe that would be
>>>>> an opportunity to create a new stream-control ioctl that could replace
>>>>> STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
>>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl in
>>>>> video_ioctl2 internally).
>>>>
>>>> How would this be different from queueing n frames (in total; count
>>>> dequeueing, too) and issuing streamon? --- Except that when the last frame
>>>> is processed the pipeline could be stopped already before issuing STREAMOFF.
>>>> That does indeed have some benefits. Something else?
>>>
>>> Well, you usually see in your host driver, that the videobuffer queue is
>>> empty (no more free buffers are available), so, you stop streaming
>>> immediately too.
>>
>> This probably assumes that the host driver knows that this is a special queue?
>> Because in general drivers will simply keep capturing in the last buffer and not
>> release it to userspace until a new buffer is queued.
> 
> Yes, I know about this spec requirement, but I also know, that not all
> drivers do that and not everyone is happy about that requirement:)

Right, similarly a v4l2 output device does not release the last buffer
to userland and keeps sending its content until a new buffer is queued to the driver.
But in the case of a capture device the requirement is a pain, since it only drains
the power source when, from the user's point of view, video capture is stopped.
Also, it limits the minimum number of buffers that could be used in a preview pipeline.

In still capture mode (single shot) we might want to use only one buffer, so
adhering to the requirement would not allow this, would it?

> 
>> That said, it wouldn't be hard to add some flag somewhere that puts a queue in
>> a 'stop streaming on last buffer capture' mode.
> 
> No, it wouldn't... But TBH this doesn't seem like the most elegant and
> complete solution. Maybe we have to think a bit more about it - what
> consequences switching into the snapshot mode has for the host driver,
> apart from stopping after N frames. So, this is one of the possibilities,
> not sure if it's the best one.
> 
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-26 13:39                 ` Sylwester Nawrocki
@ 2011-02-26 13:56                   ` Hans Verkuil
  2011-02-26 15:42                     ` Sylwester Nawrocki
  2011-02-28 10:28                     ` Laurent Pinchart
  0 siblings, 2 replies; 50+ messages in thread
From: Hans Verkuil @ 2011-02-26 13:56 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Guennadi Liakhovetski, Sakari Ailus, Laurent Pinchart,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > 
> >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
> >>
> >> <snip>
> >>
> >>>>> configure the sensor to react on an external trigger provided by the flash
> >>>>> controller is needed, and that could be a control on the flash sub-device.
> >>>>> What we would probably miss is a way to issue a STREAMON with a number of
> >>>>> frames to capture. A new ioctl is probably needed there. Maybe that would be
> >>>>> an opportunity to create a new stream-control ioctl that could replace
> >>>>> STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
> >>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl in
> >>>>> video_ioctl2 internally).
> >>>>
> >>>> How would this be different from queueing n frames (in total; count
> >>>> dequeueing, too) and issuing streamon? --- Except that when the last frame
> >>>> is processed the pipeline could be stopped already before issuing STREAMOFF.
> >>>> That does indeed have some benefits. Something else?
> >>>
> >>> Well, you usually see in your host driver, that the videobuffer queue is
> >>> empty (no more free buffers are available), so, you stop streaming
> >>> immediately too.
> >>
> >> This probably assumes that the host driver knows that this is a special queue?
> >> Because in general drivers will simply keep capturing in the last buffer and not
> >> release it to userspace until a new buffer is queued.
> > 
> > Yes, I know about this spec requirement, but I also know, that not all
> > drivers do that and not everyone is happy about that requirement:)
> 
> Right, similarly a v4l2 output device is not releasing the last buffer
> to userland and keeps sending its content until a new buffer is queued to the driver.
> But in case of capture device the requirement is a pain, since it only causes
> draining the power source, when from a user view the video capture is stopped.
> Also it limits a minimum number of buffers that could be used in preview pipeline.

No, we can't change this. We can of course add some setting that will explicitly
request different behavior.

The reason this is done this way comes from the traditional TV/webcam viewing apps.
If for some reason the app can't keep up with the capture rate, then frames should
just be dropped silently. All apps assume this behavior. In a normal user environment
this scenario is perfectly normal (e.g. you use a webcam app, then do a CPU
intensive make run).

I agree that you might want different behavior in an embedded environment, but
that should be requested explicitly.

> In still capture mode (single shot) we might want to use only one buffer so adhering
> to the requirement would not allow this, would it?

That's one of the problems with still capture mode, yes.

I have not yet seen a proposal for this that I really like. Most are too specific
to this use-case (snapshot) and I'd like to see something more general.

Regards,

	Hans

> 
> > 
> >> That said, it wouldn't be hard to add some flag somewhere that puts a queue in
> >> a 'stop streaming on last buffer capture' mode.
> > 
> > No, it wouldn't... But TBH this doesn't seem like the most elegant and
> > complete solution. Maybe we have to think a bit more about it - what
> > consequences switching into the snapshot mode has for the host driver,
> > apart from stopping after N frames. So, this is one of the possibilities,
> > not sure if it's the best one.
> > 
> > Thanks
> > Guennadi
> > ---
> > Guennadi Liakhovetski, Ph.D.
> > Freelance Open-Source Software Developer
> > http://www.open-technology.de/
> 
> 

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-26 13:56                   ` Hans Verkuil
@ 2011-02-26 15:42                     ` Sylwester Nawrocki
  2011-02-28 10:28                     ` Laurent Pinchart
  1 sibling, 0 replies; 50+ messages in thread
From: Sylwester Nawrocki @ 2011-02-26 15:42 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Guennadi Liakhovetski, Sakari Ailus, Laurent Pinchart,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

Hi Hans,

On 02/26/2011 02:56 PM, Hans Verkuil wrote:
> On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
>> On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
>>> On Sat, 26 Feb 2011, Hans Verkuil wrote:
>>>
>>>> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
>>>>
>>>> <snip>
>>>>
>>>>>>> configure the sensor to react on an external trigger provided by the flash
>>>>>>> controller is needed, and that could be a control on the flash sub-device.
>>>>>>> What we would probably miss is a way to issue a STREAMON with a number of
>>>>>>> frames to capture. A new ioctl is probably needed there. Maybe that would be
>>>>>>> an opportunity to create a new stream-control ioctl that could replace
>>>>>>> STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
>>>>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl in
>>>>>>> video_ioctl2 internally).
>>>>>>
>>>>>> How would this be different from queueing n frames (in total; count
>>>>>> dequeueing, too) and issuing streamon? --- Except that when the last frame
>>>>>> is processed the pipeline could be stopped already before issuing STREAMOFF.
>>>>>> That does indeed have some benefits. Something else?
>>>>>
>>>>> Well, you usually see in your host driver, that the videobuffer queue is
>>>>> empty (no more free buffers are available), so, you stop streaming
>>>>> immediately too.
>>>>
>>>> This probably assumes that the host driver knows that this is a special queue?
>>>> Because in general drivers will simply keep capturing in the last buffer and not
>>>> release it to userspace until a new buffer is queued.
>>>
>>> Yes, I know about this spec requirement, but I also know, that not all
>>> drivers do that and not everyone is happy about that requirement:)
>>
>> Right, similarly a v4l2 output device is not releasing the last buffer
>> to userland and keeps sending its content until a new buffer is queued to the driver.
>> But in case of capture device the requirement is a pain, since it only causes
>> draining the power source, when from a user view the video capture is stopped.
>> Also it limits a minimum number of buffers that could be used in preview pipeline.
> 
> No, we can't change this. We can of course add some setting that will explicitly
> request different behavior.
> 
> The reason this is done this way comes from the traditional TV/webcam viewing apps.
> If for some reason the app can't keep up with the capture rate, then frames should
> just be dropped silently. All apps assume this behavior. In a normal user environment
> this scenario is perfectly normal (e.g. you use a webcam app, then do a CPU
> intensive make run).

All right, I have nothing against extra flags, e.g. in REQBUFS to define a specific
behavior. 

Perhaps I didn't express myself clearly. I was thinking only about stopping
the capture/DMA engine when there are no more empty buffers, and releasing 
the last buffer rather than keeping it in the driver. Then, when a subsequent
buffer is queued by the app, the driver would restart the capture engine.
Streaming as seen from user space is not stopped. This just corresponds to a
frame-dropping mode; the discarding just happens earlier in the H/W pipeline. It's
no different from the app's POV than endlessly overwriting memory with new frames.
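
A rough sketch of that driver behaviour (all names are hypothetical, and 
real code would need locking plus the hardware-specific restart sequence):

	static void mycam_buf_queue(struct vb2_buffer *vb)
	{
		struct mycam_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
		struct mycam_buffer *buf = to_mycam_buffer(vb);

		list_add_tail(&buf->list, &dev->buf_list);
		if (dev->dma_paused) {
			/* the queue ran dry earlier: resume capture */
			dev->dma_paused = false;
			mycam_hw_start_dma(dev, buf);
		}
	}

The matching completion handler would set dma_paused and stop the engine 
when it hands out the last queued buffer, instead of overwriting it.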

BTW, in STREAMON ioctl documentation we have following requirement:

"... Accordingly the output hardware is disabled, no video signal is produced until
 VIDIOC_STREAMON has been called. *The ioctl will succeed only when at least one
 output buffer is in the incoming queue*."

It has been discussed that the memory-to-memory interface should be an exception
to the at-least-one-buffer requirement on an output queue for STREAMON to succeed.
However, I see no good way to implement that in videobuf2; currently there is a
relevant check in vb2_streamon. There were opinions that the above restriction
causes more harm than good. I'm not sure if we should keep it.
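
The check in question amounts to something like this (paraphrased from 
vb2_streamon() in videobuf2-core.c, so treat the exact form as 
approximate):

	/* refuse to start an output stream with nothing queued */
	if (V4L2_TYPE_IS_OUTPUT(q->type) && list_empty(&q->queued_list))
		return -EINVAL;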

I'm working on mem-to-mem interface DocBook documentation and it would be nice
to have this clarified.


Regards,
Sylwester

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 20:33             ` Guennadi Liakhovetski
@ 2011-02-27 21:00               ` Sakari Ailus
  2011-02-28 11:20                 ` Guennadi Liakhovetski
  0 siblings, 1 reply; 50+ messages in thread
From: Sakari Ailus @ 2011-02-27 21:00 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Sakari Ailus, Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

Hi,

Guennadi Liakhovetski wrote:
> On Fri, 25 Feb 2011, Sakari Ailus wrote:
>
>> Hi Guennadi,
>>
>> Guennadi Liakhovetski wrote:
>>> In principle - yes, and yes, I do realise that the couple of controls
>>> that I've proposed only cover a very minor subset of the whole flash
>>> function palette. The purposes of my RFC were:
>>
>> Why would there be a different interface for controlling the flash in
>> simple cases and more complex cases?
>
> Sorry, not sure what you mean. Do you mean different APIs when the flash
> is controlled directly by the sensor and by an external controller? No, of
> course we need one API, but you either issue those ioctl()s to the sensor
> (sub)device, or to the dedicated flash (sub)device. If you mean my "minor
> subset" above, then I was trying to say, that this is a basis, that has to
> be extended, but not, that we will develop a new API for more complicated
> cases.

I think I misunderstood you originally, sorry. I should have properly 
read the RFC. :-)

Your proposal of the flash mode is good, but what about a software strobe 
(a little more on that below)?

Also, what about making this a V4L2 control instead? The ADP1653 driver 
that Laurent referred to implements flash control using V4L2 controls only.

A version of the driver is here:

<URL:http://gitorious.org/omap3camera/mainline/commit/a41027c857dfcbc268cf8d1c7c7d0ab8b6abac92>

It's not yet in mainline --- one reason for this is the lack of time to 
discuss a proper API for the flash. :-)
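
In user space, a software strobe through a control on the flash subdev 
could then look like this --- the control ID is an assumption here, not an 
existing definition:

	struct v4l2_control ctrl;

	memset(&ctrl, 0, sizeof(ctrl));
	ctrl.id = V4L2_CID_FLASH_STROBE;	/* assumed control ID */
	ctrl.value = 1;				/* fire the strobe */
	ioctl(flash_fd, VIDIOC_S_CTRL, &ctrl);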

...

>>>> This doesn't solve the flash/capture synchronization problem though. I don't
>>>> think we need a dedicated snapshot capture mode at the V4L2 level. A way to
>>>> configure the sensor to react on an external trigger provided by the flash
>>>> controller is needed, and that could be a control on the flash sub-device.
>>>
>>> Well... Sensors call this a "snapshot mode." I don't care that much how we
>>> _call_ it, but I do think that we should be able to use it.
>>
>> Some sensors and webcams might have that, but newer camera solutions
>> tend to contain a raw Bayer sensor and an ISP. There is no concept of
>> a snapshot mode in these sensors.
>
> Hm, I am not sure I understand why sensors with DSPs in them should have
> no notion of a snapshot mode. Do they have no strobe / trigger pins? And
> no built-in possibility to synchronize with a flash?

I was referring to ISPs such as the OMAP 3 ISP. Some hardware has a 
flash strobe pin while some doesn't (such as the N900).

Still, even if the strobe pin is missing, it should be possible to allow 
strobing the flash using a software strobe (usually an I2C message).

I agree using a hardware strobe is much much better if it's available.

>>> Hm, don't think only the "flash subdevice" has to know about this. First,
>>> you have to switch the sensor into that mode. Second, it might be either
>>> external trigger from the flash controller, or a programmed trigger and a
>>> flash strobe from the sensor to the flash (controller). Third, well, not
>>> quite sure, but doesn't the host have to know about the snapshot mode?
>>
>> I do not favour adding use-case-type functionality to interfaces that
>> do not necessarily need it. Would the concept of a snapshot be
>> parametrisable at the V4L2 level?
>
> I am open to this. I don't have a good idea of whether camera hosts have
> to know about the snapshot mode or not. It's open for discussion.

What functionality would the snapshot mode provide? Flash 
synchronisation? Something else?

I have to admit I don't know of any hardware which would recognise a 
concept of "snapshot". Do you have a smart sensor which does this, for 
example? The only hardware support for flash use I know of is the 
flash strobe signal.

Flash synchronisation is indeed an issue, as is how to tell that a given 
frame has been exposed with flash. The use of flash is just one of the 
parameters that it would be nice to connect to frames, though.

Regards,

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-26 13:03               ` Guennadi Liakhovetski
  2011-02-26 13:39                 ` Sylwester Nawrocki
@ 2011-02-28 10:24                 ` Laurent Pinchart
  1 sibling, 0 replies; 50+ messages in thread
From: Laurent Pinchart @ 2011-02-28 10:24 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Hans Verkuil, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Saturday 26 February 2011 14:03:53 Guennadi Liakhovetski wrote:
> On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
> > 
> > <snip>
> > 
> > > > > configure the sensor to react on an external trigger provided by
> > > > > the flash controller is needed, and that could be a control on the
> > > > > flash sub-device. What we would probably miss is a way to issue a
> > > > > STREAMON with a number of frames to capture. A new ioctl is
> > > > > probably needed there. Maybe that would be an opportunity to
> > > > > create a new stream-control ioctl that could replace STREAMON and
> > > > > STREAMOFF in the long term (we could extend the subdev s_stream
> > > > > operation, and easily map STREAMON and STREAMOFF to the new ioctl
> > > > > in video_ioctl2 internally).
> > > > 
> > > > How would this be different from queueing n frames (in total; count
> > > > dequeueing, too) and issuing streamon? --- Except that when the last
> > > > frame is processed the pipeline could be stopped already before
> > > > issuing STREAMOFF. That does indeed have some benefits. Something
> > > > else?
> > > 
> > > Well, you usually see in your host driver, that the videobuffer queue
> > > is empty (no more free buffers are available), so, you stop streaming
> > > immediately too.
> > 
> > This probably assumes that the host driver knows that this is a special
> > queue? Because in general drivers will simply keep capturing in the last
> > buffer and not release it to userspace until a new buffer is queued.
> 
> Yes, I know about this spec requirement, but I also know, that not all
> drivers do that and not everyone is happy about that requirement:)

Is it a requirement, or just something some drivers do? Several drivers just 
stop capturing when no buffer is available, and resume when a new buffer is 
queued.

> > That said, it wouldn't be hard to add some flag somewhere that puts a
> > queue in a 'stop streaming on last buffer capture' mode.
> 
> No, it wouldn't... But TBH this doesn't seem like the most elegant and
> complete solution. Maybe we have to think a bit more about it - what
> consequences switching into the snapshot mode has for the host driver,
> apart from stopping after N frames. So, this is one of the possibilities,
> not sure if it's the best one.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-26 13:56                   ` Hans Verkuil
  2011-02-26 15:42                     ` Sylwester Nawrocki
@ 2011-02-28 10:28                     ` Laurent Pinchart
  2011-02-28 10:40                       ` Hans Verkuil
  1 sibling, 1 reply; 50+ messages in thread
From: Laurent Pinchart @ 2011-02-28 10:28 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sylwester Nawrocki, Guennadi Liakhovetski, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

Hi Hans,

On Saturday 26 February 2011 14:56:18 Hans Verkuil wrote:
> On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> > On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
> > >> 
> > >> <snip>
> > >> 
> > >>>>> configure the sensor to react on an external trigger provided by
> > >>>>> the flash controller is needed, and that could be a control on the
> > >>>>> flash sub-device. What we would probably miss is a way to issue a
> > >>>>> STREAMON with a number of frames to capture. A new ioctl is
> > >>>>> probably needed there. Maybe that would be an opportunity to
> > >>>>> create a new stream-control ioctl that could replace STREAMON and
> > >>>>> STREAMOFF in the long term (we could extend the subdev s_stream
> > >>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl
> > >>>>> in video_ioctl2 internally).
> > >>>> 
> > >>>> How would this be different from queueing n frames (in total; count
> > >>>> dequeueing, too) and issuing streamon? --- Except that when the last
> > >>>> frame is processed the pipeline could be stopped already before
> > >>>> issuing STREAMOFF. That does indeed have some benefits. Something
> > >>>> else?
> > >>> 
> > >>> Well, you usually see in your host driver, that the videobuffer queue
> > >>> is empty (no more free buffers are available), so, you stop
> > >>> streaming immediately too.
> > >> 
> > >> This probably assumes that the host driver knows that this is a
> > >> special queue? Because in general drivers will simply keep capturing
> > >> in the last buffer and not release it to userspace until a new buffer
> > >> is queued.
> > > 
> > > Yes, I know about this spec requirement, but I also know, that not all
> > > drivers do that and not everyone is happy about that requirement:)
> > 
> > Right, similarly a v4l2 output device is not releasing the last buffer
> > to userland and keeps sending its content until a new buffer is queued to
> > the driver. But in case of capture device the requirement is a pain,
> > since it only causes draining the power source, when from a user view
> > the video capture is stopped. Also it limits a minimum number of buffers
> > that could be used in preview pipeline.
> 
> No, we can't change this. We can of course add some setting that will
> explicitly request different behavior.
> 
> The reason this is done this way comes from the traditional TV/webcam
> viewing apps. If for some reason the app can't keep up with the capture
> rate, then frames should just be dropped silently. All apps assume this
> behavior. In a normal user environment this scenario is perfectly normal
> (e.g. you use a webcam app, then do a CPU intensive make run).

Why couldn't drivers drop frames silently without a capture buffer? If the 
hardware can be paused, the driver could just do that when the last buffer is 
given back to userspace, and resume the hardware when the next buffer is 
queued.

> I agree that you might want different behavior in an embedded environment,
> but that should be requested explicitly.
> 
> > In still capture mode (single shot) we might want to use only one buffer
> > so adhering to the requirement would not allow this, would it?
> 
> That's one of the problems with still capture mode, yes.
> 
> I have not yet seen a proposal for this that I really like. Most are too
> specific to this use-case (snapshot) and I'd like to see something more
> general.

I don't think snapshot capture is *that* special. I don't expect most embedded 
SoCs to implement snapshot capture in hardware. What usually happens is that 
the hardware provides some support (like two independent video streams for 
instance, or the ability to capture a given number of frames) and the 
scheduling is performed in userspace. Good quality snapshot capture requires 
complex algorithms and involves several hardware pieces (ISP, flash 
controller, lens controller, ...), so it can't be implemented in the kernel.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 10:28                     ` Laurent Pinchart
@ 2011-02-28 10:40                       ` Hans Verkuil
  2011-02-28 10:47                         ` Laurent Pinchart
  2011-02-28 11:02                         ` Guennadi Liakhovetski
  0 siblings, 2 replies; 50+ messages in thread
From: Hans Verkuil @ 2011-02-28 10:40 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Sylwester Nawrocki, Guennadi Liakhovetski,
	Sakari Ailus, Kim HeungJun, Linux Media Mailing List,
	Stanimir Varbanov

On Monday, February 28, 2011 11:28:58 Laurent Pinchart wrote:
> Hi Hans,
> 
> On Saturday 26 February 2011 14:56:18 Hans Verkuil wrote:
> > On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> > > On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > > > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > > >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
> > > >> 
> > > >> <snip>
> > > >> 
> > > >>>>> configure the sensor to react on an external trigger provided by
> > > >>>>> the flash controller is needed, and that could be a control on the
> > > >>>>> flash sub-device. What we would probably miss is a way to issue a
> > > >>>>> STREAMON with a number of frames to capture. A new ioctl is
> > > >>>>> probably needed there. Maybe that would be an opportunity to
> > > >>>>> create a new stream-control ioctl that could replace STREAMON and
> > > >>>>> STREAMOFF in the long term (we could extend the subdev s_stream
> > > >>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl
> > > >>>>> in video_ioctl2 internally).
> > > >>>> 
> > > >>>> How would this be different from queueing n frames (in total; count
> > > >>>> dequeueing, too) and issuing streamon? --- Except that when the last
> > > >>>> frame is processed the pipeline could be stopped already before
> > > >>>> issuing STREAMOFF. That does indeed have some benefits. Something
> > > >>>> else?
> > > >>> 
> > > >>> Well, you usually see in your host driver, that the videobuffer queue
> > > >>> is empty (no more free buffers are available), so, you stop
> > > >>> streaming immediately too.
> > > >> 
> > > >> This probably assumes that the host driver knows that this is a
> > > >> special queue? Because in general drivers will simply keep capturing
> > > >> in the last buffer and not release it to userspace until a new buffer
> > > >> is queued.
> > > > 
> > > > Yes, I know about this spec requirement, but I also know, that not all
> > > > drivers do that and not everyone is happy about that requirement:)
> > > 
> > > Right, similarly a v4l2 output device is not releasing the last buffer
> > > to userland and keeps sending its content until a new buffer is queued to
> > > the driver. But in case of capture device the requirement is a pain,
> > > since it only causes draining the power source, when from a user view
> > > the video capture is stopped. Also it limits a minimum number of buffers
> > > that could be used in preview pipeline.
> > 
> > No, we can't change this. We can of course add some setting that will
> > explicitly request different behavior.
> > 
> > The reason this is done this way comes from the traditional TV/webcam
> > viewing apps. If for some reason the app can't keep up with the capture
> > rate, then frames should just be dropped silently. All apps assume this
> > behavior. In a normal user environment this scenario is perfectly normal
> > (e.g. you use a webcam app, then do a CPU intensive make run).
> 
> Why couldn't drivers drop frames silently without a capture buffer ? If the 
> hardware can be paused, the driver could just do that when the last buffer is
> given back to userspace, and resume the hardware when the next buffer is 
> queued.

It was my understanding that the streaming would stop if no capture buffers 
are available, requiring a VIDIOC_STREAMON to get it started again. Of course, 
there is nothing wrong with stopping the hardware and restarting it when 
a new buffer becomes available, if that can be done efficiently enough. Just as 
long as userspace doesn't notice.

Note that there are some problems with this anyway: often restarting DMA 
requires resyncing to the video stream, which may lead to lost frames. Also, 
the framecounter in struct v4l2_buffer will probably have failed to count the 
lost frames.

In my opinion trying this might cause more problems than it solves.

> > I agree that you might want different behavior in an embedded environment,
> > but that should be requested explicitly.
> > 
> > > In still capture mode (single shot) we might want to use only one buffer
> > > so adhering to the requirement would not allow this, would it?
> > 
> > That's one of the problems with still capture mode, yes.
> > 
> > I have not yet seen a proposal for this that I really like. Most are too
> > specific to this use-case (snapshot) and I'd like to see something more
> > general.
> 
> I don't think snapshot capture is *that* special. I don't expect most embedded
> SoCs to implement snapshot capture in hardware. What usually happens is that 
> the hardware provides some support (like two independent video streams for 
> instance, or the ability to capture a given number of frames) and the 
> scheduling is performed in userspace. Good quality snapshot capture requires 
> complex algorithms and involves several hardware pieces (ISP, flash 
> controller, lens controller, ...), so it can't be implemented in the kernel.

I agree.

Regards,

	Hans

> 
> -- 
> Regards,
> 
> Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 10:40                       ` Hans Verkuil
@ 2011-02-28 10:47                         ` Laurent Pinchart
  2011-02-28 11:02                         ` Guennadi Liakhovetski
  1 sibling, 0 replies; 50+ messages in thread
From: Laurent Pinchart @ 2011-02-28 10:47 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Hans Verkuil, Sylwester Nawrocki, Guennadi Liakhovetski,
	Sakari Ailus, Kim HeungJun, Linux Media Mailing List,
	Stanimir Varbanov

On Monday 28 February 2011 11:40:31 Hans Verkuil wrote:
> On Monday, February 28, 2011 11:28:58 Laurent Pinchart wrote:
> > On Saturday 26 February 2011 14:56:18 Hans Verkuil wrote:
> > > On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> > > > On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > > > > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > > > >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:

[snip]

> > > > >>> Well, you usually see in your host driver, that the videobuffer
> > > > >>> queue is empty (no more free buffers are available), so, you stop
> > > > >>> streaming immediately too.
> > > > >> 
> > > > >> This probably assumes that the host driver knows that this is a
> > > > >> special queue? Because in general drivers will simply keep
> > > > >> capturing in the last buffer and not release it to userspace
> > > > >> until a new buffer is queued.
> > > > > 
> > > > > Yes, I know about this spec requirement, but I also know, that not
> > > > > all drivers do that and not everyone is happy about that
> > > > > requirement:)
> > > > 
> > > > Right, similarly a v4l2 output device is not releasing the last
> > > > buffer to userland and keeps sending its content until a new buffer
> > > > is queued to the driver. But in case of capture device the requirement
> > > > is a pain, since it only causes draining the power source, when from a
> > > > user view the video capture is stopped. Also it limits a minimum
> > > > number of buffers that could be used in preview pipeline.
> > > 
> > > No, we can't change this. We can of course add some setting that will
> > > explicitly request different behavior.
> > > 
> > > The reason this is done this way comes from the traditional TV/webcam
> > > viewing apps. If for some reason the app can't keep up with the capture
> > > rate, then frames should just be dropped silently. All apps assume this
> > > behavior. In a normal user environment this scenario is perfectly
> > > normal (e.g. you use a webcam app, then do a CPU intensive make run).
> > 
> > Why couldn't drivers drop frames silently without a capture buffer ? If
> > the hardware can be paused, the driver could just do that when the last
> > buffer is given back to userspace, and resume the hardware when the next
> > buffer is queued.
> 
> It was my understanding that the streaming would stop if no capture buffers
> are available, requiring a VIDIOC_STREAMON to get it started again. Of
> course, there is nothing wrong with stopping the hardware and restarting
> it again when a new buffer becomes available if that can be done
> efficiently enough. Just as long as userspace doesn't notice.
> 
> Note that there are some problems with this anyway: often restarting DMA
> requires resyncing to the video stream, which may lead to lost frames.

You'll lose frames when you get a buffer underrun anyway :-)

> Also, the framecounter in struct v4l2_buffer will probably have failed to
> count the lost frames.
> 
> In my opinion trying this might cause more problems than it solves.

Whether drivers hold on to the last buffer and keep filling it again and 
again until a new buffer is available, or stop the stream and resume it 
transparently when a new buffer is queued, should probably be left as a choice 
to the drivers. I'm in favour of the second option, but I understand that it 
might be difficult to implement for some hardware. The spec should at least 
not preclude it when efficient hardware support is available.
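
Something like this completely untested sketch of the second option (all 
names are invented, this is not any existing driver's or framework's API):

	/* Hypothetical host driver state */
	struct my_cam {
		spinlock_t lock;
		struct list_head queued;	/* empty buffers owned by the driver */
		bool dma_running;
	};

	/* frame-completion interrupt: hand the filled buffer to userspace */
	static void my_cam_frame_done(struct my_cam *cam)
	{
		/* ... complete the buffer, wake up any readers ... */
		if (list_empty(&cam->queued)) {
			/* last buffer gone: pause instead of overwriting it */
			my_cam_stop_dma(cam);		/* hypothetical helper */
			cam->dma_running = false;
		}
	}

	/* VIDIOC_QBUF: resume transparently if capture was paused */
	static void my_cam_buf_queue(struct my_cam *cam, struct my_buffer *buf)
	{
		unsigned long flags;

		spin_lock_irqsave(&cam->lock, flags);
		list_add_tail(&buf->list, &cam->queued);
		if (!cam->dma_running) {
			/* may have to resync to the video stream first */
			my_cam_start_dma(cam);		/* hypothetical helper */
			cam->dma_running = true;
		}
		spin_unlock_irqrestore(&cam->lock, flags);
	}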

> > > I agree that you might want different behavior in an embedded
> > > environment, but that should be requested explicitly.
> > > 
> > > > In still capture mode (single shot) we might want to use only one
> > > > buffer so adhering to the requirement would not allow this, would
> > > > it?
> > > 
> > > That's one of the problems with still capture mode, yes.
> > > 
> > > I have not yet seen a proposal for this that I really like. Most are
> > > too specific to this use-case (snapshot) and I'd like to see something
> > > more general.
> > 
> > I don't think snapshot capture is *that* special. I don't expect most
> > embedded SoCs to implement snapshot capture in hardware. What usually
> > happens is that the hardware provides some support (like two independent
> > video streams for instance, or the ability to capture a given number of
> > frames) and the scheduling is performed in userspace. Good quality
> > snapshot capture requires complex algorithms and involves several
> > hardware pieces (ISP, flash controller, lens controller, ...), so it
> > can't be implemented in the kernel.
> 
> I agree.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 10:40                       ` Hans Verkuil
  2011-02-28 10:47                         ` Laurent Pinchart
@ 2011-02-28 11:02                         ` Guennadi Liakhovetski
  2011-02-28 11:07                           ` Laurent Pinchart
  1 sibling, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 11:02 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Hans Verkuil, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, 28 Feb 2011, Hans Verkuil wrote:

> On Monday, February 28, 2011 11:28:58 Laurent Pinchart wrote:
> > Hi Hans,
> > 
> > On Saturday 26 February 2011 14:56:18 Hans Verkuil wrote:
> > > On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> > > > On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > > > > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > > > >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski wrote:
> > > > >> 
> > > > >> <snip>
> > > > >> 
> > > > >>>>> configure the sensor to react on an external trigger provided by
> > > > >>>>> the flash controller is needed, and that could be a control on the
> > > > >>>>> flash sub-device. What we would probably miss is a way to issue a
> > > > >>>>> STREAMON with a number of frames to capture. A new ioctl is
> > > > >>>>> probably needed there. Maybe that would be an opportunity to
> > > > >>>>> create a new stream-control ioctl that could replace STREAMON and
> > > > >>>>> STREAMOFF in the long term (we could extend the subdev s_stream
> > > > >>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl
> > > > >>>>> in video_ioctl2 internally).
> > > > >>>> 
> > > > >>>> How would this be different from queueing n frames (in total; count
> > > > >>>> dequeueing, too) and issuing streamon? --- Except that when the last
> > > > >>>> frame is processed the pipeline could be stopped already before
> > > > >>>> issuing STREAMOFF. That does indeed have some benefits. Something
> > > > >>>> else?
> > > > >>> 
> > > > >>> Well, you usually see in your host driver, that the videobuffer queue
> > > > >>> is empty (no more free buffers are available), so, you stop
> > > > >>> streaming immediately too.
> > > > >> 
> > > > >> This probably assumes that the host driver knows that this is a
> > > > >> special queue? Because in general drivers will simply keep capturing
> > > > >> in the last buffer and not release it to userspace until a new buffer
> > > > >> is queued.
> > > > > 
> > > > > Yes, I know about this spec requirement, but I also know, that not all
> > > > > drivers do that and not everyone is happy about that requirement:)
> > > > 
> > > > Right, similarly a v4l2 output device is not releasing the last buffer
> > > > to userland and keeps sending its content until a new buffer is queued to
> > > > the driver. But in case of capture device the requirement is a pain,
> > > > since it only causes draining the power source, when from a user view
> > > > the video capture is stopped. Also it limits a minimum number of buffers
> > > > that could be used in preview pipeline.
> > > 
> > > No, we can't change this. We can of course add some setting that will
> > > explicitly request different behavior.
> > > 
> > > The reason this is done this way comes from the traditional TV/webcam
> > > viewing apps. If for some reason the app can't keep up with the capture
> > > rate, then frames should just be dropped silently. All apps assume this
> > > behavior. In a normal user environment this scenario is perfectly normal
> > > (e.g. you use a webcam app, then do a CPU intensive make run).
> > 
> > Why couldn't drivers drop frames silently without a capture buffer ? If the 
> > hardware can be paused, the driver could just do that when the last buffer is 
> > given back to userspace, and resume the hardware when the next buffer is 
> > queued.
> 
> It was my understanding that the streaming would stop if no capture buffers 
> are available, requiring a VIDIOC_STREAMON to get it started again. Of course, 
> there is nothing wrong with stopping the hardware and restarting it again when 
> a new buffer becomes available if that can be done efficiently enough. Just as 
> long as userspace doesn't notice.
> 
> Note that there are some problems with this anyway: often restarting DMA 
> requires resyncing to the video stream, which may lead to lost frames. Also, 
> the framecounter in struct v4l2_buffer will probably have failed to count the 
> lost frames.
> 
> In my opinion trying this might cause more problems than it solves.

So, do I understand it right that there are currently drivers that 
overwrite the last buffer while waiting for a new one, and drivers that 
stop capture for that time? Neither violates the spec, but the former 
will not work with the "snapshot mode," and the latter will. Since we do 
not want to / cannot enforce either way, we do need a way to tell the 
driver to enter the "snapshot mode," even if only so that it does not 
overwrite the last buffer, right?

> > > I agree that you might want different behavior in an embedded environment,
> > > but that should be requested explicitly.
> > > 
> > > > In still capture mode (single shot) we might want to use only one buffer
> > > > so adhering to the requirement would not allow this, would it?
> > > 
> > > That's one of the problems with still capture mode, yes.
> > > 
> > > I have not yet seen a proposal for this that I really like. Most are too
> > > specific to this use-case (snapshot) and I'd like to see something more
> > > general.
> > 
> > I don't think snapshot capture is *that* special. I don't expect most embedded 
> > SoCs to implement snapshot capture in hardware. What usually happens is that 
> > the hardware provides some support (like two independent video streams for 
> > instance, or the ability to capture a given number of frames) and the 
> > scheduling is performed in userspace. Good quality snapshot capture requires 
> > complex algorithms and involves several hardware pieces (ISP, flash 
> > controller, lens controller, ...), so it can't be implemented in the kernel.
> 
> I agree.

Right, but sensors do need it. It is not enough to just tell the sensor 
that a per-frame flash is used and let the driver figure out that it has 
to switch to snapshot mode. The snapshot mode has other effects too, e.g., 
on some sensors it enables the external trigger pin, which some designs 
might want to use even without a flash. There may also be other side 
effects of such snapshot modes on other sensors that I'm not aware of.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:02                         ` Guennadi Liakhovetski
@ 2011-02-28 11:07                           ` Laurent Pinchart
  2011-02-28 11:17                             ` Hans Verkuil
  2011-02-28 11:37                             ` Guennadi Liakhovetski
  0 siblings, 2 replies; 50+ messages in thread
From: Laurent Pinchart @ 2011-02-28 11:07 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Hans Verkuil, Hans Verkuil, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

Hi Guennadi,

On Monday 28 February 2011 12:02:41 Guennadi Liakhovetski wrote:
> On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > On Monday, February 28, 2011 11:28:58 Laurent Pinchart wrote:
> > > On Saturday 26 February 2011 14:56:18 Hans Verkuil wrote:
> > > > On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> > > > > On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > > > > > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > > > > >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski
> > > > > >> wrote:
> > > > > >> 
> > > > > >> <snip>
> > > > > >> 
> > > > > >>>>> configure the sensor to react on an external trigger provided
> > > > > >>>>> by the flash controller is needed, and that could be a
> > > > > >>>>> control on the flash sub-device. What we would probably miss
> > > > > >>>>> is a way to issue a STREAMON with a number of frames to
> > > > > >>>>> capture. A new ioctl is probably needed there. Maybe that
> > > > > >>>>> would be an opportunity to create a new stream-control ioctl
> > > > > >>>>> that could replace STREAMON and STREAMOFF in the long term
> > > > > >>>>> (we could extend the subdev s_stream operation, and easily
> > > > > >>>>> map STREAMON and STREAMOFF to the new ioctl in video_ioctl2
> > > > > >>>>> internally).
> > > > > >>>> 
> > > > > >>>> How would this be different from queueing n frames (in total;
> > > > > >>>> count dequeueing, too) and issuing streamon? --- Except that
> > > > > >>>> when the last frame is processed the pipeline could be stopped
> > > > > >>>> already before issuing STREAMOFF. That does indeed have some
> > > > > >>>> benefits. Something else?
> > > > > >>> 
> > > > > >>> Well, you usually see in your host driver, that the videobuffer
> > > > > >>> queue is empty (no more free buffers are available), so, you
> > > > > >>> stop streaming immediately too.
> > > > > >> 
> > > > > >> This probably assumes that the host driver knows that this is a
> > > > > >> special queue? Because in general drivers will simply keep
> > > > > >> capturing in the last buffer and not release it to userspace
> > > > > >> until a new buffer is queued.
> > > > > > 
> > > > > > Yes, I know about this spec requirement, but I also know, that
> > > > > > not all drivers do that and not everyone is happy about that
> > > > > > requirement:)
> > > > > 
> > > > > Right, similarly a v4l2 output device is not releasing the last
> > > > > buffer to userland and keeps sending its content until a new
> > > > > buffer is queued to the driver. But in case of capture device the
> > > > > requirement is a pain, since it only causes draining the power
> > > > > source, when from a user view the video capture is stopped. Also it
> > > > > limits a minimum number of buffers that could be used in preview
> > > > > pipeline.
> > > > 
> > > > No, we can't change this. We can of course add some setting that will
> > > > explicitly request different behavior.
> > > > 
> > > > The reason this is done this way comes from the traditional TV/webcam
> > > > viewing apps. If for some reason the app can't keep up with the
> > > > capture rate, then frames should just be dropped silently. All apps
> > > > assume this behavior. In a normal user environment this scenario is
> > > > perfectly normal (e.g. you use a webcam app, then do a CPU intensive
> > > > make run).
> > > 
> > > Why couldn't drivers drop frames silently without a capture buffer ? If
> > > the hardware can be paused, the driver could just do that when the
> > > last buffer is given back to userspace, and resume the hardware when the
> > > next buffer is queued.
> > 
> > It was my understanding that the streaming would stop if no capture
> > buffers are available, requiring a VIDIOC_STREAMON to get it started
> > again. Of course, there is nothing wrong with stopping the hardware and
> > restarting it again when a new buffer becomes available if that can be
> > done efficiently enough. Just as long as userspace doesn't notice.
> > 
> > Note that there are some problems with this anyway: often restarting DMA
> > requires resyncing to the video stream, which may lead to lost frames.
> > Also, the framecounter in struct v4l2_buffer will probably have failed
> > to count the lost frames.
> > 
> > In my opinion trying this might cause more problems than it solves.
> 
> So, do I understand it right, that currently there are drivers, that
> overwrite the last buffers while waiting for a new one, and ones, that
> stop capture for that time. None of them violate the spec, but the former
> will not work with the "snapshot mode," and the latter will. Since we do
> not want / cannot enforce either way, we do need a way to tell the driver
> to enter the "snapshot mode" even if only to not overwrite the last
> buffer, right?
> 
> > > > I agree that you might want different behavior in an embedded
> > > > environment, but that should be requested explicitly.
> > > > 
> > > > > In still capture mode (single shot) we might want to use only one
> > > > > buffer so adhering to the requirement would not allow this, would
> > > > > it?
> > > > 
> > > > That's one of the problems with still capture mode, yes.
> > > > 
> > > > I have not yet seen a proposal for this that I really like. Most are
> > > > too specific to this use-case (snapshot) and I'd like to see
> > > > something more general.
> > > 
> > > I don't think snapshot capture is *that* special. I don't expect most
> > > embedded SoCs to implement snapshot capture in hardware. What usually
> > > happens is that the hardware provides some support (like two independent
> > > video streams for instance, or the ability to capture a given number of
> > > frames) and the scheduling is performed in userspace. Good quality
> > > snapshot capture requires complex algorithms and involves several
> > > hardware pieces (ISP, flash controller, lens controller, ...), so it
> > > can't be implemented in the kernel.
> > 
> > I agree.
> 
> Right, but sensors do need it. It is not enough to just tell the sensor -
> a per-frame flash is used and let the driver figure out, that it has to
> switch to snapshot mode. The snapshot mode has other effects too, e.g., on
> some sensors it enables the external trigger pin, which some designs might
> want to use also without a flash. Maybe there are also some other side
> effects of such snapshot modes on some other sensors, that I'm not aware
> of.

This makes me wonder if we need a snapshot mode at all. Why should we tie 
flash, capture trigger (and other such options that you're not aware of yet 
:-)) together under a single high-level control (in the general sense, not to 
be strictly taken as a V4L2 CID) ? Wouldn't it be better to expose those 
features individually instead ? User might want to use the flash in video 
capture mode for a stroboscopic effect for instance.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:07                           ` Laurent Pinchart
@ 2011-02-28 11:17                             ` Hans Verkuil
  2011-02-28 11:19                               ` Laurent Pinchart
                                                 ` (2 more replies)
  2011-02-28 11:37                             ` Guennadi Liakhovetski
  1 sibling, 3 replies; 50+ messages in thread
From: Hans Verkuil @ 2011-02-28 11:17 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Guennadi Liakhovetski, Hans Verkuil, Sylwester Nawrocki,
	Sakari Ailus, Kim HeungJun, Linux Media Mailing List,
	Stanimir Varbanov

On Monday, February 28, 2011 12:07:33 Laurent Pinchart wrote:
> Hi Guennadi,
> 
> On Monday 28 February 2011 12:02:41 Guennadi Liakhovetski wrote:
> > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > On Monday, February 28, 2011 11:28:58 Laurent Pinchart wrote:
> > > > On Saturday 26 February 2011 14:56:18 Hans Verkuil wrote:
> > > > > On Saturday, February 26, 2011 14:39:54 Sylwester Nawrocki wrote:
> > > > > > On 02/26/2011 02:03 PM, Guennadi Liakhovetski wrote:
> > > > > > > On Sat, 26 Feb 2011, Hans Verkuil wrote:
> > > > > > >> On Friday, February 25, 2011 18:08:07 Guennadi Liakhovetski
> > > > > > >> wrote:
> > > > > > >> 
> > > > > > >> <snip>
> > > > > > >> 
> > > > > > >>>>> configure the sensor to react on an external trigger provided
> > > > > > >>>>> by the flash controller is needed, and that could be a
> > > > > > >>>>> control on the flash sub-device. What we would probably miss
> > > > > > >>>>> is a way to issue a STREAMON with a number of frames to
> > > > > > >>>>> capture. A new ioctl is probably needed there. Maybe that
> > > > > > >>>>> would be an opportunity to create a new stream-control ioctl
> > > > > > >>>>> that could replace STREAMON and STREAMOFF in the long term
> > > > > > >>>>> (we could extend the subdev s_stream operation, and easily
> > > > > > >>>>> map STREAMON and STREAMOFF to the new ioctl in video_ioctl2
> > > > > > >>>>> internally).
> > > > > > >>>> 
> > > > > > >>>> How would this be different from queueing n frames (in total;
> > > > > > >>>> count dequeueing, too) and issuing streamon? --- Except that
> > > > > > >>>> when the last frame is processed the pipeline could be stopped
> > > > > > >>>> already before issuing STREAMOFF. That does indeed have some
> > > > > > >>>> benefits. Something else?
> > > > > > >>> 
> > > > > > >>> Well, you usually see in your host driver, that the videobuffer
> > > > > > >>> queue is empty (no more free buffers are available), so, you
> > > > > > >>> stop streaming immediately too.
> > > > > > >> 
> > > > > > >> This probably assumes that the host driver knows that this is a
> > > > > > >> special queue? Because in general drivers will simply keep
> > > > > > >> capturing in the last buffer and not release it to userspace
> > > > > > >> until a new buffer is queued.
> > > > > > > 
> > > > > > > Yes, I know about this spec requirement, but I also know, that
> > > > > > > not all drivers do that and not everyone is happy about that
> > > > > > > requirement:)
> > > > > > 
> > > > > > Right, similarly a v4l2 output device is not releasing the last
> > > > > > buffer to userland and keeps sending its content until a new
> > > > > > buffer is queued to the driver. But in case of capture device the
> > > > > > requirement is a pain, since it only causes draining the power
> > > > > > source, when from a user view the video capture is stopped. Also it
> > > > > > limits a minimum number of buffers that could be used in preview
> > > > > > pipeline.
> > > > > 
> > > > > No, we can't change this. We can of course add some setting that will
> > > > > explicitly request different behavior.
> > > > > 
> > > > > The reason this is done this way comes from the traditional TV/webcam
> > > > > viewing apps. If for some reason the app can't keep up with the
> > > > > capture rate, then frames should just be dropped silently. All apps
> > > > > assume this behavior. In a normal user environment this scenario is
> > > > > perfectly normal (e.g. you use a webcam app, then do a CPU intensive
> > > > > make run).
> > > > 
> > > > Why couldn't drivers drop frames silently without a capture buffer ? If
> > > > the hardware can be paused, the driver could just do that when the
> > > > last buffer is given back to userspace, and resume the hardware when the
> > > > next buffer is queued.
> > > 
> > > It was my understanding that the streaming would stop if no capture
> > > buffers are available, requiring a VIDIOC_STREAMON to get it started
> > > again. Of course, there is nothing wrong with stopping the hardware and
> > > restarting it again when a new buffer becomes available if that can be
> > > done efficiently enough. Just as long as userspace doesn't notice.
> > > 
> > > Note that there are some problems with this anyway: often restarting DMA
> > > requires resyncing to the video stream, which may lead to lost frames.
> > > Also, the framecounter in struct v4l2_buffer will probably have failed
> > > to count the lost frames.
> > > 
> > > In my opinion trying this might cause more problems than it solves.
> > 
> > So, do I understand it right, that currently there are drivers, that
> > overwrite the last buffers while waiting for a new one, and ones, that
> > stop capture for that time.

Does anyone know which drivers stop capture if there are no buffers available? 
I'm not aware of any.

> > None of them violate the spec, but the former
> > will not work with the "snapshot mode," and the latter will. Since we do
> > not want / cannot enforce either way, we do need a way to tell the driver
> > to enter the "snapshot mode" even if only to not overwrite the last
> > buffer, right?
> > 
> > > > > I agree that you might want different behavior in an embedded
> > > > > environment, but that should be requested explicitly.
> > > > > 
> > > > > > In still capture mode (single shot) we might want to use only one
> > > > > > buffer so adhering to the requirement would not allow this, would
> > > > > > it?
> > > > > 
> > > > > That's one of the problems with still capture mode, yes.
> > > > > 
> > > > > I have not yet seen a proposal for this that I really like. Most are
> > > > > too specific to this use-case (snapshot) and I'd like to see
> > > > > something more general.
> > > > 
> > > > I don't think snapshot capture is *that* special. I don't expect most
> > > > embedded SoCs to implement snapshot capture in hardware. What usually
> > > > happens is that the hardware provides some support (like two independent
> > > > video streams for instance, or the ability to capture a given number of
> > > > frames) and the scheduling is performed in userspace. Good quality
> > > > snapshot capture requires complex algorithms and involves several
> > > > hardware pieces (ISP, flash controller, lens controller, ...), so it
> > > > can't be implemented in the kernel.
> > > 
> > > I agree.
> > 
> > Right, but sensors do need it. It is not enough to just tell the sensor -
> > a per-frame flash is used and let the driver figure out, that it has to
> > switch to snapshot mode. The snapshot mode has other effects too, e.g., on
> > some sensors it enables the external trigger pin, which some designs might
> > want to use also without a flash. Maybe there are also some other side
> > effects of such snapshot modes on some other sensors, that I'm not aware
> > of.
> 
> This makes me wonder if we need a snapshot mode at all. Why should we tie 
> flash, capture trigger (and other such options that you're not aware of yet 
> :-)) together under a single high-level control (in the general sense, not to 
> be strictly taken as a V4L2 CID) ? Wouldn't it be better to expose those 
> features individually instead ? User might want to use the flash in video 
> capture mode for a stroboscopic effect for instance.

I think this is certainly a good initial approach.

Can someone make a list of things needed for flash/snapshot? So don't look yet 
at the implementation, but just start a list of functionalities that we need 
to support. I don't think I have seen that yet.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:17                             ` Hans Verkuil
@ 2011-02-28 11:19                               ` Laurent Pinchart
  2011-02-28 11:54                               ` Guennadi Liakhovetski
  2011-02-28 13:33                               ` Andy Walls
  2 siblings, 0 replies; 50+ messages in thread
From: Laurent Pinchart @ 2011-02-28 11:19 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Guennadi Liakhovetski, Hans Verkuil, Sylwester Nawrocki,
	Sakari Ailus, Kim HeungJun, Linux Media Mailing List,
	Stanimir Varbanov

On Monday 28 February 2011 12:17:12 Hans Verkuil wrote:
> On Monday, February 28, 2011 12:07:33 Laurent Pinchart wrote:
> > On Monday 28 February 2011 12:02:41 Guennadi Liakhovetski wrote:
> > > On Mon, 28 Feb 2011, Hans Verkuil wrote:

[snip]

> > > > It was my understanding that the streaming would stop if no capture
> > > > buffers are available, requiring a VIDIOC_STREAMON to get it started
> > > > again. Of course, there is nothing wrong with stopping the hardware
> > > > and restarting it again when a new buffer becomes available if that
> > > > can be done efficiently enough. Just as long as userspace doesn't
> > > > notice.
> > > > 
> > > > Note that there are some problems with this anyway: often restarting
> > > > DMA requires resyncing to the video stream, which may lead to lost
> > > > frames. Also, the framecounter in struct v4l2_buffer will probably
> > > > have failed to count the lost frames.
> > > > 
> > > > In my opinion trying this might cause more problems than it solves.
> > > 
> > > So, do I understand it right, that currently there are drivers, that
> > > overwrite the last buffers while waiting for a new one, and ones, that
> > > stop capture for that time.
> 
> Does anyone know which drivers stop capture if there are no buffers
> available? I'm not aware of any.

Do you mean stop capture in a way that requires an explicit VIDIOC_STREAMON ? 
None that I'm aware of (and I think that would violate the spec). If you 
instead mean pause capture and restart it on the next VIDIOC_QBUF, uvcvideo 
(somehow) does that, and the OMAP3 ISP does as well.

> > > None of them violate the spec, but the former will not work with the
> > > "snapshot mode," and the latter will. Since we do not want / cannot
> > > enforce either way, we do need a way to tell the driver to enter the
> > > "snapshot mode" even if only to not overwrite the last buffer, right?

[snip]

> > > Right, but sensors do need it. It is not enough to just tell the sensor
> > > - a per-frame flash is used and let the driver figure out, that it has
> > > to switch to snapshot mode. The snapshot mode has other effects too,
> > > e.g., on some sensors it enables the external trigger pin, which some
> > > designs might want to use also without a flash. Maybe there are also
> > > some other side effects of such snapshot modes on some other sensors,
> > > that I'm not aware of.
> > 
> > This makes me wonder if we need a snapshot mode at all. Why should we tie
> > flash, capture trigger (and other such options that you're not aware of
> > yet :-)) together under a single high-level control (in the general sense,
> > not to be strictly taken as a V4L2 CID) ? Wouldn't it be better to expose
> > those features individually instead ? User might want to use the flash in
> > video capture mode for a stroboscopic effect for instance.
> 
> I think this is certainly a good initial approach.
> 
> Can someone make a list of things needed for flash/snapshot? So don't look
> yet at the implementation, but just start a list of functionalities that
> we need to support. I don't think I have seen that yet.

That's the right approach. I'll ping people internally to see if we have such 
a list already.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-27 21:00               ` Sakari Ailus
@ 2011-02-28 11:20                 ` Guennadi Liakhovetski
  2011-02-28 13:44                   ` Sakari Ailus
  0 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 11:20 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Sakari Ailus, Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

On Sun, 27 Feb 2011, Sakari Ailus wrote:

> Hi,
> 
> Guennadi Liakhovetski wrote:
> > On Fri, 25 Feb 2011, Sakari Ailus wrote:
> > 
> > > Hi Guennadi,
> > > 
> > > Guennadi Liakhovetski wrote:
> > > > In principle - yes, and yes, I do realise, that the couple of controls,
> > > > that I've proposed only cover a very minor subset of the whole flash
> > > > function palette. The purposes of my RFC were:
> > > 
> > > Why would there be a different interface for controlling the flash in
> > > simple cases and more complex cases?
> > 
> > Sorry, not sure what you mean. Do you mean different APIs when the flash
> > is controlled directly by the sensor and by an external controller? No, of
> > course we need one API, but you either issue those ioctl()s to the sensor
> > (sub)device, or to the dedicated flash (sub)device. If you mean my "minor
> > subset" above, then I was trying to say, that this is a basis, that has to
> > be extended, but not, that we will develop a new API for more complicated
> > cases.
> 
> I think I misunderstood you originally, sorry. I should have properly read the
> RFC. :-)
> 
> Your proposal of the flash mode is good, but what about software strobe (a
> little more on that below)?
> 
> Also, what about making this a V4L2 control instead?

These are two things, I think: first, we have to decide which functions we 
need; second, how to implement them. Sure, controls are also a 
possibility.
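
Just to illustrate what the control-based variant could look like with the 
control framework (the control ID, the menu and all "my_*" names below are 
made up for the example, nothing of this exists yet):

	static const char * const flash_mode_menu[] = {
		"Off", "Torch", "Synchronised to exposure",
	};

	static const struct v4l2_ctrl_config flash_mode_cfg = {
		.ops	= &my_sensor_ctrl_ops,		/* assumed to exist */
		.id	= V4L2_CID_PRIVATE_BASE + 0,	/* placeholder ID */
		.name	= "Flash Mode",
		.type	= V4L2_CTRL_TYPE_MENU,
		.max	= ARRAY_SIZE(flash_mode_menu) - 1,
		.qmenu	= flash_mode_menu,
	};

	/* e.g. in the subdev's probe(): */
	v4l2_ctrl_new_custom(&sensor->ctrl_handler, &flash_mode_cfg, NULL);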

> The ADP1653 driver that
> Laurent referred to implements flash control using V4L2 controls only.
> 
> A version of the driver is here:
> 
> <URL:http://gitorious.org/omap3camera/mainline/commit/a41027c857dfcbc268cf8d1c7c7d0ab8b6abac92>
> 
> It's not yet in mainline --- one reason for this is the lack of time to
> discuss a proper API for the flash. :-)
> 
> ...
> 
> > > > > This doesn't solve the flash/capture synchronization problem though. I
> > > > > don't
> > > > > think we need a dedicated snapshot capture mode at the V4L2 level. A
> > > > > way to
> > > > > configure the sensor to react on an external trigger provided by the
> > > > > flash
> > > > > controller is needed, and that could be a control on the flash
> > > > > sub-device.
> > > > 
> > > > Well... Sensors call this a "snapshot mode." I don't care that much how
> > > > we
> > > > _call_ it, but I do think, that we should be able to use it.
> > > 
> > > Some sensors and webcams might have that, but newer camera solutions
> > > tend to contain a raw bayer sensor and an ISP. There is no concept of
> > > snapshot mode in these sensors.
> > 
> > Hm, I am not sure I understand, why sensors with DSPs in them should have
> > no notion of a snapshot mode. Do they have no strobe / trigger pins? And
> > no built in possibility to synchronize with a flash?
> 
> I was referring to ISPs such as the OMAP 3 ISP. Some hardware have a flash
> strobe pin while some doesn't (such as the N900).

Of course, if no flash is present, you don't have to support it;)

> Still, even if the strobe pin is missing it should be possible to allow
> strobing the flash by using software strobe (usually an I2C message).
> 
> I agree using a hardware strobe is much much better if it's available.

Again - I don't understand. Above (the i2c message) you're referring to the 
sensor. But I don't think toggling the flash on a per-frame basis from 
software via the sensor makes much sense. That way you could also just 
wire your flash to a GPIO. The main advantage of a sensor-controlled flash 
is that it toggles the flash automatically, synchronised with its image 
read-out. You would, however, toggle the flash manually if you just 
wanted to turn it on permanently (torch-mode).
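
To illustrate the difference (all register, GPIO and function names below 
are invented):

	/* torch-mode is a simple level: a GPIO, or a single write to the
	 * flash controller, is enough - no synchronisation involved */
	static void my_flash_set_torch(struct my_flash *flash, bool on)
	{
		gpio_set_value(flash->gpio_torch, on);
	}

	/* per-frame flash: the driver only arms the sensor's strobe output,
	 * the sensor itself times the pulse to the image read-out */
	static int my_sensor_arm_strobe(struct i2c_client *client, bool arm)
	{
		/* MY_REG_STROBE_CTRL is a made-up register name */
		return i2c_smbus_write_word_data(client, MY_REG_STROBE_CTRL,
						 arm ? 1 : 0);
	}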

> > > > Hm, don't think only the "flash subdevice" has to know about this.
> > > > First,
> > > > you have to switch the sensor into that mode. Second, it might be either
> > > > external trigger from the flash controller, or a programmed trigger and
> > > > a
> > > > flash strobe from the sensor to the flash (controller). Third, well, not
> > > > quite sure, but doesn't the host have to know about the snapshot mode?
> > > 
> > > I do not favour adding use case type of functionality to interfaces that
> > > do not necessarily need it. Would the concept of a snapshot be
> > > parametrisable on V4L2 level?
> > 
> > I am open to this. I don't have a good idea of whether camera hosts have
> > to know about the snapshot mode or not. It's open for discussion.
> 
> What functionality would the snapshot mode provide? Flash synchronisation?
> Something else?

Also a pre-defined number of images, enabling of the trigger pin and of the 
trigger i2c command. Just noticed - on mt9t031 and mt9v022 the snapshot mode 
also enables externally controlled exposure...
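
Roughly, entering such a snapshot mode looks like this (the register and 
function names below are invented placeholders, not real mt9t031 offsets):

	static int my_sensor_enter_snapshot(struct i2c_client *client)
	{
		int ret;

		/* leave the free-running readout mode */
		ret = i2c_smbus_write_word_data(client, MY_REG_MODE,
						MY_MODE_SNAPSHOT);
		if (ret < 0)
			return ret;

		/* side effect: the TRIGGER pin becomes active, and exposure
		 * may now be controlled externally, as mentioned above */
		return i2c_smbus_write_word_data(client, MY_REG_TRIGGER_EN, 1);
	}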

> I have to admit I don't know of any hardware which would recognise a concept
> of "snapshot". Do you have a smart sensor which does this, for example? The
> only hardware support for the flash use I know of is the flash strobe signal.

Several Aptina / Micron sensors have such a mode, e.g., mt9p031, mt9t031, 
mt9v022, mt9m001, also ov7725 from OmniVision (there it's called a "Single 
frame" mode), and, I presume, many others. And no, those are not some 
"smart" sensors, the ones from Aptina are pretty primitive Bayer / 
monochrome cameras.

> Flash synchronisation is indeed an issue, and how to tell that a given frame
> has been exposed with flash. The use of flash is just one of the parameters
> which would be nice to connect to frames, though.

Hm, yes, that's something to think about too...

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:07                           ` Laurent Pinchart
  2011-02-28 11:17                             ` Hans Verkuil
@ 2011-02-28 11:37                             ` Guennadi Liakhovetski
  2011-02-28 12:03                               ` Sakari Ailus
  1 sibling, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 11:37 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Hans Verkuil, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, 28 Feb 2011, Laurent Pinchart wrote:

> > > > I don't think snapshot capture is *that* special. I don't expect most
> > > > embedded SoCs to implement snapshot capture in hardware. What usually
> > > > happens is that the hardware provides some support (like two independent
> > > > video streams for instance, or the ability to capture a given number of
> > > > frames) and the scheduling is performed in userspace. Good quality
> > > > snapshot capture requires complex algorithms and involves several
> > > > hardware pieces (ISP, flash controller, lens controller, ...), so it
> > > > can't be implemented in the kernel.
> > > 
> > > I agree.
> > 
> > Right, but sensors do need it. It is not enough to just tell the sensor -
> > a per-frame flash is used and let the driver figure out, that it has to
> > switch to snapshot mode. The snapshot mode has other effects too, e.g., on
> > some sensors it enables the external trigger pin, which some designs might
> > want to use also without a flash. Maybe there are also some other side
> > effects of such snapshot modes on some other sensors, that I'm not aware
> > of.
> 
> This makes me wonder if we need a snapshot mode at all. Why should we tie 
> flash, capture trigger (and other such options that you're not aware of yet 
> :-)) together under a single high-level control (in the general sense, not to 
> be strictly taken as a V4L2 CID) ? Wouldn't it be better to expose those 
> features individually instead ? User might want to use the flash in video 
> capture mode for a stroboscopic effect for instance.

So, you'd also need a separate control for external exposure; there are 
also sensors that can be configured for different shutter / exposure / 
readout sequencing... No, we don't have to support all that 
variety, but we have to be aware of it while making decisions ;)

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:17                             ` Hans Verkuil
  2011-02-28 11:19                               ` Laurent Pinchart
@ 2011-02-28 11:54                               ` Guennadi Liakhovetski
  2011-02-28 22:41                                 ` Guennadi Liakhovetski
  2011-02-28 13:33                               ` Andy Walls
  2 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 11:54 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Hans Verkuil, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, 28 Feb 2011, Hans Verkuil wrote:

> Does anyone know which drivers stop capture if there are no buffers available? 
> I'm not aware of any.

Many soc-camera hosts do that.

> I think this is certainly a good initial approach.
> 
> Can someone make a list of things needed for flash/snapshot? So don't look yet 
> at the implementation, but just start a list of functionalities that we need 
> to support. I don't think I have seen that yet.

These are not the features that we _have_ to implement, these are just 
the ones that are related to the snapshot mode (a purely illustrative 
sketch of how they might map onto controls follows after the list):

* flash strobe (provided we do not want to control its timing from 
	generic controls, and leave that to "reasonable defaults" or to 
	private controls)
* trigger pin / command
* external exposure
* exposure mode (ERS, GRR,...)
* use "trigger" or "shutter" for readout
* number of frames to capture
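
One purely illustrative way to map these onto controls - none of these IDs 
exist, and the frame count probably belongs to VIDIOC_S_PARM 
(::readbuffers) rather than to a control:

	#define MY_CID_FLASH_STROBE	(V4L2_CID_PRIVATE_BASE + 0) /* button */
	#define MY_CID_TRIGGER_SOURCE	(V4L2_CID_PRIVATE_BASE + 1) /* menu: pin / i2c */
	#define MY_CID_EXT_EXPOSURE	(V4L2_CID_PRIVATE_BASE + 2) /* boolean */
	#define MY_CID_EXPOSURE_MODE	(V4L2_CID_PRIVATE_BASE + 3) /* menu: ERS / GRR */
	#define MY_CID_READOUT_SOURCE	(V4L2_CID_PRIVATE_BASE + 4) /* menu: trigger / shutter */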

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 20:56               ` Guennadi Liakhovetski
@ 2011-02-28 11:57                 ` Guennadi Liakhovetski
  2011-03-06  9:53                 ` Sakari Ailus
  1 sibling, 0 replies; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 11:57 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

On Fri, 25 Feb 2011, Guennadi Liakhovetski wrote:

> On Fri, 25 Feb 2011, Sakari Ailus wrote:
> 
> > On Fri, Feb 25, 2011 at 06:08:07PM +0100, Guennadi Liakhovetski wrote:
> > 
> > > What we could also do, we could add an optional callback to subdev (core?) 
> > > operations, which, if activated, the host would call on each frame 
> > > completion.
> > 
> > It's not quite that simple. The exposure of the next frame has started long
> > time before that. This requires much more thought probably --- in the case
> > of lack of hardware support, when the parameters need to be actually given
> > to the sensor depend somewhat on sensors, I suppose.
> 
> Yes, that's right. I seem to remember, there was a case, for which such a 
> callback would have been useful... Don't remember what that was though.

I remember now :) I meant to use it to trigger the next frame from 
software via a sensor I2C command, if no hardware trigger is available.
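
A minimal sketch of what I mean, as a hypothetical optional subdev 
operation (the callback does not exist, the host would invoke it from its 
frame-done interrupt handler; MY_REG_TRIGGER is a made-up register name):

	/* in the sensor driver: software-trigger the next snapshot frame */
	static int my_sensor_frame_done(struct v4l2_subdev *sd)
	{
		struct i2c_client *client = v4l2_get_subdevdata(sd);

		return i2c_smbus_write_word_data(client, MY_REG_TRIGGER, 1);
	}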

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:37                             ` Guennadi Liakhovetski
@ 2011-02-28 12:03                               ` Sakari Ailus
  2011-02-28 12:44                                 ` Guennadi Liakhovetski
  0 siblings, 1 reply; 50+ messages in thread
From: Sakari Ailus @ 2011-02-28 12:03 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Hans Verkuil, Hans Verkuil, Sylwester Nawrocki,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, Feb 28, 2011 at 12:37:06PM +0100, Guennadi Liakhovetski wrote:
> So, you'd also need a separate control for external exposure, there are 
> also sensors, that can be configured to different shutter / exposure / 
> readout sequence controlling... No, we don't have to support all that 
> variety, but we have to be aware of it, while making decisions;)

Hi Guennadi,

Do you mean that there are sensors that can synchronise these parameters at 
the frame level, or something else? There are use cases for that, but they 
are not limited to still capture.

Are there any public datasheets that you know of on these?

Regards,

-- 
Sakari Ailus
sakari dot ailus at iki dot fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 12:03                               ` Sakari Ailus
@ 2011-02-28 12:44                                 ` Guennadi Liakhovetski
  2011-02-28 15:07                                   ` Sakari Ailus
  0 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 12:44 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Hans Verkuil, Hans Verkuil, Sylwester Nawrocki,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, 28 Feb 2011, Sakari Ailus wrote:

> On Mon, Feb 28, 2011 at 12:37:06PM +0100, Guennadi Liakhovetski wrote:
> > So, you'd also need a separate control for external exposure, there are 
> > also sensors, that can be configured to different shutter / exposure / 
> > readout sequence controlling... No, we don't have to support all that 
> > variety, but we have to be aware of it, while making decisions;)
> 
> Hi Guennadi,
> 
> Do you mean that there are sensors that can synchronise these parameters at
> frame level, or how? There are use cases for that but it doesn't limit to
> still capture.

No, sorry, I don't mean the exposure value; by "external exposure" I meant 
the EXPOSURE pin. But in fact, as I see now, it is just another name for the 
TRIGGER pin :( It is, however, what we do have on some sensors, e.g., on the 
MT9T031.

On mt9t031 they distinguish between the beginning of the shutter sequence, 
the exposure and the read sequence, and depending on a parameter they 
decide which signals to use to start which action.

> Are there any public datasheets that you know of on these?

I think I just searched for mt9t031 and found a datasheet somewhere in 
the wild...

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:17                             ` Hans Verkuil
  2011-02-28 11:19                               ` Laurent Pinchart
  2011-02-28 11:54                               ` Guennadi Liakhovetski
@ 2011-02-28 13:33                               ` Andy Walls
  2011-02-28 13:37                                 ` Andy Walls
  2 siblings, 1 reply; 50+ messages in thread
From: Andy Walls @ 2011-02-28 13:33 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Guennadi Liakhovetski, Hans Verkuil,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Mon, 2011-02-28 at 12:17 +0100, Hans Verkuil wrote:
> On Monday, February 28, 2011 12:07:33 Laurent Pinchart wrote:

> > > So, do I understand it right, that currently there are drivers, that
> > > overwrite the last buffers while waiting for a new one, and ones, that
> > > stop capture for that time.
> 
> Does anyone know which drivers stop capture if there are no buffers available? 
> I'm not aware of any.

Not that it is a camera driver, but...

cx18 will stall the stream, due to the CX23418 engine being starved of
buffers for that stream, if the application doesn't read the buffers.
The reasoning for this behavior is that one large gap is better than a
series of small gaps, if the application has fallen behind.

The exceptional case is the cx18 MPEG Index stream, which will steal the
oldest buffers back in the call to cx18_stream_rotate_idx_mdls().

The CX23418 engine seems to gracefully handle being starved of buffers
for a stream for a period of time.

This driver does not use videobuf currently.

-Andy


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 13:33                               ` Andy Walls
@ 2011-02-28 13:37                                 ` Andy Walls
  0 siblings, 0 replies; 50+ messages in thread
From: Andy Walls @ 2011-02-28 13:37 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Guennadi Liakhovetski, Hans Verkuil,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Mon, 2011-02-28 at 08:33 -0500, Andy Walls wrote:
> On Mon, 2011-02-28 at 12:17 +0100, Hans Verkuil wrote:
> > On Monday, February 28, 2011 12:07:33 Laurent Pinchart wrote:
> 
> > > > So, do I understand it right, that currently there are drivers, that
> > > > overwrite the last buffers while waiting for a new one, and ones, that
> > > > stop capture for that time.
> > 
> > Does anyone know which drivers stop capture if there are no buffers available? 
> > I'm not aware of any.
> 
> Not that it is a camera driver, but...
> 
> cx18 will stall the stream, due to the CX23418 engine being starved of
> buffers for that stream, if the application doesn't read the buffers.

> The reasoning for this behavior is that one large gap is better than a
> series of small gaps, if the application has fallen behind.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Gah.  That didn't make sense.  I need more coffee before sending email.


-Andy


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:20                 ` Guennadi Liakhovetski
@ 2011-02-28 13:44                   ` Sakari Ailus
  0 siblings, 0 replies; 50+ messages in thread
From: Sakari Ailus @ 2011-02-28 13:44 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

On Mon, Feb 28, 2011 at 12:20:52PM +0100, Guennadi Liakhovetski wrote:
> > > Sorry, not sure what you mean. Do you mean different APIs when the flash
> > > is controlled directly by the sensor and by an external controller? No, of
> > > course we need one API, but you either issue those ioctl()s to the sensor
> > > (sub)device, or to the dedicated flash (sub)device. If you mean my "minor
> > > subset" above, then I was trying to say, that this is a basis, that has to
> > > be extended, but not, that we will develop a new API for more complicated
> > > cases.
> > 
> > I think I misunderstood you originally, sorry. I should have properly read the
> > RFC. :-)
> > 
> > Your proposal of the flash mode is good, but what about software strobe (a
> > little more on that below)?
> > 
> > Also, what about making this a V4L2 control instead?
> 
> These are two things, I think: first we have to decide which functions we 
> need, second - how to implement them. Sure, controls are also a 
> possibility.

Agreed.

...

> > > > > > This doesn't solve the flash/capture synchronization problem though. I
> > > > > > don't
> > > > > > think we need a dedicated snapshot capture mode at the V4L2 level. A
> > > > > > way to
> > > > > > configure the sensor to react on an external trigger provided by the
> > > > > > flash
> > > > > > controller is needed, and that could be a control on the flash
> > > > > > sub-device.
> > > > > 
> > > > > Well... Sensors call this a "snapshot mode." I don't care that much how
> > > > > we
> > > > > _call_ it, but I do think, that we should be able to use it.
> > > > 
> > > > Some sensors and webcams might have that, but newer camera solutions
> > > > tend to contain a raw bayer sensor and an ISP. There is no concept of
> > > > snapshot mode in these sensors.
> > > 
> > > Hm, I am not sure I understand, why sensors with DSPs in them should have
> > > no notion of a snapshot mode. Do they have no strobe / trigger pins? And
> > > no built in possibility to synchronize with a flash?
> > 
> > I was referring to ISPs such as the OMAP 3 ISP. Some hardware have a flash
> > strobe pin while some doesn't (such as the N900).
> 
> Of course, if no flash is present, you don't have to support it;)

There is flash but no hardware flash strobe. Ok? :-)

> > Still, even if the strobe pin is missing it should be possible to allow
> > strobing the flash by using software strobe (usually an I2C message).
> > 
> > I agree using a hardware strobe is much much better if it's available.
> 
> Again - don't understand. Above (i2c message) you're referring to the 
> sensor. But I don't think toggling the flash on per-frame basis from 

I'm referring to the flash controller, not the sensor. They're often
controlled using the I2C bus.

> software via the sensor makes much sense. That way you could also just 
> wire your flash to a GPIO. The main advantage of a sensor-controlled flash 
> is, that is toggles the flash automatically, synchronised with its image 
> read-out. You would, however, toggle the flash manually, if you just 

This is very true, but unfortunately not all hardware has a separate flash 
strobe signal.

> wanted to turn it on permanently (torch-mode).

Not quite. There are use cases for the torch mode (an application called
Torch, for example).

If the hardware strobe is absent, the flash must be strobed in software. 
This means that the user has to be more aware of the sensor pixel array 
exposure timing than would otherwise be necessary.
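
For a rolling-shutter sensor the constraint looks roughly like this (a 
sketch with invented names): the strobe must fall into the interval in 
which all rows are exposing simultaneously, and such an interval only 
exists when the exposure time exceeds the frame readout time.

	static int my_strobe_window_us(u32 exposure_us, u32 readout_us,
				       u32 *start_us, u32 *len_us)
	{
		if (exposure_us <= readout_us)
			return -EINVAL;	/* rows never all expose together */

		*start_us = readout_us;	/* the last row has started exposing */
		*len_us = exposure_us - readout_us; /* until the first row stops */
		return 0;
	}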

> > > > > Hm, don't think only the "flash subdevice" has to know about this.
> > > > > First,
> > > > > you have to switch the sensor into that mode. Second, it might be either
> > > > > external trigger from the flash controller, or a programmed trigger and
> > > > > a
> > > > > flash strobe from the sensor to the flash (controller). Third, well, not
> > > > > quite sure, but doesn't the host have to know about the snapshot mode?
> > > > 
> > > > I do not favour adding use case type of functionality to interfaces that
> > > > do not necessarily need it. Would the concept of a snapshot be
> > > > parametrisable on V4L2 level?
> > > 
> > > I am open to this. I don't have a good idea of whether camera hosts have
> > > to know about the snapshot mode or not. It's open for discussion.
> > 
> > What functionality would the snapshot mode provide? Flash synchronisation?
> > Something else?
> 
> Also pre-defined number of images, enabling of the trigger pin and trigger 
> i2c command. Just noticed - on mt9t031 and mt9v022 the snapshot mode also 
> enables externally controlled exposure...

Is the externally controlled exposure somehow essentially different from 
just sending new exposure settings to the sensor via I2C?

> > I have to admit I don't know of any hardware which would recognise a concept
> > of "snapshot". Do you have a smart sensor which does this, for example? The
> > only hardware support for the flash use I know of is the flash strobe signal.
> 
> Several Aptina / Micron sensors have such a mode, e.g., mt9p031, mt9t031, 
> mt9v022, mt9m001, also ov7725 from OmniVision (there it's called a "Single 
> frame" mode), and, I presume, many others. And no, those are not some 
> "smart" sensors, the ones from Aptina are pretty primitive Bayer / 
> monochrome cameras.

Are there use cases for such modes? I mean, are there uses for these 
modes that cannot be covered by the regular streaming mode?

> > Flash synchronisation is indeed an issue, and how to tell that a given frame
> > has been exposed with flash. The use of flash is just one of the parameters
> > which would be nice to connect to frames, though.
> 
> Hm, yes, that's something to think about too...

One of the many things. :-) :-)

Best regards,

-- 
Sakari Ailus
sakari dot ailus at iki dot fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 12:44                                 ` Guennadi Liakhovetski
@ 2011-02-28 15:07                                   ` Sakari Ailus
  0 siblings, 0 replies; 50+ messages in thread
From: Sakari Ailus @ 2011-02-28 15:07 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Hans Verkuil, Hans Verkuil, Sylwester Nawrocki,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, Feb 28, 2011 at 01:44:25PM +0100, Guennadi Liakhovetski wrote:
> On Mon, 28 Feb 2011, Sakari Ailus wrote:
> 
> > On Mon, Feb 28, 2011 at 12:37:06PM +0100, Guennadi Liakhovetski wrote:
> > > So, you'd also need a separate control for external exposure, there are 
> > > also sensors, that can be configured to different shutter / exposure / 
> > > readout sequence controlling... No, we don't have to support all that 
> > > variety, but we have to be aware of it, while making decisions;)
> > 
> > Hi Guennadi,
> > 
> > Do you mean that there are sensors that can synchronise these parameters at
> > frame level, or how? There are use cases for that but it doesn't limit to
> > still capture.
> 
No, sorry, I don't mean the exposure value; by "external exposure" I meant the 
EXPOSURE pin. But in fact, as I see now, it is just another name for the 
TRIGGER pin:( But that is what we do have on some sensors, e.g., on MT9T031.

The partial datasheet of that sensor I was able to find on the Aptina website
suggests there are separate trigger and strobe pins. Trigger is another name
for global reset, which can be performed over I2C as well, also on that
sensor.
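
As a rough sketch, a programmed trigger on such a sensor could then be
exposed through the snapshot_trigger operation proposed in this RFC.
The register name and bit value below are hypothetical, and byte order
is ignored for brevity:

	#define MT9_RESET_REG	0x0d	/* hypothetical register address */
	#define MT9_TRIGGER	0x0004	/* hypothetical global-reset / trigger bit */

	static int sensor_snapshot_trigger(struct v4l2_subdev *sd)
	{
		struct i2c_client *client = v4l2_get_subdevdata(sd);
		/* read-modify-write: pulse the trigger bit over I2C */
		int ret = i2c_smbus_read_word_data(client, MT9_RESET_REG);

		if (ret < 0)
			return ret;

		return i2c_smbus_write_word_data(client, MT9_RESET_REG,
						 ret | MT9_TRIGGER);
	}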

> On mt9t031 they distinguish between the beginning of the shutter sequence, 
> the exposure and the read sequence, and depending on a parameter they 
> decide which signals to use to start which action.

The global reset and mechanical shutter control are exclusively related to
operation with a mechanical shutter as far as I know. This is essentially
extra hardware to enhance one use case a little.

What about newer sensors; do they support this kind of functionality?

Regards,

-- 
Sakari Ailus
sakari dot ailus at iki dot fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 11:54                               ` Guennadi Liakhovetski
@ 2011-02-28 22:41                                 ` Guennadi Liakhovetski
  2011-03-02 17:51                                   ` Guennadi Liakhovetski
  0 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-28 22:41 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Hans Verkuil, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:

> On Mon, 28 Feb 2011, Hans Verkuil wrote:
> 
> > Does anyone know which drivers stop capture if there are no buffers available? 
> > I'm not aware of any.
> 
> Many soc-camera hosts do that.
> 
> > I think this is certainly a good initial approach.
> > 
> > Can someone make a list of things needed for flash/snapshot? So don't look yet 
> > at the implementation, but just start a list of functionalities that we need 
> > to support. I don't think I have seen that yet.
> 
> These are not the features, that we _have_ to implement, these are just 
> the ones, that are related to the snapshot mode:
> 
> * flash strobe (provided, we do not want to control its timing from 
> 	generic controls, and leave that to "reasonable defaults" or to 
> 	private controls)
> * trigger pin / command
> * external exposure
> * exposure mode (ERS, GRR,...)
> * use "trigger" or "shutter" for readout
> * number of frames to capture

Add

* multiple videobuffer queues

to the list

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-28 22:41                                 ` Guennadi Liakhovetski
@ 2011-03-02 17:51                                   ` Guennadi Liakhovetski
  2011-03-02 18:19                                     ` Hans Verkuil
  0 siblings, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-03-02 17:51 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, Hans Verkuil, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

...Just occurred to me:

On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:

> On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> 
> > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > 
> > > Does anyone know which drivers stop capture if there are no buffers available? 
> > > I'm not aware of any.
> > 
> > Many soc-camera hosts do that.
> > 
> > > I think this is certainly a good initial approach.
> > > 
> > > Can someone make a list of things needed for flash/snapshot? So don't look yet 
> > > at the implementation, but just start a list of functionalities that we need 
> > > to support. I don't think I have seen that yet.
> > 
> > These are not the features, that we _have_ to implement, these are just 
> > the ones, that are related to the snapshot mode:
> > 
> > * flash strobe (provided, we do not want to control its timing from 
> > 	generic controls, and leave that to "reasonable defaults" or to 
> > 	private controls)

Wouldn't it be a good idea to also export an LED (drivers/leds/) API from 
our flash implementation? At least for applications like torch. Downside: 
the LED API itself is not advanced enough for all our uses, and exporting 
two interfaces to the same device is usually a bad idea. Still, 
conceptually it seems to be a good fit.
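
For illustration, a minimal sketch of exporting a torch function through
the LED class API (the my_flash driver names here are made up):

	#include <linux/leds.h>

	struct my_flash {			/* hypothetical driver state */
		struct led_classdev led;
		/* ... */
	};

	static void my_flash_led_set(struct led_classdev *cdev,
				     enum led_brightness value)
	{
		struct my_flash *flash = container_of(cdev, struct my_flash, led);

		/* switch the torch current on or off in hardware */
		my_flash_set_torch(flash, value != LED_OFF);	/* hypothetical */
	}

	/* in probe(): */
	flash->led.name = "camera:white:torch";
	flash->led.brightness_set = my_flash_led_set;
	flash->led.max_brightness = LED_FULL;
	ret = led_classdev_register(&client->dev, &flash->led);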

Thanks
Guennadi

> > * trigger pin / command
> > * external exposure
> > * exposure mode (ERS, GRR,...)
> > * use "trigger" or "shutter" for readout
> > * number of frames to capture
> 
> Add
> 
> * multiple videobuffer queues
> 
> to the list

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-02 17:51                                   ` Guennadi Liakhovetski
@ 2011-03-02 18:19                                     ` Hans Verkuil
  2011-03-03  1:05                                       ` Andy Walls
  2011-03-03  8:02                                       ` Guennadi Liakhovetski
  0 siblings, 2 replies; 50+ messages in thread
From: Hans Verkuil @ 2011-03-02 18:19 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Hans Verkuil, Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> ...Just occurred to me:
> 
> On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> 
> > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > 
> > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > 
> > > > Does anyone know which drivers stop capture if there are no buffers available? 
> > > > I'm not aware of any.
> > > 
> > > Many soc-camera hosts do that.
> > > 
> > > > I think this is certainly a good initial approach.
> > > > 
> > > > Can someone make a list of things needed for flash/snapshot? So don't look yet 
> > > > at the implementation, but just start a list of functionalities that we need 
> > > > to support. I don't think I have seen that yet.
> > > 
> > > These are not the features, that we _have_ to implement, these are just 
> > > the ones, that are related to the snapshot mode:
> > > 
> > > * flash strobe (provided, we do not want to control its timing from 
> > > 	generic controls, and leave that to "reasonable defaults" or to 
> > > 	private controls)
> 
> Wouldn't it be a good idea to also export an LED (drivers/leds/) API from 
> our flash implementation? At least for applications like torch. Downside: 
> the LED API itself is not advanced enough for all our uses, and exporting 
> two interfaces to the same device is usually a bad idea. Still, 
> conceptually it seems to be a good fit.

I believe we discussed LEDs before (during a discussion about adding illuminator
controls). I think the preference was to export LEDs as V4L controls.

In general I am no fan of exporting multiple interfaces. It only leads to double
maintenance and I see no noticeable advantage to userspace, only confusion.

Just my 2 cents.

Regards,

	Hans

> 
> Thanks
> Guennadi
> 
> > > * trigger pin / command
> > > * external exposure
> > > * exposure mode (ERS, GRR,...)
> > > * use "trigger" or "shutter" for readout
> > > * number of frames to capture
> > 
> > Add
> > 
> > * multiple videobuffer queues
> > 
> > to the list
> 
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/

-- 
Hans Verkuil - video4linux developer - sponsored by Cisco

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-02 18:19                                     ` Hans Verkuil
@ 2011-03-03  1:05                                       ` Andy Walls
  2011-03-03 11:50                                         ` Laurent Pinchart
  2011-03-03  8:02                                       ` Guennadi Liakhovetski
  1 sibling, 1 reply; 50+ messages in thread
From: Andy Walls @ 2011-03-03  1:05 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Guennadi Liakhovetski, Hans Verkuil, Laurent Pinchart,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Wed, 2011-03-02 at 19:19 +0100, Hans Verkuil wrote:
> On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > ...Just occurred to me:
> > 
> > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > 
> > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > 
> > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:

> > > > These are not the features, that we _have_ to implement, these are just 
> > > > the ones, that are related to the snapshot mode:
> > > > 
> > > > * flash strobe (provided, we do not want to control its timing from 
> > > > 	generic controls, and leave that to "reasonable defaults" or to 
> > > > 	private controls)

I consider a flash strobe to be an illuminator.  It modifies the subject
matter to be captured in the image.

 
> > Wouldn't it be a good idea to also export an LED (drivers/leds/) API from 
> > our flash implementation? At least for applications like torch. Downside: 
> > the LED API itself is not advanced enough for all our uses, and exporting 
> > two interfaces to the same device is usually a bad idea. Still, 
> > conceptually it seems to be a good fit.
> 
> I believe we discussed LEDs before (during a discussion about adding illuminator
> controls). I think the preference was to export LEDs as V4L controls.

That is certainly my preference, especially for LEDs integrated into
what the end user considers a discrete, consumer electronics device:
e.g. a USB connected webcam or microscope.

I cannot imagine a real use-case repurposing the flash strobe of a
camera for purposes other than subject matter illumination.  (Inducing
seizures?  An intrusion detection system's alarm that doesn't use the
camera to which the flash is connected?)

For laptop frame integrated webcam LEDs, I can understand the desire to
perhaps co-opt the LED for some other indicator purpose.  A WLAN NIC
traffic indicator was suggested previously.

Does anyone know of any example where it could possibly make sense to
repurpose the LED of a discrete external camera or capture device for
some indication other than the camera/capture function?  (I consider
both extinguishing the LED for lighting purposes, and manipulating the
LED for the purpose of deception of the actual state of the
camera/capture function, still related to the camera function.)



> In general I am no fan of exporting multiple interfaces. It only leads to double
> maintenance and I see no noticeable advantage to userspace, only confusion.

I agree.

Regards,
Andy



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-24 12:18 [RFC] snapshot mode, flash capabilities and control Guennadi Liakhovetski
  2011-02-24 12:40 ` Hans Verkuil
@ 2011-03-03  7:09 ` Kim, HeungJun
  2011-03-03  7:30   ` Guennadi Liakhovetski
  1 sibling, 1 reply; 50+ messages in thread
From: Kim, HeungJun @ 2011-03-03  7:09 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Linux Media Mailing List

Hi Guennadi,

I have another question about capture, not directly related to this topic.

Does the sensor you use generate EXIF information itself while capturing?

If so, how would EXIF information be delivered from V4L2 (a subdev or media
driver) to the user application?

Regards,
Heungjun Kim





^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03  7:09 ` Kim, HeungJun
@ 2011-03-03  7:30   ` Guennadi Liakhovetski
  0 siblings, 0 replies; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-03-03  7:30 UTC (permalink / raw)
  To: Kim, HeungJun; +Cc: Linux Media Mailing List

Hi

On Thu, 3 Mar 2011, Kim, HeungJun wrote:

> Hi Guennadi,
> 
> I have another question about capture, not directly related to this topic.
> 
> Does the sensor you use generate EXIF information itself while capturing?

So far we have no sensors that we know to deliver any metainformation 
with frames. There are a couple of sensors where we suspect that a part 
of the image data might be metadata, but we don't know for sure.

> If so, how would EXIF information be delivered from V4L2 (a subdev or media
> driver) to the user application?

I don't think this is currently possible, and it is among the topics to be 
discussed at the upcoming V4L2 summit.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-02 18:19                                     ` Hans Verkuil
  2011-03-03  1:05                                       ` Andy Walls
@ 2011-03-03  8:02                                       ` Guennadi Liakhovetski
  2011-03-03  9:25                                         ` Hans Verkuil
  1 sibling, 1 reply; 50+ messages in thread
From: Guennadi Liakhovetski @ 2011-03-03  8:02 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Hans Verkuil, Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Wed, 2 Mar 2011, Hans Verkuil wrote:

> On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > ...Just occurred to me:
> > 
> > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > 
> > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > 
> > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > > 
> > > > > Does anyone know which drivers stop capture if there are no buffers available? 
> > > > > I'm not aware of any.
> > > > 
> > > > Many soc-camera hosts do that.
> > > > 
> > > > > I think this is certainly a good initial approach.
> > > > > 
> > > > > Can someone make a list of things needed for flash/snapshot? So don't look yet 
> > > > > at the implementation, but just start a list of functionalities that we need 
> > > > > to support. I don't think I have seen that yet.
> > > > 
> > > > These are not the features, that we _have_ to implement, these are just 
> > > > the ones, that are related to the snapshot mode:
> > > > 
> > > > * flash strobe (provided, we do not want to control its timing from 
> > > > 	generic controls, and leave that to "reasonable defaults" or to 
> > > > 	private controls)
> > 
> > Wouldn't it be a good idea to also export an LED (drivers/leds/) API from 
> > our flash implementation? At least for applications like torch. Downside: 
> > the LED API itself is not advanced enough for all our uses, and exporting 
> > two interfaces to the same device is usually a bad idea. Still, 
> > conceptually it seems to be a good fit.
> 
> I believe we discussed LEDs before (during a discussion about adding illuminator
> controls). I think the preference was to export LEDs as V4L controls.

Unfortunately, I missed that one.

> In general I am no fan of exporting multiple interfaces. It only leads to double
> maintenance and I see no noticeable advantage to userspace, only confusion.

On the one hand - yes, but OTOH: think about MFDs. Also think about some 
other functions internal to cameras, like I2C busses. Previously those I2C 
busses were handled internally, but we now prefer to properly export them 
at the system level and to abstract devices on them as normal I2C devices. 
Think about audio, say, on HDMI. I don't think we have any such examples 
in the mainline atm, but if you have to implement an HDMI output as a V4L2 
device - you will export a standard audio interface too, and they probably 
will share some register space, at least on the PHY. Think about cameras 
with a separate illumination sensor (yes, I have such a webcam, which has 
a separate sensor window, used to control its "flash" LEDs; no idea 
whether that's also available to the user, it works automatically - cover 
the sensor and the LEDs go on;)) - wouldn't you export it as an "ambient 
light sensor" device?

Wouldn't using a standard API like the LED one make it easier to cover the 
variety of implementations: a sensor-driven strobe, an external dedicated 
flash controller coupled with the sensor, a primitive GPIO- or PWM-operated 
light? The LED API also has an in-kernel part (triggers) and a 
user interface (sysfs), which is also something that we need. Consider a 
case when you have some LED controller on the system that controls 
several LEDs, some for camera status, some for other system statuses.
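
For the in-kernel part, a minimal sketch of such a capture-status trigger
(the trigger name and the call sites are made up):

	#include <linux/leds.h>

	DEFINE_LED_TRIGGER(camera_capture_trig);

	/* in module init: */
	led_trigger_register_simple("camera-capture", &camera_capture_trig);

	/* when the host starts and stops streaming, respectively: */
	led_trigger_event(camera_capture_trig, LED_FULL);
	led_trigger_event(camera_capture_trig, LED_OFF);

Any LED on the system, camera-related or not, could then be bound to that
trigger from sysfs.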

So, not sure...

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03  8:02                                       ` Guennadi Liakhovetski
@ 2011-03-03  9:25                                         ` Hans Verkuil
  0 siblings, 0 replies; 50+ messages in thread
From: Hans Verkuil @ 2011-03-03  9:25 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Hans Verkuil, Laurent Pinchart, Sylwester Nawrocki, Sakari Ailus,
	Kim HeungJun, Linux Media Mailing List, Stanimir Varbanov

On Thursday, March 03, 2011 09:02:20 Guennadi Liakhovetski wrote:
> On Wed, 2 Mar 2011, Hans Verkuil wrote:
> 
> > On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > > ...Just occurred to me:
> > > 
> > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > 
> > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > 
> > > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > > > 
> > > > > > Does anyone know which drivers stop capture if there are no buffers available? 
> > > > > > I'm not aware of any.
> > > > > 
> > > > > Many soc-camera hosts do that.
> > > > > 
> > > > > > I think this is certainly a good initial approach.
> > > > > > 
> > > > > > Can someone make a list of things needed for flash/snapshot? So don't look yet 
> > > > > > at the implementation, but just start a list of functionalities that we need 
> > > > > > to support. I don't think I have seen that yet.
> > > > > 
> > > > > These are not the features, that we _have_ to implement, these are just 
> > > > > the ones, that are related to the snapshot mode:
> > > > > 
> > > > > * flash strobe (provided, we do not want to control its timing from 
> > > > > 	generic controls, and leave that to "reasonable defaults" or to 
> > > > > 	private controls)
> > > 
> > > Wouldn't it be a good idea to also export an LED (drivers/leds/) API from 
> > > our flash implementation? At least for applications like torch. Downside: 
> > > the LED API itself is not advanced enough for all our uses, and exporting 
> > > two interfaces to the same device is usually a bad idea. Still, 
> > > conceptually it seems to be a good fit.
> > 
> > I believe we discussed LEDs before (during a discussion about adding illuminator
> > controls). I think the preference was to export LEDs as V4L controls.
> 
> Unfortunately, I missed that one.
> 
> > In general I am no fan of exporting multiple interfaces. It only leads to double
> > maintenance and I see no noticeable advantage to userspace, only confusion.
> 
> On the one hand - yes, but OTOH: think about MFDs. Also think about some 
> other functions internal to cameras, like I2C busses. Previously those I2C 
> busses were handled internally, but we now prefer to properly export them 
> at the system level and to abstract devices on them as normal I2C devices. 
> Think about audio, say, on HDMI. I don't think we have any such examples 
> in the mainline atm, but if you have to implement an HDMI output as a V4L2 
> device - you will export a standard audio interface too, and they probably 
> will share some register space, at least on the PHY.

And the fact that audio is handled through a separate device has always been a 
source of problems. That said, the complexity of audio and the fact that you 
want to make it available to audio applications override the problems 
introduced by requiring separate devices/APIs.

> Think about cameras with a separate illumination sensor (yes, I have such 
> a webcam, which has a separate sensor window, used to control its "flash" 
> LEDs; no idea whether that's also available to the user, it works 
> automatically - cover the sensor and the LEDs go on;)) - wouldn't you 
> export it as an "ambient light sensor" device?

For me it would depend on how it is used. If it is a 'random' LED that does 
not relate to the general functioning of the device, then the standard LED API is 
perfectly fine. But I would associate it more with test LEDs on a developer 
board, not with LEDs that are clearly part of the device.

One other advantage of having this as e.g. controls is that they will appear 
in the list of V4L2 controls that most apps show. It is immediately available 
to the end user. If we would do this as a LED driver then apps would need to 
find the LEDs (presumably using the media controller), somehow discover what 
the LED is for and then show the possible functions of the LED to the end 
user.
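
For reference, the discovery a typical V4L2 application already performs is
just the standard VIDIOC_QUERYCTRL loop, roughly:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		struct v4l2_queryctrl qc;
		int fd = open("/dev/video0", O_RDWR);

		if (fd < 0)
			return 1;

		memset(&qc, 0, sizeof(qc));
		qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
		while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
			if (!(qc.flags & V4L2_CTRL_FLAG_DISABLED))
				printf("control: %s\n", qc.name);
			qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
		}
		return 0;
	}

An illuminator control would show up there with no extra effort on the
application side.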

Frankly, that's never going to happen.

> Wouldn't using a standard API like the LED one make it easier to cover the 
> variety of implementations: a sensor-driven strobe, an external dedicated 
> flash controller coupled with the sensor, a primitive GPIO- or PWM-operated 
> light? The LED API also has an in-kernel part (triggers) and a 
> user interface (sysfs), which is also something that we need. Consider a 
> case when you have some LED controller on the system that controls 
> several LEDs, some for camera status, some for other system statuses.
> 
> So, not sure...

You have more flexibility on the SoC since you can make the reasonable 
assumption that the software running on it knows the hardware. So I am not 
saying that you should never do it using the standard LED driver. But here too 
I think it depends on how the LED is implemented in hardware: if it is clearly 
specific to the camera flash (e.g. because it is a separate i2c flash 
controller), then I would expect to see it implemented as a control (or at 
least as part of the V4L2 API). If it is controlled through a generic LED 
controller, then it might more sense to use the LED driver and add a reference 
to it in the media controller.

But I suspect that that will be the exception rather than the rule.

Regards,

	Hans

> 
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03  1:05                                       ` Andy Walls
@ 2011-03-03 11:50                                         ` Laurent Pinchart
  2011-03-03 13:56                                           ` Andy Walls
  0 siblings, 1 reply; 50+ messages in thread
From: Laurent Pinchart @ 2011-03-03 11:50 UTC (permalink / raw)
  To: Andy Walls
  Cc: Hans Verkuil, Guennadi Liakhovetski, Hans Verkuil,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

Hi Andy,

On Thursday 03 March 2011 02:05:00 Andy Walls wrote:
> On Wed, 2011-03-02 at 19:19 +0100, Hans Verkuil wrote:
> > On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > > ...Just occurred to me:
> > > 
> > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > > > 
> > > > > These are not the features, that we _have_ to implement, these are
> > > > > just the ones, that are related to the snapshot mode:
> > > > > 
> > > > > * flash strobe (provided, we do not want to control its timing from
> > > > > 
> > > > > 	generic controls, and leave that to "reasonable defaults" or to
> > > > > 	private controls)
> 
> I consider a flash strobe to be an illuminator.  It modifies the subject
> matter to be captured in the image.
> 
> > > Wouldn't it be a good idea to also export an LED (drivers/leds/) API
> > > from our flash implementation? At least for applications like torch.
> > > Downside: the LED API itself is not advanced enough for all our uses,
> > > and exporting two interfaces to the same device is usually a bad idea.
> > > Still, conceptually it seems to be a good fit.
> > 
> > I believe we discussed LEDs before (during a discussion about adding
> > illuminator controls). I think the preference was to export LEDs as V4L
> > controls.
> 
> That is certainly my preference, especially for LEDs integrated into
> what the end user considers a discrete, consumer electronics device:
> e.g. a USB connected webcam or microscope.
> 
> I cannot imagine a real use-case repurposing the flash strobe of a
> camera for purposes other than subject matter illumination.  (Inducing
> seizures?  An intrusion detection system's alarm that doesn't use the
> camera to which the flash is connected?)
> 
> For laptop frame integrated webcam LEDs, I can understand the desire to
> perhaps co-opt the LED for some other indicator purpose.  A WLAN NIC
> traffic indicator was suggested previously.
> 
> Does anyone know of any example where it could possibly make sense to
> repurpose the LED of a discrete external camera or capture device for
> some indication other than the camera/capture function?  (I consider
> both extinguishing the LED for lighting purposes, and manipulating the
> LED for the purpose of deception of the actual state of the
> camera/capture function, still related to the camera function.)

What about using the flash LED on a cellphone as a torch?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03 11:50                                         ` Laurent Pinchart
@ 2011-03-03 13:56                                           ` Andy Walls
  2011-03-03 14:04                                             ` Laurent Pinchart
  2011-03-03 14:29                                             ` Sakari Ailus
  0 siblings, 2 replies; 50+ messages in thread
From: Andy Walls @ 2011-03-03 13:56 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Guennadi Liakhovetski, Hans Verkuil,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Thu, 2011-03-03 at 12:50 +0100, Laurent Pinchart wrote:
> Hi Andy,
> 
> On Thursday 03 March 2011 02:05:00 Andy Walls wrote:
> > On Wed, 2011-03-02 at 19:19 +0100, Hans Verkuil wrote:
> > > On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > > > ...Just occurred to me:
> > > > 
> > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > > > > 
> > > > > > These are not the features, that we _have_ to implement, these are
> > > > > > just the ones, that are related to the snapshot mode:
> > > > > > 
> > > > > > * flash strobe (provided, we do not want to control its timing from
> > > > > > 
> > > > > > 	generic controls, and leave that to "reasonable defaults" or to
> > > > > > 	private controls)
> > 
> > I consider a flash strobe to be an illuminator.  It modifies the subject
> > matter to be captured in the image.
> > 
> > > > Wouldn't it be a good idea to also export an LED (drivers/leds/) API
> > > > from our flash implementation? At least for applications like torch.
> > > > Downside: the LED API itself is not advanced enough for all our uses,
> > > > and exporting two interfaces to the same device is usually a bad idea.
> > > > Still, conceptually it seems to be a good fit.
> > > 
> > > I believe we discussed LEDs before (during a discussion about adding
> > > illuminator controls). I think the preference was to export LEDs as V4L
> > > controls.
> > 
> > That is certainly my preference, especially for LEDs integrated into
> > what the end user considers a discrete, consumer electronics device:
> > e.g. a USB connected webcam or microscope.
> > 
> > I cannot imagine a real use-case repurposing the flash strobe of a
> > camera for purposes other than subject matter illumination.  (Inducing
> > seizures?  An intrusion detection system's alarm that doesn't use the
> > camera to which the flash is connected?)
> > 
> > For laptop frame integrated webcam LEDs, I can understand the desire to
> > perhaps co-opt the LED for some other indicator purpose.  A WLAN NIC
> > traffic indicator was suggested previously.
> > 
> > Does anyone know of any example where it could possibly make sense to
> > repurpose the LED of a discrete external camera or capture device for
> > some indication other than the camera/capture function?  (I consider
> > both extinguishing the LED for lighting purposes, and manipulating the
> > LED for the purpose of deception of the actual state of the
> > camera/capture function, still related to the camera function.)

Hi Laurent,

> What about using the flash LED on a cellphone as a torch?

Yes, it could be the case that the flash LED is used as a flashlight
(American English for "torch").

I use the LCD screen myself:
http://socialnmobile.blogspot.com/2009/06/android-app-color-flashlight-flashlight.html 



With embedded platforms, like a mobile phone, are the LEDs really tied
to the camera device: controlled by the GPIOs from the camera bridge
chip or sensor chip?  Or are they more general purpose peripherals, not
necessarily tied to the camera?

On mobile phone platforms, I'm assuming the manufacturers are in a much
better position to take care of any discovery and association problems.
Given the controlled nature of the hardware and deployed OS, I assume
they use platform-configuration information and abstraction layers to
handle the problem for all applications.

Desktop and laptop machines normally don't have such vendor support.
Nor is the hardware configuration fixed as it is on a mobile phone.


Now to go way beyond your answer:

Since it is relatively easy to add an LED interface to a driver,

	http://linuxtv.org/hg/~awalls/qx3/

we could just let V4L2 devices that live on embedded platforms
implement the LED API, while V4L2 devices that are computer peripherals
implement the V4L2 control API.

IMO the answer need not be a choice between the V4L2 API or the LED API.
Why not either or both, given the two domains of embedded vs. computer
peripheral?


My prototype changes for the QX3 microscope implemented both the V4L2
Control API and the LED API for the illuminators.

Like all sysfs interfaces, the names I chose for the LED API sysfs nodes
were mostly arbitrary, but followed the guidelines in the LED API
document.  They showed up in sysfs like this:


        /sys/class/leds/video0:white:illuminator0

        /sys/class/leds/video0:white:illuminator0/device/video4linux/video0

        /sys/class/video4linux/video0/device/leds/video0:white:illuminator0
     
	/sys/bus/pci/devices/0000:00:12.0/usb3/3-2/3-2:1.0/leds/video0:white:illuminator0

        /sys/bus/pci/devices/0000:00:12.0/usb3/3-2/3-2:1.0/video4linux/video0

and similar results for video0:white:illuminator1.

I don't know who is going to write applications that hunt those down,
just to figure out if the video device even has an LED, what type of LED
it is, or how to set the timing parameters for a flash LED.

Given that the driver can arbitrarily choose the name of the LEDs in
sysfs, and can also provide additional, driver-specific, custom control
nodes in sysfs, I really think the LED API is a dead-end for desktop
video and camera applications.

Regards,
Andy


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03 13:56                                           ` Andy Walls
@ 2011-03-03 14:04                                             ` Laurent Pinchart
  2011-03-03 14:55                                               ` Andy Walls
  2011-03-03 14:29                                             ` Sakari Ailus
  1 sibling, 1 reply; 50+ messages in thread
From: Laurent Pinchart @ 2011-03-03 14:04 UTC (permalink / raw)
  To: Andy Walls
  Cc: Hans Verkuil, Guennadi Liakhovetski, Hans Verkuil,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

Hi Andy,

On Thursday 03 March 2011 14:56:25 Andy Walls wrote:
> On Thu, 2011-03-03 at 12:50 +0100, Laurent Pinchart wrote:
> > On Thursday 03 March 2011 02:05:00 Andy Walls wrote:
> > > On Wed, 2011-03-02 at 19:19 +0100, Hans Verkuil wrote:
> > > > On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > > > > ...Just occurred to me:
> > > > > 
> > > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > > > > > 
> > > > > > > These are not the features, that we _have_ to implement, these
> > > > > > > are just the ones, that are related to the snapshot mode:
> > > > > > > 
> > > > > > > * flash strobe (provided, we do not want to control its timing
> > > > > > > from
> > > > > > > 
> > > > > > > 	generic controls, and leave that to "reasonable defaults" or
> > > > > > > 	to private controls)
> > > 
> > > I consider a flash strobe to be an illuminator.  It modifies the subject
> > > matter to be captured in the image.
> > > 
> > > > > Wouldn't it be a good idea to also export an LED (drivers/leds/)
> > > > > API from our flash implementation? At least for applications like
> > > > > torch. Downside: the LED API itself is not advanced enough for all
> > > > > our uses, and exporting two interfaces to the same device is
> > > > > usually a bad idea. Still, conceptually it seems to be a good fit.
> > > > 
> > > > I believe we discussed LEDs before (during a discussion about adding
> > > > illuminator controls). I think the preference was to export LEDs as
> > > > V4L controls.
> > > 
> > > That is certainly my preference, especially for LEDs integrated into
> > > what the end user considers a discrete, consumer electronics device:
> > > e.g. a USB connected webcam or microscope.
> > > 
> > > I cannot imagine a real use-case repurposing the flash strobe of a
> > > camera for purposes other than subject matter illumination.  (Inducing
> > > seizures?  An intrusion detection system's alarm that doesn't use the
> > > camera to which the flash is connected?)
> > > 
> > > For laptop frame integrated webcam LEDs, I can understand the desire to
> > > perhaps co-opt the LED for some other indicator purpose.  A WLAN NIC
> > > traffic indicator was suggested previously.
> > > 
> > > Does anyone know of any example where it could possibly make sense to
> > > repurpose the LED of a discrete external camera or capture device for
> > > some indication other than the camera/capture function?  (I consider
> > > both extinguishing the LED for lighting purposes, and manipulating the
> > > LED for the purpose of deception of the actual state of the
> > > camera/capture function, still related to the camera function.)
> 
> Hi Laurent,
> 
> > What about using the flash LED on a cellphone as a torch?
> 
> Yes, it could be the case that the flash LED is used as a flashlight
> (American English for "torch").
> 
> I use the LCD screen myself:
> http://socialnmobile.blogspot.com/2009/06/android-app-color-flashlight-flashlight.html
> 
> 
> 
> With embedded platforms, like a mobile phone, are the LEDs really tied
> to the camera device: controlled by the GPIOs from the camera bridge
> chip or sensor chip?  Or are they more general purpose peripherals, not
> necessarily tied to the camera?

On mobile phones the flash is usually controlled by a dedicated flash 
controller chip, typically accessed through I2C.
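
A flash controller like that maps naturally onto an I2C subdev driver; a
bare skeleton (the chip and function names are made up) could look like:

	#include <linux/i2c.h>
	#include <linux/slab.h>
	#include <media/v4l2-device.h>

	static const struct v4l2_subdev_ops my_flash_ops = {
		/* flash control operations would go here */
	};

	static int my_flash_probe(struct i2c_client *client,
				  const struct i2c_device_id *id)
	{
		struct v4l2_subdev *sd = kzalloc(sizeof(*sd), GFP_KERNEL);

		if (!sd)
			return -ENOMEM;

		v4l2_i2c_subdev_init(sd, client, &my_flash_ops);
		return 0;
	}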

> On mobile phone platforms, I'm assuming the manufacturers are in a much
> better position to take care of any discovery and association problems.
> Given the controlled nature of the hardware and deployed OS, I assume
> they use platform-configuration information and abstraction layers to
> handle the problem for all applications.
> 
> Desktop and laptop machines normally don't have such vendor support.
> Nor is the hardware configuration fixed as it is on a mobile phone.
> 
> 
> Now to go way beyond your answer:
> 
> Since it is relatively easy to add an LED interface to a driver,
> 
> 	http://linuxtv.org/hg/~awalls/qx3/
> 
> we could just let V4L2 devices that can be on embedded platform
> implement the LED API, while V4L2 devices that are computer peripherals
> implement the V4L2 control API.
> 
> IMO the answer need not be a choice between the V4L2 API or the LED API.
> Why not either or both, given the two domains of embedded vs. computer
> peripheral?

The LED API is too limited. We need to program flash time, pre-flash time, 
current limits, report overheat/overcurrent events, ... See 
http://www.analog.com/static/imported-files/data_sheets/ADP1653.pdf for an 
example of the features found in LED flash controllers.
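
To make the gap concrete, the kind of control set such a chip needs might
look like this (the IDs below are purely illustrative sketches, not
existing V4L2_CID_* definitions):

	/* sketch of a possible flash control class */
	#define V4L2_CID_FLASH_BASE		(V4L2_CTRL_CLASS_CAMERA | 0x2000)
	#define V4L2_CID_FLASH_STROBE		(V4L2_CID_FLASH_BASE + 0) /* fire */
	#define V4L2_CID_FLASH_TIMEOUT		(V4L2_CID_FLASH_BASE + 1) /* flash time, us */
	#define V4L2_CID_FLASH_INTENSITY	(V4L2_CID_FLASH_BASE + 2) /* current, mA */
	#define V4L2_CID_FLASH_TORCH_INTENSITY	(V4L2_CID_FLASH_BASE + 3) /* torch current, mA */
	#define V4L2_CID_FLASH_FAULT		(V4L2_CID_FLASH_BASE + 4) /* overheat, overcurrent, ... */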

> My prototype changes for the QX3 microscope implemented both the V4L2
> Control API and the LED API for the illuminators.
> 
> Like all sysfs interfaces, the names I chose for the LED API sysfs nodes
> were mostly arbitrary, but followed the guidelines in the LED API
> document.  They showed up in sysfs like this:
> 
> 
>         /sys/class/leds/video0:white:illuminator0
> 
>         /sys/class/leds/video0:white:illuminator0/device/video4linux/video0
> 
>         /sys/class/video4linux/video0/device/leds/video0:white:illuminator0
> 
> 	/sys/bus/pci/devices/0000:00:12.0/usb3/3-2/3-2:1.0/leds/video0:white:illuminator0
> 
>         /sys/bus/pci/devices/0000:00:12.0/usb3/3-2/3-2:1.0/video4linux/video0
> 
> and similar results for video0:white:illuminator1.
> 
> I don't know who is going to write applications that hunt those down,
> just to figure out if the video device even has an LED, what type of LED
> it is, or how to set the timing parameters for a flash LED.
> 
> Given that the driver can arbitrarily choose the name of the LEDs in
> sysfs; and also provide additional, driver-specific, custom control
> nodes is sysfs; I really think the LED API is a dead-end for desktop
> video and camera applications.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03 13:56                                           ` Andy Walls
  2011-03-03 14:04                                             ` Laurent Pinchart
@ 2011-03-03 14:29                                             ` Sakari Ailus
  1 sibling, 0 replies; 50+ messages in thread
From: Sakari Ailus @ 2011-03-03 14:29 UTC (permalink / raw)
  To: Andy Walls
  Cc: Laurent Pinchart, Hans Verkuil, Guennadi Liakhovetski,
	Hans Verkuil, Sylwester Nawrocki, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Thu, Mar 03, 2011 at 08:56:25AM -0500, Andy Walls wrote:
> With embedded platforms, like a mobile phone, are the LEDs really tied
> to the camera device: controlled by the GPIOs from the camera bridge
> chip or sensor chip?  Or are they more general purpose peripherals, not
> necessarily tied to the camera?

Besides what Laurent noted, there are cameras which have a hardware
strobe signal for the flash. The sensor triggers the flash pulse, which
the host has programmed into the flash chip beforehand. So there is an
actual hardware dependency between the sensor and the flash.

Also, the flash is physically associated with the sensor. This information
would be (eventually) enumerable using the MC API.

-- 
Sakari Ailus
sakari dot ailus at iki dot fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-03-03 14:04                                             ` Laurent Pinchart
@ 2011-03-03 14:55                                               ` Andy Walls
  0 siblings, 0 replies; 50+ messages in thread
From: Andy Walls @ 2011-03-03 14:55 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Guennadi Liakhovetski, Hans Verkuil,
	Sylwester Nawrocki, Sakari Ailus, Kim HeungJun,
	Linux Media Mailing List, Stanimir Varbanov

On Thu, 2011-03-03 at 15:04 +0100, Laurent Pinchart wrote:


> The LED API is too limited. We need to program flash time, pre-flash time, 
> current limits, report overheat/overcurrent events, ... See 
> http://www.analog.com/static/imported-files/data_sheets/ADP1653.pdf for an 
> example of the features found in LED flash controllers.


OK.  Thanks.

Since I have unwittingly managed to get myself into the position of
Devil's Advocate for the LED API, I should mention that a driver can add
whatever custom sysfs nodes it needs to the LED's sysfs directory, for
various parameter settings and controls.

Using those driver-custom sysfs nodes can lead to significant variation
in how applications must control LEDs exported via the LED API by
various drivers.  If *all* drivers in the kernel providing an LED API
don't have a common convention for controlling flash LEDs, then you
have a problem analogous to the problem with driver-private V4L2
controls.

To clarify my position, I don't like the LED API being used in
conjunction with video capture applications.  The LED API is too
open-ended and more suited for twiddling LEDs with scripts, than for use
with general video capture or camera applications.

Regards,
Andy


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC] snapshot mode, flash capabilities and control
  2011-02-25 20:56               ` Guennadi Liakhovetski
  2011-02-28 11:57                 ` Guennadi Liakhovetski
@ 2011-03-06  9:53                 ` Sakari Ailus
  1 sibling, 0 replies; 50+ messages in thread
From: Sakari Ailus @ 2011-03-06  9:53 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, Kim HeungJun, Hans Verkuil,
	Linux Media Mailing List, Stanimir Varbanov

Guennadi Liakhovetski wrote:
> On Fri, 25 Feb 2011, Sakari Ailus wrote:
>
>> On Fri, Feb 25, 2011 at 06:08:07PM +0100, Guennadi Liakhovetski wrote:
>>> Hi Sakari
>>
>> Hi Guennadi,
>>
>>> On Fri, 25 Feb 2011, Sakari Ailus wrote:
>>>> I agree with that. Flash synchronisation is just one of the many parameters
>>>> that would benefit from frame level synchronisation. Exposure time, gain
>>>> etc. are also such. The sensors provide varying level of hardware support
>>>> for all these.
>>>
>>> Well, that's true, but... From what I've seen so far, many sensors
>>> synchronise such sensitive configuration changes with their frame readout
>>> automatically, i.e., you configure some new parameter in a sensor
>>> register, but it will only become valid with the next frame. On other
>>> sensors you can issue a "hold" command, perform any needed changed, then
>>> issue a "commit" and all your changes will be applied atomically.
>>
>> At that level it's automatic, but what I meant is synchronising the
>> application of the settings to the exposure start of a given frame. This is
>> very similar to flash synchronisation.
>
> Right... But I don't think we can do more, than what the sensor supports
> about this, can we? Only stop the sensor, apply parameters, start the
> sensor...

It's possible to calculate on which frame the parameters should take 
effect and apply them to the sensor at the right time. But this is 
highly timing dependent, and I'm not certain it's best to implement this 
in the driver at all when there is no hardware support.
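
A rough sketch of that idea on the driver side (all names here are 
hypothetical, and the timing caveats above still apply):

	struct frame_settings {
		struct list_head list;
		u32 frame;		/* frame sequence the settings target */
		u32 exposure;
		u32 gain;
	};

	/* called from the frame-completion handler */
	static void apply_pending(struct my_sensor *s, u32 seq)
	{
		struct frame_settings *fs, *tmp;

		list_for_each_entry_safe(fs, tmp, &s->pending, list) {
			/* the sensor latches registers at frame start, so
			 * program one frame ahead of the target */
			if (fs->frame <= seq + 1) {
				sensor_write(s, REG_EXPOSURE, fs->exposure);
				sensor_write(s, REG_GAIN, fs->gain);
				list_del(&fs->list);
				kfree(fs);
			}
		}
	}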

>>> Also, we already _can_ configure gain, exposure and many other parameters,
>>> but we have no way to use sensor snapshot and flash-synchronisation
>>> capabilities.
>>
>> There is a way to configure them but the interface doesn't allow to specify
>> when they should take effect.
>
> ??? There are V4L ioctl()s to control the flash?...

No flash, but gain, exposure and many others. There is just no way to 
tell when they should take effect.

>> FCam type applications requires this sort of functionality.
>>
>> <URL:http://fcam.garage.maemo.org/>
>>
>>> What we could also do, we could add an optional callback to subdev (core?)
>>> operations, which, if activated, the host would call on each frame
>>> completion.
>>
>> It's not quite that simple. The exposure of the next frame has started long
>> time before that. This requires much more thought probably --- in the case
>> of lack of hardware support, when the parameters need to be actually given
>> to the sensor depend somewhat on sensors, I suppose.
>
> Yes, that's right. I seem to remember, there was a case, for which such a
> callback would have been useful... Don't remember what that was though.
>
>>>> Flash and indicator power setting can be included to the list of controls
>>>> above.
>>>
>>> As I replied to Laurent, not sure we need to control the power indicator
>>> from V4L2, unless there are sensors, that have support for that.
>>
>> Um, flash controllers, that is. Yes, there are; the ADP1653, which is just
>> one example.
>
> No, not flash controllers, just an indicator, that a capture is running
> (normally a small red LED).

That LED is often controlled by the flash controller. And its power can 
be adjusted, too...

>>>> The power management of the camera is
>>>> preferrably optimised for speed so that the camera related devices need not
>>>> to be power cycled when using it. If the flash interface is available on a
>>>> subdev separately the flash can also be easily powered separately without
>>>> making this a special case --- the rest of the camera related devices (ISP,
>>>> lens and sensor) should stay powered off.
>>>>
>>>>> configure the sensor to react on an external trigger provided by the flash
>>>>> controller is needed, and that could be a control on the flash sub-device.
>>>>> What we would probably miss is a way to issue a STREAMON with a number of
>>>>> frames to capture. A new ioctl is probably needed there. Maybe that would be
>>>>> an opportunity to create a new stream-control ioctl that could replace
>>>>> STREAMON and STREAMOFF in the long term (we could extend the subdev s_stream
>>>>> operation, and easily map STREAMON and STREAMOFF to the new ioctl in
>>>>> video_ioctl2 internally).
>>>>
>>>> How would this be different from queueing n frames (in total; count
>>>> dequeueing, too) and issuing streamon? --- Except that when the last frame
>>>> is processed the pipeline could be stopped already before issuing STREAMOFF.
>>>> That does indeed have some benefits. Something else?
>>>
>>> Well, you usually see in your host driver, that the videobuffer queue is
>>> empty (no more free buffers are available), so, you stop streaming
>>> immediately too.
>>
>> That's right. Disabling streaming does save some power but even more is
>> saved when switching the devices off completely. This is important in
>> embedded systems that are often battery powered.
>>
>> The hardware could be switched off when no streaming takes place. However,
>> this introduces extra delays to power-up at times they are unwanted --- for
>> example, when switching from viewfinder to still capture.
>>
>> The alternative to this is to add a timer to the driver: power off if no
>> streaming has taken place for n seconds, for example. I would consider this
>> much inferior to just providing a simple subdev for the flash chip and not
>> involve the ISP at all.
>
> There's an .s_power() method already in subdev core-ops.

This op only exists to tell the subdev driver to go to a given power 
state. I don't see a connection to the above problem. :-)

Regards,

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 50+ messages in thread

Thread overview: 50+ messages
2011-02-24 12:18 [RFC] snapshot mode, flash capabilities and control Guennadi Liakhovetski
2011-02-24 12:40 ` Hans Verkuil
2011-02-24 16:07   ` Guennadi Liakhovetski
2011-02-24 17:57     ` Kim HeungJun
2011-02-25 10:05       ` Laurent Pinchart
2011-02-25 13:53         ` Sakari Ailus
2011-02-25 17:08           ` Guennadi Liakhovetski
2011-02-25 18:55             ` Sakari Ailus
2011-02-25 20:56               ` Guennadi Liakhovetski
2011-02-28 11:57                 ` Guennadi Liakhovetski
2011-03-06  9:53                 ` Sakari Ailus
2011-02-26 12:31             ` Hans Verkuil
2011-02-26 13:03               ` Guennadi Liakhovetski
2011-02-26 13:39                 ` Sylwester Nawrocki
2011-02-26 13:56                   ` Hans Verkuil
2011-02-26 15:42                     ` Sylwester Nawrocki
2011-02-28 10:28                     ` Laurent Pinchart
2011-02-28 10:40                       ` Hans Verkuil
2011-02-28 10:47                         ` Laurent Pinchart
2011-02-28 11:02                         ` Guennadi Liakhovetski
2011-02-28 11:07                           ` Laurent Pinchart
2011-02-28 11:17                             ` Hans Verkuil
2011-02-28 11:19                               ` Laurent Pinchart
2011-02-28 11:54                               ` Guennadi Liakhovetski
2011-02-28 22:41                                 ` Guennadi Liakhovetski
2011-03-02 17:51                                   ` Guennadi Liakhovetski
2011-03-02 18:19                                     ` Hans Verkuil
2011-03-03  1:05                                       ` Andy Walls
2011-03-03 11:50                                         ` Laurent Pinchart
2011-03-03 13:56                                           ` Andy Walls
2011-03-03 14:04                                             ` Laurent Pinchart
2011-03-03 14:55                                               ` Andy Walls
2011-03-03 14:29                                             ` Sakari Ailus
2011-03-03  8:02                                       ` Guennadi Liakhovetski
2011-03-03  9:25                                         ` Hans Verkuil
2011-02-28 13:33                               ` Andy Walls
2011-02-28 13:37                                 ` Andy Walls
2011-02-28 11:37                             ` Guennadi Liakhovetski
2011-02-28 12:03                               ` Sakari Ailus
2011-02-28 12:44                                 ` Guennadi Liakhovetski
2011-02-28 15:07                                   ` Sakari Ailus
2011-02-28 10:24                 ` Laurent Pinchart
2011-02-25 16:58         ` Guennadi Liakhovetski
2011-02-25 18:49           ` Sakari Ailus
2011-02-25 20:33             ` Guennadi Liakhovetski
2011-02-27 21:00               ` Sakari Ailus
2011-02-28 11:20                 ` Guennadi Liakhovetski
2011-02-28 13:44                   ` Sakari Ailus
2011-03-03  7:09 ` Kim, HeungJun
2011-03-03  7:30   ` Guennadi Liakhovetski
