* Media controller: sysfs vs ioctl
@ 2009-09-11 22:21 Hans Verkuil
  2009-09-11 23:01 ` Andy Walls
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Hans Verkuil @ 2009-09-11 22:21 UTC (permalink / raw)
  To: linux-media

Hi all,

I've started this as a new thread to prevent polluting the discussions of the
media controller as a concept.

First of all, I have no doubt that everything that you can do with an ioctl,
you can also do with sysfs and vice versa. That's not the problem here.

The problem is deciding which approach is the best.

What is sysfs? (taken from http://lwn.net/Articles/31185/)

"Sysfs is a virtual filesystem which provides a userspace-visible representation
of the device model. The device model and sysfs are sometimes confused with each
other, but they are distinct entities. The device model functions just fine
without sysfs (but the reverse is not true)."

Currently both the v4l drivers and the device nodes are represented in sysfs.
This is handled automatically by the kernel.

Sub-devices are not represented in sysfs since they are not based on struct
device. They are v4l-internal structures. Actually, if the subdev represents
an i2c device, then that i2c device will be present in sysfs, but not all
subdevs are i2c devices.

Should we make all sub-devices based on struct device? Currently this is not
required. Doing this would probably mean registering a virtual bus, then
attaching the sub-device to that. Of course, this only applies to sub-devices
that represent something that is not an i2c device (e.g. something internal
to the media board like a resizer, or something connected to GPIO pins).

If we decide to go with sysfs, then we have to do this. That part shouldn't
be too difficult to implement. And even if we do not go with sysfs, this might
be interesting to do eventually.

The media controller topology as I see it should contain the device nodes
since the application has to know what device node to open to do the streaming.
It should also contain the sub-devices so the application can control them.
Is this enough? I think that eventually we also want to show the physical
connectors. I left them out (mostly) from the initial media controller proposal,
but I suspect that we want those as well eventually. But connectors are
definitely not devices. In that respect the entity concept of the media
controller is more abstract than sysfs.

However, for now I think we can safely assume that sub-devices can be made
visible in sysfs.

The next point is how to access sub-devices.

One approach is to create sub-device attributes that can be read and written.
The other approach is to use a media controller device node and use ioctls.

A third approach would be to create device nodes for each sub-device. This was
the original idea I had when they were still called 'media processors'. I no
longer think that is a good idea. The problem with that is that it will create
even more device nodes (dozens in some cases) in addition to the many streaming
device nodes that we already have. We have no choice in the case of streaming,
but very few sub-devices (if any) will need to do read/write or streaming.
Should this be needed, then it is always possible to create a device node just
for that and link it as an output to the sub-device. In 99% of the cases we
just need to be able to call ioctl on the sub-device. And we can do that
through a media controller device just as well.

What sort of interaction do we need with sub-devices?

1) Controls. Most sub-devices will have controls. Some of these controls are
the same as are exposed through a v4l device node (brightness, contrast, etc),
but others are more advanced controls, e.g. fine-grained gain controls. We have
seen lately in several driver submissions that there are a lot of controls
that are hardware specific and that we do not want to expose to the average
xawtv-type application. But advanced apps still want to be able to use them.

Making such controls private to the sub-device is a good solution. What that
means is that they do not appear when you do QUERYCTRL on the video device node,
but you can set/get them when dealing directly with the sub-device.

If we do this with an ioctl then we can just reuse the existing ioctls for
dealing with controls. As you know I am working on a control framework that
will move most of the implementation details to the core. It is easy to make
this available as well to sub-devices. So calling QUERYCTRL through a media
controller will also enumerate all the 'private' controls. In addition, with
VIDIOC_S/G_EXT_CTRLS you can set and get multiple controls atomically.

Such a control framework can also expose those controls as sysfs attributes.
This works fine except for setting multiple controls atomically. E.g. for
motor control you will definitely want this.

You can do this I guess by implementing an 'all' attribute that will output
all the control values when queried and you can write things like:

motor_x=100
motor_y=200

to the 'all' attribute.

Yuck.

Oh, and I don't see how you can implement the information that v4l2_queryctrl
and v4l2_querymenu can return in sysfs. Except by creating even more
attributes.

Note that I would like to see controls in sysfs. They can be very handy for
scripting purposes. But that API is simply not powerful enough for use in
applications.


2) Private ioctls. Basically a way to set and get data that is hardware
specific from the sub-device. This can be anything from statistics, histogram
information, setting resizer coefficients, configuring colorspace converters,
whatever. Furthermore, just like the regular V4L2 API it has to be designed
with future additions in mind (i.e. it should use something like reserved
fields).

In my opinion ioctls are ideal for this since they are very flexible and easy
to use and efficient. Especially because you can combine multiple fields into
one unit. So getting histogram data through an ioctl will also provide the
timestamp. And you can both write and read data in one atomic system call.
Yes, you can combine data in sysfs reads as well, although an IORW ioctl is
hard to model in sysfs. But whereas an ioctl can just copy a struct from kernel
to userspace, for sysfs you have to go through a painful process of parsing and
formatting.

And not all this data is small. Histogram data can be 10s of kilobytes. 'cat'
will typically only read 4 kB at a time, so you will probably have to keep
track of how much is read in the read implementation of the attribute. Or does
the kernel do that for you?

And how do you make sysfs attribute data flexible enough to be able to add new
fields in the future? Of course, having each bit of data in a separate
attribute is one way of doing it, but then you lose atomicity. And I don't
think you can just add new fields at the end of the data you read from an
attribute: that would really mess up the parsing.

Finally, some of these ioctls might have to be called very frequently.
Statistics and histogram information is generally obtained for every captured
frame. Having to format and parse all that information at that frequency will
become a real performance hog. I'm sorry, but sysfs is simply not designed for
that.


3) Setting up formatting for inputs and outputs. This is related to open issue
#3 in the media controller RFC. The current V4L2 API does not provide precise
enough control when dealing with e.g. a sensor able to capture certain
resolutions combined with a scaler that has restrictions of its own. Depending
on the final resolution it may be better to upscale or downscale from a
particular sensor resolution (this is an actual problem with omap3). You want
to be able to set this up by commanding the sensor and resizer sub-devices and
explicitly configuring them.

I need to think more about this. It is a complex issue that needs to be broken
down into more manageable pieces first.


The final part is how to represent the topology. Device nodes and sub-devices
can be exposed to sysfs as discussed earlier. Representing both the possible
and current links between them is a lot harder. This is especially true for
non-v4l device nodes since we cannot just add attributes there. I think this
can be done though by providing that information as attributes of an mc sysfs
node. That way it remains under control of the v4l core.

Per entity you can make attributes like this:

inputs: number of inputs to the entity
outputs: number of outputs of the entity

input0_possible_sources: give a list of possible sources (each source in the
form entity:output)
input0_current_sources: a bitmask (or perhaps a string like 00010001) that
returns the current selected sources or can be set to select the sources.

Ditto for all inputs and outputs.

I agree, this is perfectly possible. You are going to generate a shitload of
attributes and you need a userspace library to organize all this information
in a sensible set of enumeration functions.

Alternatively, you can just implement enumeration ioctls directly. No need
to do a zillion open/read/close syscalls just to get hold of all this
information. We might still add sysfs support later, in addition to the ioctls,
but I see no advantage whatsoever in doing this in sysfs.

I also really, really do not want to bloat the core with all the formatting
and parsing that is needed to make this work. It's all unnecessarily complex
and error-prone.

I see sysfs as a means of exposing kernel structures to the outside world.
Should we also start exposing driver internals in this way? In some cases
we can do that as an extra service (e.g. exposing sub-devices and controls),
but *not* as the primary API.

Of course, one more reason is that this forces applications to use two wildly
different APIs to control a media board: ioctls and sysfs. That's particularly
bad.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-11 22:21 Media controller: sysfs vs ioctl Hans Verkuil
@ 2009-09-11 23:01 ` Andy Walls
  2009-09-12  1:37   ` hermann pitton
  2009-09-12 13:31 ` Mauro Carvalho Chehab
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Andy Walls @ 2009-09-11 23:01 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

On Sat, 2009-09-12 at 00:21 +0200, Hans Verkuil wrote:
> Hi all,
> 
> I've started this as a new thread to prevent polluting the discussions of the
> media controller as a concept.
> 
> First of all, I have no doubt that everything that you can do with an ioctl,
> you can also do with sysfs and vice versa. That's not the problem here.
> 
> The problem is deciding which approach is the best.

I've wanted to reply earlier but couldn't find enough time to do proper
research. But since you asked this particular question, I happen to have
something which may help.

I would suggest evaluating representative proposals by applying
Baldwin and Clark's Net Options Value (NOV) metric to a simple system
representation.  The system to which to apply the metric would comprise:

	- a representative user space app
	- a representative v4l-dvb driver
	- an API concept/suggestion/proposal

I think this metric is appropriate to apply, because the NOV is a way to
assign value to implementing options (i.e. options in modular systems).
An API itself is not a modular system and is hard to evaluate in isolation,
so it needs to be evaluated in the context of the options it provides to
the system designers and maintainers.

The NOV boils down to simple concepts:

1. a system design has a total value that is its present value plus the
value of its options that can be exploited in the future.

2. an option represents a potential value that may provide a return in
the future

3. an option has only a potential value (in the present)

4. an option only yields a return if that option may be exploited in the
future.  The probability that the option may be exploited needs to be
taken into account.

5. an option has costs associated with exploiting it (in the future)

I'm not advocating a rigorous computation of the metric for the proposals,
but rather a qualitative look at them, while still using the precise
definition of the metric (sorry, I don't have a URL handy...).


I will note that I think I am in agreement with Hans on sysfs.  I think
the cost of trying to exploit any option provided through sysfs in a
userspace application will nullify any technical benefit of said
option to the application.

Let's say we want to convert an existing app to a "Media Controller
aware" version of that app.  There is a cost to do that.  Will the API
proposal make exploiting some options have a large cost?  Do some of the
options of the API have a low probability of being exploited?  Do some
of the options of the API provide very low technical benefit?  What does
the API proposal do to the total value of the system (e.g. an API with
no flexibility fixes the total value close to the present value and
there is no value to be realized from exploiting options in the future).


OK, I hope I've communicated what I mean.  I feel like all that may be
less than clear.


These ideas have come from a confluence of research I've been doing at
work, and V4L-DVB work (thinking about Multiproto vs. DVB v5, and the
v4l2_subdev IR ops, etc.).


Regards,
Andy



* Re: Media controller: sysfs vs ioctl
  2009-09-11 23:01 ` Andy Walls
@ 2009-09-12  1:37   ` hermann pitton
  0 siblings, 0 replies; 25+ messages in thread
From: hermann pitton @ 2009-09-12  1:37 UTC (permalink / raw)
  To: Andy Walls; +Cc: Hans Verkuil, linux-media

Hi,

On Friday, 11 Sep 2009 at 19:01 -0400, Andy Walls wrote:
> On Sat, 2009-09-12 at 00:21 +0200, Hans Verkuil wrote:
> > Hi all,
> > 
> > I've started this as a new thread to prevent polluting the discussions of the
> > media controller as a concept.
> > 
> > First of all, I have no doubt that everything that you can do with an ioctl,
> > you can also do with sysfs and vice versa. That's not the problem here.
> > 
> > The problem is deciding which approach is the best.
> 
> I've wanted to reply earlier but I cannot collect enough time to do
> proper research, but since you asked this particular question, I happen
> to have something which may help.
> 
> I would suggest evaluating a representative proposals by applying
> Baldwin and Clark's Net Options Value (NOV) metric to a simple system
> representation.  The system to which to apply the metric would comprise:
> 
> 	- a representative user space app
> 	- a representative v4l-dvb driver
> 	- an API concept/suggestion/proposal
> 
> I think this metric is appropriate to apply, because the NOV is a way to
> assign value to implementing options (i.e. options in modular systems).
> An API itself is not a modular system and hard to evaluate in isolation,
> so it needs to be evaluated in the context of the options it provies to
> the system designers and maintainers.
> 
> The NOV boils to simple concepts:
> 
> 1. a system design has a total value that is its present value plus the
> value of it's options that can be exploited in the future.
> 
> 2. an option represents a potential value that may provide a return in
> the future
> 
> 3. an option has only a potential value (in the present)
> 
> 4. an option only yields a return if that option may be exploited in the
> future.  The probability that the option may be exploited needs to be
> taken into account.
> 
> 5. an option has costs associated with exploiting it (in the future)
> 
> I'm not advocating a rigorous computation of the metric for the
> proposals, but more a qualitative look at the proposals but still using
> the precise definition of the metric (sorry I don't have a URL
> handy...).
> 
> 
> I will note that I think am in agreement with Hans on sysfs.  I think
> the cost of trying to exploit any option provided through sysfs in a
> userspace apppllication will nullify any technical benefit of said
> option to the application.
> 
> Lets say we want to convert an existing app to a "Media Controller
> aware" version of that app.  There is a cost to do that.  Will the API
> proposal make exploting some options have a large cost?  Do some of the
> options of the API have a low probability of being exploited?  Do some
> of the options of the API provide very low technical benefit?  What does
> the API proposal do to the total value of the system (e.g. an API with
> no flexibility fixes the total value close to the present value and
> there is no value to be realized from exploiting options in the future).
> 
> 
> OK, I hope I've communicated what I mean.  I feel like that all may be
> less than clear.
> 
> 
> These ideas have come from a confluence of research I've been doing at
> work, and V4L-DVB work (thinking about Multiproto vs. DVB v5, and the
> v4l2_subdev IR ops, etc.).
> 
> 
> Regards,
> Andy
> 

just as a side note.

We are not forced to accept any hardware design under all conditions
anymore.

If there are reasons, we can also tell them to go to Bill and Steve and
to pay their fees per device.

Cheers,
Hermann




* Re: Media controller: sysfs vs ioctl
  2009-09-11 22:21 Media controller: sysfs vs ioctl Hans Verkuil
  2009-09-11 23:01 ` Andy Walls
@ 2009-09-12 13:31 ` Mauro Carvalho Chehab
  2009-09-12 13:41   ` Devin Heitmueller
  2009-09-13  6:13 ` Nathaniel Kim
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-12 13:31 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

On Sat, 12 Sep 2009 00:21:48 +0200
Hans Verkuil <hverkuil@xs4all.nl> wrote:

> Hi all,
> 
> I've started this as a new thread to prevent polluting the discussions of the
> media controller as a concept.
> 
> First of all, I have no doubt that everything that you can do with an ioctl,
> you can also do with sysfs and vice versa. That's not the problem here.
> 
> The problem is deciding which approach is the best.

True. Choosing the better approach is very important since, once merged, we'll
need to stick with it for a very long time.

I saw your proposal of an ioctl-only implementation for the media controller.
It is important to have a sysfs implementation as well, to compare. I can do it.

However, we are currently in the middle of a merge window, and this one will
require even more time than usual, since we have two series of patches for
soc_camera and for DaVinci/OMAP that depend on the ARM and OMAP architecture merges.

Also, there are some pending merges that require some time to analyze, like
the ISDB-T/ISDB-S patches and API changes that were proposed for 2.6.32, which
require analyzing both the Japanese and Brazilian specs and doing some tests;
the tuner changes for better handling of the i2c gates; and the V4L and DVB
specs that we can now merge upstream, as both got converted to DocBook XML
4.1.2 (the same version used upstream).

So, during the next two weeks, we'll have enough fun just getting our patches
merged for 2.6.32. Unfortunately, I'm afraid we'll need to put these
discussions on hold until the end of the merge window and focus on merging
the patches we have for 2.6.32.


Cheers,
Mauro


* Re: Media controller: sysfs vs ioctl
  2009-09-12 13:31 ` Mauro Carvalho Chehab
@ 2009-09-12 13:41   ` Devin Heitmueller
  2009-09-12 14:45     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 25+ messages in thread
From: Devin Heitmueller @ 2009-09-12 13:41 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, linux-media

On Sat, Sep 12, 2009 at 9:31 AM, Mauro Carvalho Chehab
<mchehab@infradead.org> wrote:
> True. Choosing the better approach is very important since, once merged, we'll
> need to stick it for a very long time.
>
> I saw your proposal of a ioctl-only implementation for the media control. It is
> important to have a sysfs implementation also to compare. I can do it.
>
> However, we are currently in the middle of a merge window, and this one will
> require even more time than usual, since we have 2 series of patches for
> soc_camera and for DaVinci/OMAP that depends on arm and omap architecture merge.
>
> Also, there are some pending merges that requires some time to analyze, like
> the ISDB-T/ISDB-S patches and API changes that were proposed for 2.6.32, that
> requiring the analysis of both Japanese and Brazilian specs and do some
> tests, and the tuner changes for better handling the i2c gates, and the V4L and
> DVB specs that we can now merge upstream, as both got converted to DocBook XML
> 4.1.2 (the same version used upstream).
>
> So, during the next two weeks, we'll have enough fun to handle, in order to get
> our patches merged for 2.6.32. So, unfortunately, I'm afraid that we'll need to
> give a break on those discussions until the end of the merge window, focusing
> on merging the patches we have for 2.6.32.
>
>
> Cheers,
> Mauro

Mauro,

I respectfully disagree.  The original version of this RFC has been
pending for almost a year now.  Hans has written a prototype
implementation.  We should strive to get this locked down by the LPC
conference.

I think we all know that you are busy, but this conversation needs to
continue even if you personally do not have the cycles to give it your
full attention.

There is finally some real momentum behind this initiative, and the
lack of this functionality is crippling usability for many, many
users.  "Hi I a new user to tvtime.  I can see analog tv with tvtime,
but how do I make audio work?"

Let's finally put this issue to rest.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


* Re: Media controller: sysfs vs ioctl
  2009-09-12 13:41   ` Devin Heitmueller
@ 2009-09-12 14:45     ` Mauro Carvalho Chehab
  2009-09-12 15:12       ` Hans Verkuil
  0 siblings, 1 reply; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-12 14:45 UTC (permalink / raw)
  To: Devin Heitmueller; +Cc: Hans Verkuil, linux-media

On Sat, 12 Sep 2009 09:41:58 -0400
Devin Heitmueller <dheitmueller@kernellabs.com> wrote:

> I respectfully disagree.

Are you suggesting that we should not submit any patches upstream during this
merge window in order to discuss this? Sorry, but that is not fair to all the
developers that did their homework and submitted changes for 2.6.32.

>  The original version of this RFC has been
> pending for almost a year now. 

It has not been pending all that time. We agreed to first do the required V4L
framework changes, which were finally merged in 2.6.30 [1], before proceeding
with further discussions.

> Hans has written a prototype
> implementation.  We should strive to get this locked down by the LPC
> conference.

Why? Nothing stops us from discussing it there and better preparing a proposal,
but, considering all the noise we had after the DVB S2API last year, I don't
think we should ever again use a conference, where only some of us will be
present, to approve a proposal. It is the right forum for discussing and better
formulating the issues in order to prepare a proposal, but the decisions should
happen on the ML.

Hans took a year to prepare RFCv2, and we all know that he was working hard
on implementing what was discussed during the first RFC proposal during all
that time. This shows that this is a complex matter.

> I think we all know that you are busy, but this conversation needs to
> continue even if you personally do not have the cycles to give it your
> full attention.

It is not only me that has pending tasks; other developers are also focused
on merging their things. For example, Mkrufky already pointed out that he is
waiting for the merge of the first part of his series, since he needs to send a
complementary set of patches. I'm sure there are other developers that are
still finishing some work for the merge, or that may need to solve the usual
troubles that happen when patches go upstream via other trees and need to be
backported and tested.

> There is finally some real momentum behind this initiative, and the
> lack of this functionality is crippling usability for many, many
> users.  "Hi I a new user to tvtime.  I can see analog tv with tvtime,
> but how do I make audio work?"
> 
> Let's finally put this issue to rest.

Yes, let's do it for 2.6.33, but this discussion started too late for 2.6.32.

Cheers,
Mauro

[1] FYI, we had several regression fixes in 2.6.31 due to that - and there are
still some unsolved regressions related to them - basically i2c gate and radio
breakages still requiring fixes. Some are documented in the kernel regression
list and in bugzilla; others were just reported on the ML. As I said countless
times, we need to focus on fixing the regressions caused by those API changes
before doing another series of API changes. So, if people have spare time, I
suggest a task force to fix the remaining regressions, mostly in saa7134.


* Re: Media controller: sysfs vs ioctl
  2009-09-12 14:45     ` Mauro Carvalho Chehab
@ 2009-09-12 15:12       ` Hans Verkuil
  2009-09-12 15:54         ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 25+ messages in thread
From: Hans Verkuil @ 2009-09-12 15:12 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Devin Heitmueller, linux-media

On Saturday 12 September 2009 16:45:35 Mauro Carvalho Chehab wrote:
> On Sat, 12 Sep 2009 09:41:58 -0400
> Devin Heitmueller <dheitmueller@kernellabs.com> wrote:
> 
> > I respectfully disagree.
> 
> Are you suggesting that we should not submit any patches upstream during this
> merge window in order to discuss this? Sorry, but this is not right with all 
> the developers that did their homework and submitted changes for 2.6.32.
> 
> >  The original version of this RFC has been
> > pending for almost a year now. 
> 
> It was not pending since then. As there were all the framework changes needed,
> we've agreed on doing the V4L framework changes, that were finally merged at 2.6.30[1],
> that were required, before proceeding with further discussions.
> 
> > Hans has written a prototype
> > implementation.  We should strive to get this locked down by the LPC
> > conference.
> 
> Why? Nothing stops discussing it there and better prepare a proposal, but,
> considering all the noise we had after the DVB S2API last year, I don't think
> we should ever repeat using a conference, where only some of us will be there
> to approve a proposal. It is the right forum for discussing and better formulate
> the issues, in order to prepare a proposal, but the decisions should happen at
> the ML.

In that particular case you would have had a lot of noise no matter what you
did :-)

The final decisions will indeed be taken here, but the conference is an
excellent place to talk face-to-face and to see how well the current proposal
will fit actual media hardware.

I don't have complete access or knowledge of all the current and upcoming
media boards, but there are several TI and Nokia engineers present who can
help with that. It will also be interesting to see how and if the framework
can be extended to dvb.

> Hans took a year to prepare RFCv2, and we all know that he was hardly
> working on implementing what was discussed during the first RFC proposal during
> all those timeframe. This shows that this is a complex matter.

Not entirely true, I worked on the necessary building blocks for such a media
controller in the past year. There is a reason why it only took me 400-odd
lines to get the basic mc support in...

> > I think we all know that you are busy, but this conversation needs to
> > continue even if you personally do not have the cycles to give it your
> > full attention.
> 
> It is not only me that has pending tasks, but other developers are also focused
> on merging their things. For example, Mkrufky already pointed that he is
> waiting for the merge of the first part of his series, since he needs to send a
> complementary set of patches. I'm sure that there are other developers that
> are still finishing some working for the merge or that may need to solve the
> usual troubles that happens when some patches went upstream via other trees,
> needing to be backported and tested.
> 
> > There is finally some real momentum behind this initiative, and the
> > lack of this functionality is crippling usability for many, many
> > users.  "Hi I a new user to tvtime.  I can see analog tv with tvtime,
> > but how do I make audio work?"
> > 
> > Let's finally put this issue to rest.
> 
> Yes, let's do it for 2.6.33, but it this discussion started too late for 2.6.32.

I don't think anyone advocated getting anything merged for 2.6.32. Certainly
not me. I'm not even sure whether 2.6.33 is feasible: before we merge anything
I'd really like to see it implemented in e.g. omap3, uvcvideo and ivtv at the
least. The proof of the pudding is in the eating, and since this is meant to
cover a wide range of media boards we should have some idea if theory and
practice actually match.

Personally I think that the fact that I got an initial version implemented so
quickly is very promising.

I'm currently trying to get ivtv media-controller-aware. It's probably the
most complex driver when it comes to topology that I have access to, so that
would be a good test case.

> 
> Cheers,
> Mauro
> 
> [1] FYI, we had several regression fixes on 2.6.31 due to that - and there are
> still some unsolved regressions related to them - basically i2c gate and radio
> breakages yet requiring fixes. Some are documented at Kernel regression list
> and at bugzilla, others were just reported at ML. As I said a countless number
> of times, we need to focus fixing the regressions caused by those API changes
> before doing another series of changes at API. So, if people are with spare
> time, I suggest a task force to fix the remaining regressions mostly on saa7134.

Can someone keep a list of regressions and post that to the list every week or
so? You mention those regressions, but I've no idea what they are. I've seen
two regressions caused by the subdev changes: one related to the link order in
the kernel, one in bttv. I fixed both promptly.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* Re: Media controller: sysfs vs ioctl
  2009-09-12 15:12       ` Hans Verkuil
@ 2009-09-12 15:54         ` Mauro Carvalho Chehab
  2009-09-12 18:48           ` Andy Walls
       [not found]           ` <434302664.20090915195043@ntlworld.com>
  0 siblings, 2 replies; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-12 15:54 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Devin Heitmueller, linux-media

On Sat, 12 Sep 2009 17:12:35 +0200
Hans Verkuil <hverkuil@xs4all.nl> wrote:

> > Why? Nothing stops discussing it there and better prepare a proposal, but,
> > considering all the noise we had after the DVB S2API last year, I don't think
> > we should ever repeat using a conference, where only some of us will be there
> > to approve a proposal. It is the right forum for discussing and better formulate
> > the issues, in order to prepare a proposal, but the decisions should happen at
> > the ML.
> 
> In that particular case you would have had a lot of noise no matter what you
> did :-)

True.

> The final decisions will indeed be taken here, but the conference is an
> excellent place to talk face-to-face and to see how well the current proposal
> will fit actual media hardware.
> 
> I don't have complete access or knowledge of all the current and upcoming
> media boards, but there are several TI and Nokia engineers present who can
> help with that. Also interesting is to see how and if the framework can be
> extended to dvb.

As I've said on IRC, it would be really great if you could find some way for
remote people to participate, maybe by setting up an audio conference channel.
I think we should talk to the LPC organizers to see whether that can be arranged.

I would love to discuss this there with you, but this year's budget and
logistics didn't allow it. For those who will also be in Japan for the JLC, we
can complement the discussion there with some useful face-to-face talks.
> 
> > Hans took a year to prepare RFCv2, and we all know that he was hardly
> > working on implementing what was discussed during the first RFC proposal during
> > all those timeframe. This shows that this is a complex matter.
> 
> Not entirely true, I worked on the necessary building blocks for such a media
> controller in the past year. There is a reason why it only took me 400-odd
> lines to get the basic mc support in...

Yes, I know. Having the drivers use the framework is for sure the first step.
Yet, unfortunately, this means that we'll still need to do lots of work on the
webcam and dvb drivers for them to use the kernel i2c support and the
proper media core. As some webcams have audio input streaming over USB, it is
important to extend the media controller features to them as well.

> > Yes, let's do it for 2.6.33, but this discussion started too late for 2.6.32.
> 
> I don't think anyone advocated getting anything merged for 2.6.32. Certainly
> not me. I'm not even sure whether 2.6.33 is feasible: before we merge anything
> I'd really like to see it implemented in e.g. omap3, uvcvideo and ivtv at the
> least. The proof of the pudding is in the eating, and since this is meant to
> cover a wide range of media boards we should have some idea if theory and
> practice actually match.
> 
> Personally I think that the fact that I got an initial version implemented so
> quickly is very promising.
> 
> I'm currently trying to get ivtv media-controller-aware. It's probably the
> most complex driver when it comes to topology that I have access to, so that
> would be a good test case.

The most complex PC hardware I'm aware of is the cx25821. Unfortunately, the
driver is currently in bad shape in terms of CodingStyle (including large
blocks of code that are repeated several times throughout the driver and need
to be removed), so it needs lots of changes in order to get merged.

For those interested, the code is at:
	http://linuxtv.org/hg/~mchehab/cx25821/

I'll likely do some work on it during this merge window for its inclusion
upstream (probably at drivers/staging - since I doubt we'll have enough time to
clean it up right now).

It has several blocks that can be used for video in and video out. The current
driver has support for 8 simultaneous video inputs and 4 simultaneous video
outputs. I'm not sure, but I wouldn't be surprised if you can exchange inputs
and outputs or even group them. So, this is a good candidate for some media
controller tests. I'll try to do it via sysfs, run some tests and post the
results.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-12 15:54         ` Mauro Carvalho Chehab
@ 2009-09-12 18:48           ` Andy Walls
  2009-09-13  1:33             ` Mauro Carvalho Chehab
       [not found]           ` <434302664.20090915195043@ntlworld.com>
  1 sibling, 1 reply; 25+ messages in thread
From: Andy Walls @ 2009-09-12 18:48 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, Devin Heitmueller, linux-media

On Sat, 2009-09-12 at 12:54 -0300, Mauro Carvalho Chehab wrote:
> On Sat, 12 Sep 2009 17:12:35 +0200
> Hans Verkuil <hverkuil@xs4all.nl> wrote:

> > I'm currently trying to get ivtv media-controller-aware. It's probably the
> > most complex driver when it comes to topology that I have access to, so that
> > would be a good test case.
> 
> The most complex PC hardware I'm aware of is the cx25821. Unfortunately, the
> driver is currently in bad shape, in terms of CodingStyle (including the
> removal of large blocks of code that are repeated several times along the
> driver), needing lots of changes in order to get merged.
> 
> For those interested, the code is at:
> 	http://linuxtv.org/hg/~mchehab/cx25821/
> 
> I'll likely do some work on it during this merge window for its inclusion
> upstream (probably at drivers/staging - since I doubt we'll have enough time to
> clean it up right now).
> 
> It has several blocks that can be used for video in and video out. The current
> driver has support for 8 simultaneous video inputs and 4 simultaneous video
> outputs. I'm not sure, but I wouldn't be surprised if you can exchange inputs and
> outputs or even group them. So, this is a good candidate for some media
> controller tests. I'll try to do it via sysfs, running some tests and post the
> results.


I read the available specs for that chip when I saw the source code
appear in a repo of yours several months ago.  The public data sheet is
here:

http://www.conexant.com/servlets/DownloadServlet/PBR-201499-004.pdf?docid=1501&revid=4

The chip looks like it is a good fit for surveillance applications.

The block diagram indicates it is essentially a Video (10x CCIR656) and
Audio (5x I2S) router, with a pile of GPIOs (48), 3 I2C busses, and
support for inbound and outbound DMA channels.  The chip also has built-in
scalers and motion detection.  Managing the chip itself doesn't look
too complicated, but once integrated with other devices like
compression CODECs, CX25853 devices, or general purpose
microcontrollers, I imagine it could get complex to manage.

The reference design brief is here:

http://www.conexant.com/servlets/DownloadServlet/RED-202183-001.pdf?docid=2184&revid=1


I agree about the coding style problems of the current driver.

Regards,
Andy

> Cheers,
> Mauro



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-12 18:48           ` Andy Walls
@ 2009-09-13  1:33             ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-13  1:33 UTC (permalink / raw)
  To: Andy Walls; +Cc: Hans Verkuil, Devin Heitmueller, linux-media

On Sat, 12 Sep 2009 14:48:23 -0400
Andy Walls <awalls@radix.net> wrote:

> On Sat, 2009-09-12 at 12:54 -0300, Mauro Carvalho Chehab wrote:
> > On Sat, 12 Sep 2009 17:12:35 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> wrote:
> 
> > > I'm currently trying to get ivtv media-controller-aware. It's probably the
> > > most complex driver when it comes to topology that I have access to, so that
> > > would be a good test case.
> > 
> > The most complex PC hardware I'm aware of is the cx25821. Unfortunately, the
> > driver is currently in bad shape, in terms of CodingStyle (including the
> > removal of large blocks of code that are repeated several times along the
> > driver), needing lots of changes in order to get merged.
> > 
> > For those interested, the code is at:
> > 	http://linuxtv.org/hg/~mchehab/cx25821/
> > 
> > I'll likely do some work on it during this merge window for its inclusion
> > upstream (probably at drivers/staging - since I doubt we'll have enough time to
> > clean it up right now).
> > 
> > It has several blocks that can be used for video in and video out. The current
> > driver has support for 8 simultaneous video inputs and 4 simultaneous video
> > outputs. I'm not sure, but I wouldn't be surprised if you can exchange inputs and
> > outputs or even group them. So, this is a good candidate for some media
> > controller tests. I'll try to do it via sysfs, running some tests and post the
> > results.
> 
> 
> I read the available specs for that chip when I saw the source code
> appear in a repo of yours several months ago.  The public data sheet is
> here.
> 
> http://www.conexant.com/servlets/DownloadServlet/PBR-201499-004.pdf?docid=1501&revid=4
> 
> The chip looks like it is a good fit for surveillance applications.
> 
> The block diagram indicates it is essentially a Video (10x CCIR656) and
> Audio (5x I2S) router, with a pile of GPIOs (48), 3 I2C busses, and
> support for inbound and outbound DMA channels.  The chip also has built-in
> scalers and motion detection.  Managing the chip itself doesn't look
> too complicated, but once integrated with other devices like
> compression CODECs, CX25853 devices, or general purpose
> microcontrollers, I imagine it could get complex to manage.
> 
> The reference design brief is here:
> 
> http://www.conexant.com/servlets/DownloadServlet/RED-202183-001.pdf?docid=2184&revid=1

Yes, this board is very powerful, and very complex designs are possible with it.

> I agree with the coding style problems of the current driver.

Yes. It will take some time to clean it up and get it ready for
drivers/media.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-11 22:21 Media controller: sysfs vs ioctl Hans Verkuil
  2009-09-11 23:01 ` Andy Walls
  2009-09-12 13:31 ` Mauro Carvalho Chehab
@ 2009-09-13  6:13 ` Nathaniel Kim
  2009-09-13  9:03   ` Hans Verkuil
  2009-09-13 13:27   ` Mauro Carvalho Chehab
  2009-09-13 15:54 ` wk
  2009-09-15 14:10 ` Laurent Pinchart
  4 siblings, 2 replies; 25+ messages in thread
From: Nathaniel Kim @ 2009-09-13  6:13 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media


On Sep 12, 2009, at 7:21 AM, Hans Verkuil wrote:

> Hi all,
>
> I've started this as a new thread to prevent polluting the  
> discussions of the
> media controller as a concept.
>
> First of all, I have no doubt that everything that you can do with  
> an ioctl,
> you can also do with sysfs and vice versa. That's not the problem  
> here.
>
> The problem is deciding which approach is the best.
>
> What is sysfs? (taken from http://lwn.net/Articles/31185/)
>
> "Sysfs is a virtual filesystem which provides a userspace-visible  
> representation
> of the device model. The device model and sysfs are sometimes  
> confused with each
> other, but they are distinct entities. The device model functions  
> just fine
> without sysfs (but the reverse is not true)."
>
> Currently both a v4l driver and the device nodes are all represented  
> in sysfs.
> This is handled automatically by the kernel.
>
> Sub-devices are not represented in sysfs since they are not based on  
> struct
> device. They are v4l-internal structures. Actually, if the subdev  
> represents
> an i2c device, then that i2c device will be present in sysfs, but  
> not all
> subdevs are i2c devices.
>
> Should we make all sub-devices based on struct device? Currently  
> this is not
> required. Doing this would probably mean registering a virtual bus,  
> then
> attaching the sub-device to that. Of course, this only applies to  
> sub-devices
> that represent something that is not an i2c device (e.g. something  
> internal
> to the media board like a resizer, or something connected to GPIO  
> pins).
>
> If we decide to go with sysfs, then we have to do this. This part  
> shouldn't
> be too difficult to implement. And also if we do not go with sysfs  
> this might
> be interesting to do eventually.
>
> The media controller topology as I see it should contain the device  
> nodes
> since the application has to know what device node to open to do the  
> streaming.
> It should also contain the sub-devices so the application can  
> control them.
> Is this enough? I think that eventually we also want to show the  
> physical
> connectors. I left them out (mostly) from the initial media  
> controller proposal,
> but I suspect that we want those as well eventually. But connectors  
> are
> definitely not devices. In that respect the entity concept of the  
> media
> controller is more abstract than sysfs.
>
> However, for now I think we can safely assume that sub-devices can  
> be made
> visible in sysfs.
>

Hans,

First of all I'm very sorry that I haven't had enough time to go through
your new RFC. I'll check it out right after posting this mail.

I think this is a good approach, and I also had in mind that sysfs
might be a good method if we could control and monitor through it.
Recalling our talk in San Francisco, I was frustrated that there is no
way to catch events from sub-devices like a lens actuator (I mean the
piezo motors in a camera module). As you know, a lens actuator is an
extremely slow device in comparison with the common v4l2 devices we are
using, and we need to know whether or not it has succeeded in moving to
the expected position.
So I considered sysfs and udev as candidates for catching events from
sub-devices: events like success/failure of a lens movement, or a change
of status of a sub-device.
Is anybody else experiencing the same issue? I think I've seen a lens
controller driver in the omap3 kernel from TI, but I'm not sure how they
control it.

My point is that we need a kind of framework to deliver events to user
space and catch them properly, just like udev does.
Cheers,

Nate

=
DongSoo, Nathaniel Kim
Engineer
Mobile S/W Platform Lab.
Digital Media & Communications R&D Centre
Samsung Electronics CO., LTD.
e-mail : dongsoo.kim@gmail.com
           dongsoo45.kim@samsung.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-13  6:13 ` Nathaniel Kim
@ 2009-09-13  9:03   ` Hans Verkuil
  2009-09-21 17:23     ` Sakari Ailus
  2009-09-13 13:27   ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 25+ messages in thread
From: Hans Verkuil @ 2009-09-13  9:03 UTC (permalink / raw)
  To: Nathaniel Kim; +Cc: linux-media, Laurent Pinchart

On Sunday 13 September 2009 08:13:04 Nathaniel Kim wrote:
> 
> On Sep 12, 2009, at 7:21 AM, Hans Verkuil wrote:
> 
> > Hi all,
> >
> > I've started this as a new thread to prevent polluting the  
> > discussions of the
> > media controller as a concept.
> >
> > First of all, I have no doubt that everything that you can do with  
> > an ioctl,
> > you can also do with sysfs and vice versa. That's not the problem  
> > here.
> >
> > The problem is deciding which approach is the best.
> >
> > What is sysfs? (taken from http://lwn.net/Articles/31185/)
> >
> > "Sysfs is a virtual filesystem which provides a userspace-visible  
> > representation
> > of the device model. The device model and sysfs are sometimes  
> > confused with each
> > other, but they are distinct entities. The device model functions  
> > just fine
> > without sysfs (but the reverse is not true)."
> >
> > Currently both a v4l driver and the device nodes are all represented  
> > in sysfs.
> > This is handled automatically by the kernel.
> >
> > Sub-devices are not represented in sysfs since they are not based on  
> > struct
> > device. They are v4l-internal structures. Actually, if the subdev  
> > represents
> > an i2c device, then that i2c device will be present in sysfs, but  
> > not all
> > subdevs are i2c devices.
> >
> > Should we make all sub-devices based on struct device? Currently  
> > this is not
> > required. Doing this would probably mean registering a virtual bus,  
> > then
> > attaching the sub-device to that. Of course, this only applies to  
> > sub-devices
> > that represent something that is not an i2c device (e.g. something  
> > internal
> > to the media board like a resizer, or something connected to GPIO  
> > pins).
> >
> > If we decide to go with sysfs, then we have to do this. This part  
> > shouldn't
> > be too difficult to implement. And also if we do not go with sysfs  
> > this might
> > be interesting to do eventually.
> >
> > The media controller topology as I see it should contain the device  
> > nodes
> > since the application has to know what device node to open to do the  
> > streaming.
> > It should also contain the sub-devices so the application can  
> > control them.
> > Is this enough? I think that eventually we also want to show the  
> > physical
> > connectors. I left them out (mostly) from the initial media  
> > controller proposal,
> > but I suspect that we want those as well eventually. But connectors  
> > are
> > definitely not devices. In that respect the entity concept of the  
> > media
> > controller is more abstract than sysfs.
> >
> > However, for now I think we can safely assume that sub-devices can  
> > be made
> > visible in sysfs.
> >
> 
> Hans,
> 
> First of all I'm very sorry that I had not enough time to go through  
> your new RFC. I'll checkout right after posting this mail.
> 
> I think this is a good approach and I also had in my mind that sysfs  
> might be a good method if we could control and monitor through this.  
> Recalling memory when we had a talk in San Francisco, I was frustrated  
> that there is no way to catch events from sort of sub-devices like  
> lens actuator (I mean piezo motors in camera module). As you know lens
> actuator is an extremely slow device in comparison with common v4l2  
> devices we are using and we need to know whether it has succeeded or  
> not in moving to expected position.
> So I considered sysfs and udev as candidates for catching events from  
> sub-devices. events like success/failure of lens movement, change of  
> status of subdevices.
> Is anybody experiencing the same issue? I think I've seen a lens
> controller driver in omap3 kernel from TI but I'm not sure how they
> control it.
> 
> My point is that we need a kind of framework to give an event to user
> space and catch it properly just like udev does.

When I was talking to Laurent Pinchart and Sakari and his team at Nokia
we discussed just such a framework. It actually exists already, although
it is poorly implemented.

Look at include/linux/dvb/video.h, struct video_event and ioctl VIDEO_GET_EVENT.
It is used in ivtv (ivtv-ioctl.c, look for VIDEO_GET_EVENT).

The idea is that you can either call VIDEO_GET_EVENT to wait for an event
or use select() and wait for an exception to arrive, and then call
VIDEO_GET_EVENT to find which event it was.

This is ideal for streaming-related events. In ivtv it is used to report
VSYNCs and to report when the MPEG decoder stopped (there is a delay between
stopping sending new data to the decoder and when it actually processed all
its internal buffers).

Laurent is going to look into this to clean it up and present it as a new
proper official V4L2 event mechanism.

For events completely specific to a subdev I wonder whether it wouldn't be
a good idea to use the media controller device for that. I like the select()
mechanism since in an application you can just select() on a whole bunch of
filehandles. If you can't use select() then you are forced to do awkward coding
(e.g. make a separate thread just to handle that other event mechanism).

So with the media controller we can easily let sub-devices notify the media
controller when an event is ready and the media controller can then generate
an exception. An application can just select() on the mc filehandle.

There are two ways of implementing this. One is that the media controller
keeps a global queue of pending events and subdevices just queue events to
that when they arrive (with some queue size limit to prevent run-away events).

So when you call some GET_EVENT type ioctl it should return the ID of the
subdevice (aka entity) as well. What makes me slightly uncomfortable is that
you still want to use that same ioctl on a normal video node. And the subdev
ID has really no meaning there. But making two different ioctls doesn't sit
well with me either.

The alternative implementation is that the mc will only wait for events from
the currently selected sub-device. So if you want to wait on events from
different sub-devices, then you have to open the mc multiple times, once for
each subdev that you want to receive events from.

I think I would probably go for the second implementation because it is
consistent with the way ioctls are passed to sub-devices. I like the idea that
you can just pass regular V4L2 ioctls to sub-devices. Not all ioctls make
sense, obviously (e.g. any of the streaming I/O ioctls), but a surprisingly
large number of ioctls can be used in that way.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-13  6:13 ` Nathaniel Kim
  2009-09-13  9:03   ` Hans Verkuil
@ 2009-09-13 13:27   ` Mauro Carvalho Chehab
  2009-09-13 13:43     ` Hans Verkuil
  1 sibling, 1 reply; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-13 13:27 UTC (permalink / raw)
  To: Nathaniel Kim; +Cc: Hans Verkuil, linux-media

On Sun, 13 Sep 2009 15:13:04 +0900
Nathaniel Kim <dongsoo.kim@gmail.com> wrote:

> 
> On Sep 12, 2009, at 7:21 AM, Hans Verkuil wrote:
> 
> > Hi all,
> >
> > I've started this as a new thread to prevent polluting the  
> > discussions of the
> > media controller as a concept.
> >
> > First of all, I have no doubt that everything that you can do with  
> > an ioctl,
> > you can also do with sysfs and vice versa. That's not the problem  
> > here.
> >
> > The problem is deciding which approach is the best.
> >
> > What is sysfs? (taken from http://lwn.net/Articles/31185/)
> >
> > "Sysfs is a virtual filesystem which provides a userspace-visible  
> > representation
> > of the device model. The device model and sysfs are sometimes  
> > confused with each
> > other, but they are distinct entities. The device model functions  
> > just fine
> > without sysfs (but the reverse is not true)."
> >
> > Currently both a v4l driver and the device nodes are all represented  
> > in sysfs.
> > This is handled automatically by the kernel.
> >
> > Sub-devices are not represented in sysfs since they are not based on  
> > struct
> > device. They are v4l-internal structures. Actually, if the subdev  
> > represents
> > an i2c device, then that i2c device will be present in sysfs, but  
> > not all
> > subdevs are i2c devices.
> >
> > Should we make all sub-devices based on struct device? Currently  
> > this is not
> > required. Doing this would probably mean registering a virtual bus,  
> > then
> > attaching the sub-device to that. Of course, this only applies to  
> > sub-devices
> > that represent something that is not an i2c device (e.g. something  
> > internal
> > to the media board like a resizer, or something connected to GPIO  
> > pins).
> >
> > If we decide to go with sysfs, then we have to do this. This part  
> > shouldn't
> > be too difficult to implement. And also if we do not go with sysfs  
> > this might
> > be interesting to do eventually.
> >
> > The media controller topology as I see it should contain the device  
> > nodes
> > since the application has to know what device node to open to do the  
> > streaming.
> > It should also contain the sub-devices so the application can  
> > control them.
> > Is this enough? I think that eventually we also want to show the  
> > physical
> > connectors. I left them out (mostly) from the initial media  
> > controller proposal,
> > but I suspect that we want those as well eventually. But connectors  
> > are
> > definitely not devices. In that respect the entity concept of the  
> > media
> > controller is more abstract than sysfs.
> >
> > However, for now I think we can safely assume that sub-devices can  
> > be made
> > visible in sysfs.
> >
> 
> Hans,
> 
> First of all I'm very sorry that I had not enough time to go through  
> your new RFC. I'll checkout right after posting this mail.
> 
> I think this is a good approach and I also had in my mind that sysfs  
> might be a good method if we could control and monitor through this.  
> Recalling memory when we had a talk in San Francisco, I was frustrated  
> that there is no way to catch events from sort of sub-devices like  
> lens actuator (I mean piezo motors in camera module). As you know lens
> actuator is an extremely slow device in comparison with common v4l2  
> devices we are using and we need to know whether it has succeeded or  
> not in moving to expected position.
> So I considered sysfs and udev as candidates for catching events from  
> sub-devices. events like success/failure of lens movement, change of  
> status of subdevices.
> Is anybody experiencing the same issue? I think I've seen a lens
> controller driver in omap3 kernel from TI but I'm not sure how they
> control it.
> 
> My point is that we need a kind of framework to give an event to user
> space and catch it properly just like udev does.

Maybe the Kernel event interface could be used for that.
> Cheers,
> 
> Nate
> 
> =
> DongSoo, Nathaniel Kim
> Engineer
> Mobile S/W Platform Lab.
> Digital Media & Communications R&D Centre
> Samsung Electronics CO., LTD.
> e-mail : dongsoo.kim@gmail.com
>            dongsoo45.kim@samsung.com




Cheers,
Mauro

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-13 13:27   ` Mauro Carvalho Chehab
@ 2009-09-13 13:43     ` Hans Verkuil
  2009-09-13 14:00       ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 25+ messages in thread
From: Hans Verkuil @ 2009-09-13 13:43 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Nathaniel Kim, linux-media

On Sunday 13 September 2009 15:27:57 Mauro Carvalho Chehab wrote:
> On Sun, 13 Sep 2009 15:13:04 +0900
> Nathaniel Kim <dongsoo.kim@gmail.com> wrote:
> 
> > 
> > On Sep 12, 2009, at 7:21 AM, Hans Verkuil wrote:
> > 
> > > Hi all,
> > >
> > > I've started this as a new thread to prevent polluting the  
> > > discussions of the
> > > media controller as a concept.
> > >
> > > First of all, I have no doubt that everything that you can do with  
> > > an ioctl,
> > > you can also do with sysfs and vice versa. That's not the problem  
> > > here.
> > >
> > > The problem is deciding which approach is the best.
> > >
> > > What is sysfs? (taken from http://lwn.net/Articles/31185/)
> > >
> > > "Sysfs is a virtual filesystem which provides a userspace-visible  
> > > representation
> > > of the device model. The device model and sysfs are sometimes  
> > > confused with each
> > > other, but they are distinct entities. The device model functions  
> > > just fine
> > > without sysfs (but the reverse is not true)."
> > >
> > > Currently both a v4l driver and the device nodes are all represented  
> > > in sysfs.
> > > This is handled automatically by the kernel.
> > >
> > > Sub-devices are not represented in sysfs since they are not based on  
> > > struct
> > > device. They are v4l-internal structures. Actually, if the subdev  
> > > represents
> > > an i2c device, then that i2c device will be present in sysfs, but  
> > > not all
> > > subdevs are i2c devices.
> > >
> > > Should we make all sub-devices based on struct device? Currently  
> > > this is not
> > > required. Doing this would probably mean registering a virtual bus,  
> > > then
> > > attaching the sub-device to that. Of course, this only applies to  
> > > sub-devices
> > > that represent something that is not an i2c device (e.g. something  
> > > internal
> > > to the media board like a resizer, or something connected to GPIO  
> > > pins).
> > >
> > > If we decide to go with sysfs, then we have to do this. This part  
> > > shouldn't
> > > be too difficult to implement. And also if we do not go with sysfs  
> > > this might
> > > be interesting to do eventually.
> > >
> > > The media controller topology as I see it should contain the device  
> > > nodes
> > > since the application has to know what device node to open to do the  
> > > streaming.
> > > It should also contain the sub-devices so the application can  
> > > control them.
> > > Is this enough? I think that eventually we also want to show the  
> > > physical
> > > connectors. I left them out (mostly) from the initial media  
> > > controller proposal,
> > > but I suspect that we want those as well eventually. But connectors  
> > > are
> > > definitely not devices. In that respect the entity concept of the  
> > > media
> > > controller is more abstract than sysfs.
> > >
> > > However, for now I think we can safely assume that sub-devices can  
> > > be made
> > > visible in sysfs.
> > >
> > 
> > Hans,
> > 
> > First of all I'm very sorry that I had not enough time to go through  
> > your new RFC. I'll checkout right after posting this mail.
> > 
> > I think this is a good approach and I also had in my mind that sysfs  
> > might be a good method if we could control and monitor through this.  
> > Recalling memory when we had a talk in San Francisco, I was frustrated  
> > that there is no way to catch events from sort of sub-devices like  
> > lens actuator (I mean piezo motors in camera module). As you know lens
> > actuator is an extremely slow device in comparison with common v4l2  
> > devices we are using and we need to know whether it has succeeded or  
> > not in moving to expected position.
> > So I considered sysfs and udev as candidates for catching events from  
> > sub-devices. events like success/failure of lens movement, change of  
> > status of subdevices.
> > Is anybody experiencing the same issue? I think I've seen a lens
> > controller driver in omap3 kernel from TI but I'm not sure how they
> > control it.
> > 
> > My point is that we need a kind of framework to give an event to user
> > space and catch it properly just like udev does.
> 
> Maybe the Kernel event interface could be used for that.

Are you talking about the input event interface? There is no standard kernel
way of doing events afaik.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Media controller: sysfs vs ioctl
  2009-09-13 13:43     ` Hans Verkuil
@ 2009-09-13 14:00       ` Mauro Carvalho Chehab
  2009-09-15 14:33         ` Laurent Pinchart
  0 siblings, 1 reply; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-13 14:00 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Nathaniel Kim, linux-media

On Sun, 13 Sep 2009 15:43:02 +0200
Hans Verkuil <hverkuil@xs4all.nl> wrote:

> On Sunday 13 September 2009 15:27:57 Mauro Carvalho Chehab wrote:
> > On Sun, 13 Sep 2009 15:13:04 +0900
> > Nathaniel Kim <dongsoo.kim@gmail.com> wrote:
> > 
> > > 
> > > On Sep 12, 2009, at 7:21 AM, Hans Verkuil wrote:
> > > 
> > > > Hi all,
> > > >
> > > > I've started this as a new thread to prevent polluting the  
> > > > discussions of the
> > > > media controller as a concept.
> > > >
> > > > First of all, I have no doubt that everything that you can do with  
> > > > an ioctl,
> > > > you can also do with sysfs and vice versa. That's not the problem  
> > > > here.
> > > >
> > > > The problem is deciding which approach is the best.
> > > >
> > > > What is sysfs? (taken from http://lwn.net/Articles/31185/)
> > > >
> > > > "Sysfs is a virtual filesystem which provides a userspace-visible  
> > > > representation
> > > > of the device model. The device model and sysfs are sometimes  
> > > > confused with each
> > > > other, but they are distinct entities. The device model functions  
> > > > just fine
> > > > without sysfs (but the reverse is not true)."
> > > >
> > > > Currently both a v4l driver and the device nodes are all represented  
> > > > in sysfs.
> > > > This is handled automatically by the kernel.
> > > >
> > > > Sub-devices are not represented in sysfs since they are not based on  
> > > > struct
> > > > device. They are v4l-internal structures. Actually, if the subdev  
> > > > represents
> > > > an i2c device, then that i2c device will be present in sysfs, but  
> > > > not all
> > > > subdevs are i2c devices.
> > > >
> > > > Should we make all sub-devices based on struct device? Currently  
> > > > this is not
> > > > required. Doing this would probably mean registering a virtual bus,  
> > > > then
> > > > attaching the sub-device to that. Of course, this only applies to  
> > > > sub-devices
> > > > that represent something that is not an i2c device (e.g. something  
> > > > internal
> > > > to the media board like a resizer, or something connected to GPIO  
> > > > pins).
> > > >
> > > > If we decide to go with sysfs, then we have to do this. This part  
> > > > shouldn't
> > > > be too difficult to implement. And also if we do not go with sysfs  
> > > > this might
> > > > be interesting to do eventually.
> > > >
> > > > The media controller topology as I see it should contain the device  
> > > > nodes
> > > > since the application has to know what device node to open to do the  
> > > > streaming.
> > > > It should also contain the sub-devices so the application can  
> > > > control them.
> > > > Is this enough? I think that eventually we also want to show the  
> > > > physical
> > > > connectors. I left them out (mostly) from the initial media  
> > > > controller proposal,
> > > > but I suspect that we want those as well eventually. But connectors  
> > > > are
> > > > definitely not devices. In that respect the entity concept of the  
> > > > media
> > > > controller is more abstract than sysfs.
> > > >
> > > > However, for now I think we can safely assume that sub-devices can  
> > > > be made
> > > > visible in sysfs.
> > > >
> > > 
> > > Hans,
> > > 
> > > First of all I'm very sorry that I had not enough time to go through  
> > > your new RFC. I'll checkout right after posting this mail.
> > > 
> > > I think this is a good approach and I also had in my mind that sysfs  
> > > might be a good method if we could control and monitor through this.  
> > > Recalling memory when we had a talk in San Francisco, I was frustrated  
> > > that there is no way to catch events from sort of sub-devices like  
> > > lens actuator (I mean piezo motors in camera module). As you know lens  
> > > actuator is an extremely slow device in comparison with common v4l2  
> > > devices we are using and we need to know whether it has succeeded or  
> > > not in moving to expected position.
> > > So I considered sysfs and udev as candidates for catching events from  
> > > sub-devices. events like success/failure of lens movement, change of  
> > > status of subdevices.
> > > Is anybody experiencing the same issue? I think I've seen a lens  
> > > controller driver in the omap3 kernel from TI but am not sure how they  
> > > controlled that.
> > > 
> > > My point is that we need a kind of framework to give an event to user  
> > > space and catch it properly just like udev does.
> > 
> > Maybe the Kernel event interface could be used for that.
> 
> Are you talking about the input event interface? There is no standard kernel
> way of doing events afaik.

Yes. It is designed for low-latency reporting of events, like mouse movements,
where you expect the movement to happen as the mouse moves. So, it may also work
fine for servo movements. A closer look at it, plus some tests, would be needed
to see if it works fine for such camera events.

Cheers,
Mauro


* Re: Media controller: sysfs vs ioctl
  2009-09-11 22:21 Media controller: sysfs vs ioctl Hans Verkuil
                   ` (2 preceding siblings ...)
  2009-09-13  6:13 ` Nathaniel Kim
@ 2009-09-13 15:54 ` wk
  2009-09-13 16:07   ` Hans Verkuil
  2009-09-13 23:31   ` Mauro Carvalho Chehab
  2009-09-15 14:10 ` Laurent Pinchart
  4 siblings, 2 replies; 25+ messages in thread
From: wk @ 2009-09-13 15:54 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Hans Verkuil schrieb:
> Hi all,
>
> I've started this as a new thread to prevent polluting the discussions of the
> media controller as a concept.
>
> First of all, I have no doubt that everything that you can do with an ioctl,
> you can also do with sysfs and vice versa. That's not the problem here.
>
> The problem is deciding which approach is the best.
>
>   

Is it really a good idea to create a dependency on some virtual file 
system which may go away in the future?
From time to time some of those do go away, for example devfs.

Is it really unavoidable to have something in sysfs, something that is 
really not possible with ioctls?
And do you really want to depend on the sysfs developers?

--Winfried




* Re: Media controller: sysfs vs ioctl
  2009-09-13 15:54 ` wk
@ 2009-09-13 16:07   ` Hans Verkuil
  2009-09-13 23:31   ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 25+ messages in thread
From: Hans Verkuil @ 2009-09-13 16:07 UTC (permalink / raw)
  To: wk; +Cc: linux-media

On Sunday 13 September 2009 17:54:11 wk wrote:
> Hans Verkuil schrieb:
> > Hi all,
> >
> > I've started this as a new thread to prevent polluting the discussions of the
> > media controller as a concept.
> >
> > First of all, I have no doubt that everything that you can do with an ioctl,
> > you can also do with sysfs and vice versa. That's not the problem here.
> >
> > The problem is deciding which approach is the best.
> >
> >   
> 
> Is it really a good idea to create a dependency on some virtual file 
> system which may go away in the future?
> From time to time some of those do go away, for example devfs.
> 
> Is it really unavoidable to have something in sysfs, something that is 
> really not possible with ioctls?
> And do you really want to depend on the sysfs developers?

One other interesting question is: currently the V4L2 API is also used by BSD
variants for their video drivers. Our V4L2 header is explicitly dual-licensed
to allow this. I don't think that BSD has sysfs. So making the media controller
sysfs-based only would make it very hard for them if they ever want to port
drivers that rely on that to BSD.

Yes, I know that strictly speaking we don't have to care about that, but it
is yet another argument against the use of sysfs as far as I am concerned.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* Re: Media controller: sysfs vs ioctl
  2009-09-13 15:54 ` wk
  2009-09-13 16:07   ` Hans Verkuil
@ 2009-09-13 23:31   ` Mauro Carvalho Chehab
  2009-09-14 11:49     ` Karicheri, Muralidharan
  1 sibling, 1 reply; 25+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-13 23:31 UTC (permalink / raw)
  To: wk; +Cc: Hans Verkuil, linux-media

Em Sun, 13 Sep 2009 17:54:11 +0200
wk <handygewinnspiel@gmx.de> escreveu:

> Hans Verkuil schrieb:
> > Hi all,
> >
> > I've started this as a new thread to prevent polluting the discussions of the
> > media controller as a concept.
> >
> > First of all, I have no doubt that everything that you can do with an ioctl,
> > you can also do with sysfs and vice versa. That's not the problem here.
> >
> > The problem is deciding which approach is the best.
> >
> >   
> 
> Is it really a good idea to create a dependency on some virtual file 
> system which may go away in the future?
> From time to time some of those do go away, for example devfs.

> Is it really unavoidable to have something in sysfs, something that is 
> really not possible with ioctls?
> And do you really want to depend on the sysfs developers?

First of all, both ioctls and sysfs are part of the VFS support.

Second: where did you get the wrong information that sysfs would be deprecated? 

There's no plan to deprecate sysfs, and, since there are lots of
kernel-userspace APIs depending on sysfs, you can't just remove it. 

It is completely different from what we had with devfs, where just device names
were created, in a limited way (for example, no directories were allowed
in devfs). Yet, before devfs was removed, sysfs was added to implement the same
features, providing even more functionality.

Removing sysfs is as hard as removing ioctl or procfs support from the kernel.
You may change their internal implementation, but not the userspace API. 

Btw, if we look at the most recent internal changes among those three APIs, they
happened in the fs API, where ioctl support lives: the Big Kernel Lock (BKL)
was removed there. This required a review of all driver locks and changes to
almost all v4l/dvb drivers.

Also, wanting it or not, sysfs is called by every kernel driver, so this
dependency already exists.

Cheers,
Mauro


* RE: Media controller: sysfs vs ioctl
  2009-09-13 23:31   ` Mauro Carvalho Chehab
@ 2009-09-14 11:49     ` Karicheri, Muralidharan
  0 siblings, 0 replies; 25+ messages in thread
From: Karicheri, Muralidharan @ 2009-09-14 11:49 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, wk; +Cc: Hans Verkuil, linux-media

Hi,

In our experience, sysfs was useful for simple control mechanisms such as enable/disable or displaying statistics or status. But we had received
customer complaints that with this approach the functionality becomes unavailable when the kernel is built without sysfs as part of size
optimization. So if this is really true, I don't think sysfs is the right candidate for the MC. Since sysfs is more string oriented, won't it increase the
code size when it is used for parsing a lot of variable/value pairs to set up the device hw configuration? Besides, most of the applications
written for TI video drivers are based on ioctls, and it would make technical support a nightmare to have a different API for device configuration.

Murali
________________________________________
From: linux-media-owner@vger.kernel.org [linux-media-owner@vger.kernel.org] On Behalf Of Mauro Carvalho Chehab [mchehab@infradead.org]
Sent: Sunday, September 13, 2009 7:31 PM
To: wk
Cc: Hans Verkuil; linux-media@vger.kernel.org
Subject: Re: Media controller: sysfs vs ioctl

Em Sun, 13 Sep 2009 17:54:11 +0200
wk <handygewinnspiel@gmx.de> escreveu:

> Hans Verkuil schrieb:
> > Hi all,
> >
> > I've started this as a new thread to prevent polluting the discussions of the
> > media controller as a concept.
> >
> > First of all, I have no doubt that everything that you can do with an ioctl,
> > you can also do with sysfs and vice versa. That's not the problem here.
> >
> > The problem is deciding which approach is the best.
> >
> >
>
> Is it really a good idea to create a dependency on some virtual file
> system which may go away in the future?
> From time to time some of those do go away, for example devfs.

> Is it really unavoidable to have something in sysfs, something that is
> really not possible with ioctls?
> And do you really want to depend on the sysfs developers?

First of all, both ioctls and sysfs are part of the VFS support.

Second: where did you get the wrong information that sysfs would be deprecated?

There's no plan to deprecate sysfs, and, since there are lots of
kernel-userspace APIs depending on sysfs, you can't just remove it.

It is completely different from what we had with devfs, where just device names
were created, in a limited way (for example, no directories were allowed
in devfs). Yet, before devfs was removed, sysfs was added to implement the same
features, providing even more functionality.

Removing sysfs is as hard as removing ioctl or procfs support from the kernel.
You may change their internal implementation, but not the userspace API.

Btw, if we look at the most recent internal changes among those three APIs, they
happened in the fs API, where ioctl support lives: the Big Kernel Lock (BKL)
was removed there. This required a review of all driver locks and changes to
almost all v4l/dvb drivers.

Also, wanting it or not, sysfs is called by every kernel driver, so this
dependency already exists.

Cheers,
Mauro



* Re: Media controller: sysfs vs ioctl
  2009-09-11 22:21 Media controller: sysfs vs ioctl Hans Verkuil
                   ` (3 preceding siblings ...)
  2009-09-13 15:54 ` wk
@ 2009-09-15 14:10 ` Laurent Pinchart
  2009-09-15 15:07   ` Hans Verkuil
  4 siblings, 1 reply; 25+ messages in thread
From: Laurent Pinchart @ 2009-09-15 14:10 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

On Saturday 12 September 2009 00:21:48 Hans Verkuil wrote:
> Hi all,
> 
> I've started this as a new thread to prevent polluting the discussions of
>  the media controller as a concept.

[snip]
 
> What sort of interaction do we need with sub-devices?
 
[snip]
 
> 2) Private ioctls. Basically a way to set and get data that is hardware
> specific from the sub-device. This can be anything from statistics,
>  histogram information, setting resizer coefficients, configuring
>  colorspace converters, whatever. Furthermore, just like the regular V4L2
>  API it has to be designed with future additions in mind (i.e. it should
>  use something like reserved fields).
> 
> In my opinion ioctls are ideal for this since they are very flexible and
>  easy to use and efficient. Especially because you can combine multiple
>  fields into one unit. So getting histogram data through an ioctl will also
>  provide the timestamp. And you can both write and read data in one atomic
>  system call. Yes, you can combine data in sysfs reads as well. Although an
>  IORW ioctls is hard to model in sysfs. But whereas an ioctl can just copy
>  a struct from kernel to userspace, for sysfs you have to go through a
>  painful process of parsing and formatting.
> 
> And not all this data is small. Histogram data can be 10s of kilobytes.
>  'cat' will typically only read 4 kB at a time, so you will probably have
>  to keep track of how much is read in the read implementation of the
>  attribute. Or does the kernel do that for you?

If I'm not mistaken sysfs binary attributes can't be bigger than 4kB in size.

[snip]

> The final part is how to represent the topology. Device nodes and
>  sub-devices can be exposed to sysfs as discussed earlier. Representing
>  both the possible and current links between them is a lot harder. This is
>  especially true for non-v4l device nodes since we cannot just add
>  attributes there. I think this can be done though by providing that
>  information as attributes of an mc sysfs node. That way it remains under
>  control of the v4l core.

I was a bit concerned (to say the least) when I started catching up with my 
e-mails and reading this thread that the discussion would heat up and split 
developers between two sides. While trying to find arguments to convince 
people that my side is better (and of course it is, otherwise I would be on 
the other side :-)) I realized that the whole media controller problem might 
be understood differently by the two sides. I'll try to shed some light on 
this in the hope that it will bring the v4l developers together.

sysfs was designed to expose kernel objects arranged in a tree-like fashion. 
It does that pretty well, although one of its weak points is that it can be 
easily abused.

From the kernel point of view, (most of) the various sub-devices in a media 
device are arranged in a tree of kernel objects. Most of the time we have an 
I2C controller and various devices sitting on the I2C bus, one or several 
video devices that sit on some internal bus (usually a SoC internal bus for 
the most complex and recent platforms), and possibly SPI and other devices as 
well.

Realizing that, as all those sub-devices are already exposed in sysfs in one 
way or the other, it was tempting to add a few attributes and soft links to 
solve the media controller problem.

However, that solution, even if it might seem simple, misses a very important 
point. Sub-devices are arranged in a tree-like objects structure from the 
kernel point of view, but from the media controller point of view they are 
not. While the kernel cares about kernel objects that are mostly devices on 
busses in parent-children relationships, the media controller cares about how 
video is transferred between sub-devices. And those two concepts are totally 
different.

The sub-devices, from the media controller point of view, make an oriented 
graph of connected nodes. When setting video controls, selecting formats and 
streaming video, what we care about is how the video will flow from its source 
(a sensor, a physical connector, memory, whatever) to its sink (same list of 
possible whatevers). This is what the media controller needs to deal with.

We need to expose a connected graph of nodes to userspace, and let userspace 
access the nodes and the links for various operations. Of course it would be 
possible to handle that through sysfs, as sub-devices are already exposed 
there. But let's face it, it wouldn't be practical, efficient or even clean.

Hans already mentioned several reasons why using sysfs attributes to replace 
all media controller ioctls would be cumbersome. I would add that we need to 
transfer large amounts of aggregated data in some cases (think about 
statistics), and sysfs attributes are simply not designed for that. There's a 
4kB limit for text attributes, and Documentation/filesystems/sysfs.txt states 
that

"Attributes should be ASCII text files, preferably with only one value
per file. It is noted that it may not be efficient to contain only one
value per file, so it is socially acceptable to express an array of
values of the same type.

Mixing types, expressing multiple lines of data, and doing fancy
formatting of data is heavily frowned upon. Doing these things may get
you publically humiliated and your code rewritten without notice."

In the end, I believe that the reason why we need a media controller device 
(which could be called differently of course) and ioctls is exactly the same 
reason why we need ioctls for v4l devices. Would it be possible to replace 
VIDIOC_[GS]_FMT with sysfs attributes ? Yes. Would it be useful, clean and 
efficient ? No.

This discussion isn't meant to convince people of the pros and cons of sysfs. 
Sysfs is useful, it exposes the tree of kobjects to userspace, and thus lets 
userspace know about the tree of devices that make up the platform applications 
run on. This is invaluable and led to things like hal and udev that really 
made a huge difference for Linux. However, sysfs doesn't wash your clothes nor 
does it solve world hunger. No need to be sad about that, it wasn't designed 
for it in the first place. Let's not abuse it and try to make the media 
controller problem fit the sysfs design using hammers and crowbars. In the end 
both the media controller and sysfs would suffer.

The media controller is required to solve a very complex problem brought by 
very complex hardware. The problem has been solved using lots of ugly hacks on 
proprietary platforms, let's show that Linux can solve it cleanly and simply.

-- 
Laurent Pinchart


* Re: Media controller: sysfs vs ioctl
  2009-09-13 14:00       ` Mauro Carvalho Chehab
@ 2009-09-15 14:33         ` Laurent Pinchart
  0 siblings, 0 replies; 25+ messages in thread
From: Laurent Pinchart @ 2009-09-15 14:33 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, Nathaniel Kim, linux-media

On Sunday 13 September 2009 16:00:01 Mauro Carvalho Chehab wrote:
> Em Sun, 13 Sep 2009 15:43:02 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > On Sunday 13 September 2009 15:27:57 Mauro Carvalho Chehab wrote:
> > > Em Sun, 13 Sep 2009 15:13:04 +0900
> > > Nathaniel Kim <dongsoo.kim@gmail.com> escreveu:

[snip]

> > > > I think this is a good approach and I also had in my mind that sysfs
> > > > might be a good method if we could control and monitor through this.
> > > > Recalling memory when we had a talk in San Francisco, I was frustrated
> > > > that there is no way to catch events from sort of sub-devices like
> > > > lens actuator (I mean piezo motors in camera module). As you know lens
> > > > actuator is an extremely slow device in comparison with common v4l2
> > > > devices we are using and we need to know whether it has succeeded or
> > > > not in moving to expected position.
> > > > So I considered sysfs and udev as candidates for catching events from
> > > > sub-devices. events like success/failure of lens movement, change of
> > > > status of subdevices.
> > > > Is anybody experiencing the same issue? I think I've seen a lens
> > > > controller driver in the omap3 kernel from TI but am not sure how they
> > > > controlled that.
> > > >
> > > > My point is that we need a kind of framework to give an event to
> > > > user space and catch it properly just like udev does.
> > >
> > > Maybe the Kernel event interface could be used for that.
> >
> > Are you talking about the input event interface? There is no standard
> > kernel way of doing events afaik.
> 
> Yes. It is designed for low-latency reporting of events, like mouse movements,
> where you expect the movement to happen as the mouse moves. So, it may
> also work fine for servo movements. A closer look at it, plus some tests,
> would be needed to see if it works fine for such camera events.

The interface was designed for low-latency reporting of input events, but it 
doesn't fit the purpose of generic event reporting we need here. Devices need 
to report events such as "statistics are ready", "the video signal is lost", 
"the USB cable has been connected", ... The input event interface wasn't meant 
for that, and using it for such a purpose will probably lead to funny results 
when applications expecting button or key presses start getting such 
events.

Another solution might be to use netlink. That's how generic (non-button) ACPI 
events are reported to userspace. The disadvantage compared to using v4l 
ioctls & select() would be that an application would need to implement netlink 
access in order to get v4l events.

-- 
Laurent Pinchart


* Re: Media controller: sysfs vs ioctl
  2009-09-15 14:10 ` Laurent Pinchart
@ 2009-09-15 15:07   ` Hans Verkuil
  0 siblings, 0 replies; 25+ messages in thread
From: Hans Verkuil @ 2009-09-15 15:07 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: linux-media


> From the kernel point of view, (most of) the various sub-devices in a media
> device are arranged in a tree of kernel objects. Most of the time we have an
> I2C controller and various devices sitting on the I2C bus, one or several
> video devices that sit on some internal bus (usually a SoC internal bus for
> the most complex and recent platforms), and possibly SPI and other devices
> as well.
>
> Realizing that, as all those sub-devices are already exposed in sysfs in one
> way or the other, it was tempting to add a few attributes and soft links to
> solve the media controller problem.

I just wanted to make one clarification: sub-devices are usually, but not
always, mapped 1-to-1 to a true kernel device. Sub-devices are an
abstract concept, and it is possible to have multiple sub-devices exposed
by a single kernel device, or even to have one sub-device covering two or
more kernel devices (no such beast exists yet, but nothing prevents it
technically).

I know of two instances where the sub-device has no sysfs counterpart. And
I expect more will follow.

Regards,

        Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* Re: [linux-dvb] Media controller: sysfs vs ioctl
       [not found]           ` <434302664.20090915195043@ntlworld.com>
@ 2009-09-15 19:00             ` Devin Heitmueller
  0 siblings, 0 replies; 25+ messages in thread
From: Devin Heitmueller @ 2009-09-15 19:00 UTC (permalink / raw)
  To: linux-media, david may; +Cc: Mauro Carvalho Chehab, linux-dvb

On Tue, Sep 15, 2009 at 2:50 PM, david may <david.may10@ntlworld.com> wrote:
> OC you do realize that IF the DVB/linuxTV devs can't even make a simple and
> quick LiveCD with a VLC and streaming apps clients and servers connecting to a web server somewhere
> for popping into ANY laptop that happens to be around at the time then
> there's a problem with mass credibility somewhere ....
>
> how hard can it be to set up a Multicast VLC 224.0.0.1 tunnel over a
> free IPv6 supplier to the DVB/LinuxTV servers and Unicast a WVGA or
> better video stream or four to any VLC clients wanting to follow the
> feeds , it's just a shame there's no easy way to add an annotation to
> feeds and/or live transcripts....
>
> or even skip to a section of video
> BBC Iplayer style
>  link directly to your favourite scene
>  http://www.bbc.co.uk/blogs/bbcinternet/2009/07/bbc_iplayer_now_lets_you_link.html
> Steve Hughes in Michael McIntyre's brilliant Comedy Roadshow - see him here: http://bbc.co.uk/i/lbtbg/?t=16m51s
>
>  these are the things we should be seeing in ALL remote participation
>  feeds and More....
> --
> Best regards,
>  david                            mailto:david.may10@ntlworld.com

David,

It really doesn't say anything about the credibility of the
developers, other than that perhaps they are already severely
overloaded and don't have the cycles to create a setup such as the one
you are proposing.

This is unlikely to happen not because it's not technically possible,
but because the developers have other stuff that is more important to
be working on.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


* Re: Media controller: sysfs vs ioctl
  2009-09-13  9:03   ` Hans Verkuil
@ 2009-09-21 17:23     ` Sakari Ailus
  2009-10-11 22:46       ` Laurent Pinchart
  0 siblings, 1 reply; 25+ messages in thread
From: Sakari Ailus @ 2009-09-21 17:23 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Nathaniel Kim, linux-media, Laurent Pinchart

Hans Verkuil wrote:
> On Sunday 13 September 2009 08:13:04 Nathaniel Kim wrote:
>> 2009. 9. 12., 오전 7:21, Hans Verkuil 작성:
>>
>> Hans,
>>
>> First of all I'm very sorry that I had not enough time to go through  
>> your new RFC. I'll checkout right after posting this mail.
>>
>> I think this is a good approach and I also had in my mind that sysfs  
>> might be a good method if we could control and monitor through this.  
>> Recalling memory when we had a talk in San Francisco, I was frustrated  
>> that there is no way to catch events from sort of sub-devices like  
>> lens actuator (I mean piezo motors in camera module). As you know lens  
>> actuator is an extremely slow device in comparison with common v4l2  
>> devices we are using and we need to know whether it has succeeded or  
>> not in moving to expected position.
>> So I considered sysfs and udev as candidates for catching events from  
>> sub-devices. events like success/failure of lens movement, change of  
>> status of subdevices.
>> Is anybody experiencing the same issue? I think I've seen a lens  
>> controller driver in the omap3 kernel from TI but am not sure how they  
>> controlled that.
>>
>> My point is that we need a kind of framework to give an event to user  
>> space and catch it properly just like udev does.
> 
> When I was talking to Laurent Pinchart and Sakari and his team at Nokia
> we discussed just such a framework. It actually exists already, although
> it is poorly implemented.
> 
> Look at include/linux/dvb/video.h, struct video_event and ioctl VIDEO_GET_EVENT.
> It is used in ivtv (ivtv-ioctl.c, look for VIDEO_GET_EVENT).
> 
> The idea is that you can either call VIDEO_GET_EVENT to wait for an event
> or use select() and wait for an exception to arrive, and then call
> VIDEO_GET_EVENT to find which event it was.
> 
> This is ideal for streaming-related events. In ivtv it is used to report
> VSYNCs and to report when the MPEG decoder stopped (there is a delay between
> stopping sending new data to the decoder and when it actually processed all
> its internal buffers).
> 
> Laurent is going to look into this to clean it up and present it as a new
> proper official V4L2 event mechanism.
> 
> For events completely specific to a subdev I wonder whether it wouldn't be
> a good idea to use the media controller device for that. I like the select()
> mechanism since in an application you can just select() on a whole bunch of
> filehandles. If you can't use select() then you are forced to do awkward coding
> (e.g. make a separate thread just to handle that other event mechanism).

Agree. There's no reasonable way to use video devices here, since the 
events may be connected to non-video issues --- like the statistics.

One possible approach could be allocating a device node for each subdev 
and using those, leaving the media controller device with just the media 
controller specific ioctls. Then there would be no need to set the current 
subdev, nor to bind the subdev to a file handle, either.

Just an idea.

> So with the media controller we can easily let sub-devices notify the media
> controller when an event is ready and the media controller can then generate
> an exception. An application can just select() on the mc filehandle.
> 
> There are two ways of implementing this. One is that the media controller
> keeps a global queue of pending events and subdevices just queue events to
> that when they arrive (with some queue size limit to prevent run-away events).

With the above arrangement, the events could easily be subdev specific. 
The mechanism should still be generic, though.

> So when you call some GET_EVENT type ioctl it should return the ID of the
> subdevice (aka entity) as well. What makes me slightly uncomfortable is that
> you still want to use that same ioctl on a normal video node. And the subdev
> ID has really no meaning there. But making two different ioctls doesn't sit
> well with me either.
> 
> The alternative implementation is that the mc will only wait for events from
> the currently selected sub-device. So if you want to wait on events from
> different sub-devices, then you have to open the mc multiple times, once for
> each subdev that you want to receive events from.
> 
> I think I would probably go for the second implementation because it is
> consistent with the way ioctls are passed to sub-devices. I like the idea that
> you can just pass regular V4L2 ioctls to sub-devices. Not all ioctls make
> sense, obviously (e.g. any of the streaming I/O ioctls), but a surprisingly
> large number of ioctls can be used in that way.

I agree with this. There are just a few ioctls that probably don't make 
sense (e.g. the streaming related ones).

IMO even the format setting ioctls could be nice since the possible 
input and output formats of the subdevs should be enumerable, too.

ENUM_FRAMESIZES and ENUM_FRAMEINTERVALS are missing the v4l2_buf_type, 
but there are reserved fields...

-- 
Sakari Ailus
sakari.ailus@maxwell.research.nokia.com



* Re: Media controller: sysfs vs ioctl
  2009-09-21 17:23     ` Sakari Ailus
@ 2009-10-11 22:46       ` Laurent Pinchart
  0 siblings, 0 replies; 25+ messages in thread
From: Laurent Pinchart @ 2009-10-11 22:46 UTC (permalink / raw)
  To: Sakari Ailus; +Cc: Hans Verkuil, Nathaniel Kim, linux-media

On Monday 21 September 2009 19:23:54 Sakari Ailus wrote:
> Hans Verkuil wrote:
> > On Sunday 13 September 2009 08:13:04 Nathaniel Kim wrote:
> >> 2009. 9. 12., 오전 7:21, Hans Verkuil 작성:
> >>
> >> Hans,
> >>
> >> First of all I'm very sorry that I had not enough time to go through
> >> your new RFC. I'll checkout right after posting this mail.
> >>
> >> I think this is a good approach, and I also had it in mind that sysfs
> >> might be a good method if we could control and monitor through it.
> >> Recalling our talk in San Francisco, I was frustrated that there is no
> >> way to catch events from sub-devices such as a lens actuator (I mean
> >> piezo motors in a camera module). As you know, a lens actuator is an
> >> extremely slow device in comparison with the common v4l2 devices we are
> >> using, and we need to know whether or not it has succeeded in moving to
> >> the expected position.
> >> So I considered sysfs and udev as candidates for catching events from
> >> sub-devices: events like success/failure of a lens movement, or a change
> >> in the status of a sub-device.
> >> Is anybody else experiencing the same issue? I think I've seen a lens
> >> controller driver in the omap3 kernel from TI, but I'm not sure how they
> >> controlled it.
> >>
> >> My point is that we need a kind of framework to deliver events to user
> >> space and catch them properly, just like udev does.
> >
> > When I was talking to Laurent Pinchart and Sakari and his team at Nokia
> > we discussed just such a framework. It actually exists already, although
> > it is poorly implemented.
> >
> > Look at include/linux/dvb/video.h, struct video_event and ioctl
> > VIDEO_GET_EVENT. It is used in ivtv (ivtv-ioctl.c, look for
> > VIDEO_GET_EVENT).
> >
> > The idea is that you can either call VIDEO_GET_EVENT to wait for an event
> > or use select() and wait for an exception to arrive, and then call
> > VIDEO_GET_EVENT to find which event it was.
> >
> > This is ideal for streaming-related events. In ivtv it is used to report
> > VSYNCs and to report when the MPEG decoder stopped (there is a delay
> > between stopping sending new data to the decoder and when it actually
> > processed all its internal buffers).
> >
> > Laurent is going to look into this to clean it up and present it as a new
> > proper official V4L2 event mechanism.
> >
> > For events completely specific to a subdev I wonder whether it wouldn't
> > be a good idea to use the media controller device for that. I like the
> > select() mechanism since in an application you can just select() on a
> > whole bunch of filehandles. If you can't use select() then you are forced
> > to do awkward coding (e.g. make a separate thread just to handle that
> > other event mechanism).
> 
> Agree. There's no reasonable way to use video devices here since the
> events may be connected to non-video-related issues, like statistics.
> 
> One possible approach would be to allocate a device node for each subdev,
> use those for subdev access, and leave the media controller device with
> just the media-controller-specific ioctls. Then there would be no need to
> set a current subdev or to bind a subdev to a file handle either.
> 
> Just an idea.
> 
> > So with the media controller we can easily let sub-devices notify the
> > media controller when an event is ready and the media controller can then
> > generate an exception. An application can just select() on the mc
> > filehandle.
> >
> > There are two ways of implementing this. One is that the media controller
> > keeps a global queue of pending events and subdevices just queue events
> > to that when they arrive (with some queue size limit to prevent run-away
> > events).
> 
> With the above arrangement, the events could easily be subdev-specific.
> The mechanism should still be generic, though.
> 
> > So when you call some GET_EVENT type ioctl it should return the ID of the
> > subdevice (aka entity) as well. What makes me slightly uncomfortable is
> > that you still want to use that same ioctl on a normal video node. And
> > the subdev ID has really no meaning there. But making two different
> > ioctls doesn't sit well with me either.
> >
> > The alternative implementation is that the mc will only wait for events
> > from the currently selected sub-device. So if you want to wait on events
> > from different sub-devices, then you have to open the mc multiple times,
> > once for each subdev that you want to receive events from.
> >
> > I think I would probably go for the second implementation because it is
> > consistent with the way ioctls are passed to sub-devices. I like the idea
> > that you can just pass regular V4L2 ioctls to sub-devices. Not all ioctls
> > make sense, obviously (e.g. any of the streaming I/O ioctls), but a
> > surprisingly large number of ioctls can be used in that way.
> 
> I agree with this. There are just a few ioctls that probably don't make
> sense (e.g. the streaming-related ones).
> 
> IMO even the format-setting ioctls would be useful, since the possible
> input and output formats of the subdevs should be enumerable, too.
> 
> ENUM_FRAMESIZES and ENUM_FRAMEINTERVALS are missing the v4l2_buf_type,
> but there are reserved fields...

Those two ioctls are still marked as experimental. Does that mean we could 
change them in an ABI-incompatible way?

-- 
Laurent Pinchart



Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
2009-09-11 22:21 Media controller: sysfs vs ioctl Hans Verkuil
2009-09-11 23:01 ` Andy Walls
2009-09-12  1:37   ` hermann pitton
2009-09-12 13:31 ` Mauro Carvalho Chehab
2009-09-12 13:41   ` Devin Heitmueller
2009-09-12 14:45     ` Mauro Carvalho Chehab
2009-09-12 15:12       ` Hans Verkuil
2009-09-12 15:54         ` Mauro Carvalho Chehab
2009-09-12 18:48           ` Andy Walls
2009-09-13  1:33             ` Mauro Carvalho Chehab
     [not found]           ` <434302664.20090915195043@ntlworld.com>
2009-09-15 19:00             ` [linux-dvb] " Devin Heitmueller
2009-09-13  6:13 ` Nathaniel Kim
2009-09-13  9:03   ` Hans Verkuil
2009-09-21 17:23     ` Sakari Ailus
2009-10-11 22:46       ` Laurent Pinchart
2009-09-13 13:27   ` Mauro Carvalho Chehab
2009-09-13 13:43     ` Hans Verkuil
2009-09-13 14:00       ` Mauro Carvalho Chehab
2009-09-15 14:33         ` Laurent Pinchart
2009-09-13 15:54 ` wk
2009-09-13 16:07   ` Hans Verkuil
2009-09-13 23:31   ` Mauro Carvalho Chehab
2009-09-14 11:49     ` Karicheri, Muralidharan
2009-09-15 14:10 ` Laurent Pinchart
2009-09-15 15:07   ` Hans Verkuil
