* RFCv2: Media controller proposal
@ 2009-09-10  7:13 Hans Verkuil
  2009-09-10 13:01 ` Patrick Boettcher
                   ` (3 more replies)
  0 siblings, 4 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-10  7:13 UTC (permalink / raw)
  To: linux-media

Hi all,

Here is the new Media Controller RFC. It is completely rewritten from the
original RFC, which can be found here:

http://www.archivum.info/video4linux-list%40redhat.com/2008-07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_media_device

This document will be the basis of the discussions during the Plumbers
Conference in two weeks time.

Open issue #3 is the main unresolved item, but I hope to come up with something
during the weekend.

Regards,

	Hans


RFC: Media controller proposal

Version 2.0

Background
==========

This RFC is a new version of the original RFC that was written in cooperation
with and on behalf of Texas Instruments about a year ago.

Much work has been done in the past year to put the foundation in place to
be able to implement a media controller, and now it is time for this updated
version. The intention is to discuss this in more detail during this year's
Plumbers Conference.

Although the high-level concepts are the same as in the original RFC, many
of the details have changed based on what was learned over the past year.

This RFC is based on the original discussions with Manjunath Hadli from TI
last year, on discussions during a recent meeting between Laurent Pinchart,
Guennadi Liakhovetski and myself, and on recent discussions with Nokia.
Thanks to Sakari Ailus for doing an initial review of this RFC.

One note regarding terminology: a 'board' is the name I use for the SoC,
PCI or USB device that contains the video hardware. Each board has its own
driver instance and its own v4l2_device struct. Originally I called it
'device', but that name is already used in too many places.


What is a media controller?
===========================

In a nutshell: a media controller is a new v4l device node that can be used
to discover and modify the topology of the board and to give access to the 
low-level nodes (such as previewers, resizers, color space converters, etc.)
that are part of the topology.

It does not do any streaming, that is the exclusive domain of video nodes.
It is meant purely for controlling a board as a whole.


Why do we need one?
===================

There are currently several problems that are impossible to solve within the
current V4L2 API:

1) Discovering the various device nodes that are typically created by a video
board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
nodes, input nodes (for e.g. webcam button events or IR remotes).

It would be very handy if an application could just open a /dev/v4l/mc0 node
and be able to figure out where all the nodes are, and to be able to figure
out what the capabilities of the board are (e.g. does it support DVB, is the
audio going through a loopback cable or is there an alsa device, can it do
compressed MPEG video, etc. etc.). Currently the end-user has no choice but to
supply the device nodes manually.

2) Some of the newer SoC devices can connect or disconnect internal components
dynamically. As an example, the omap3 can connect a sensor output through a
CCDC module to a previewer module, then to a resizer module and finally to a
capture device node. But it is also possible to capture the sensor output
directly after the CCDC module. The previewer can get its input from another
video device node and output either to the resizer or to another video capture
device node. The same is true for the resizer: that too can get its input from
a device node.

So there are lots of connections here that can be modified at will depending
on what the application wants. And in real life there are even more links than
I mentioned here. And it will only get more complicated in the future.

All this requires that there has to be a way to connect and disconnect parts
of the internal topology of a video board at will.

3) There is increasing demand to be able to control e.g. sensors or video
encoders/decoders in a much more precise manner. Currently the V4L2 API
provides only limited support in the form of a set of controls. But when
building a high-end camera the developer of the application controlling it
needs very detailed control of the sensor and image processing devices.
On the other hand, you do not want to have all this polluting the V4L2 API
since there is absolutely no sense in exporting this as part of the existing
controls, or to allow for a large number of private ioctls.

What would be a good solution is to give access to the various components of
the board and allow the application to send component-specific ioctls or
controls to it. Any application that will do this is by default tailored to
that board. In addition, none of these new controls or commands will pollute
the namespace of V4L2.

A media controller can solve all these problems: it will provide a window into
the architecture of the board and all its device nodes. Since it is already
enumerating the nodes and components of the board and how they are linked up,
it is only a small step to also use it to change links and to send commands to
specific components.


Restrictions
============

1) These API additions should not affect existing applications.

2) The new API should not attempt to be too smart. All it should do is give
the application full control of the board and provide some initial support
for existing applications. E.g. in the case of omap3 you will have an initial
setup where the sensor is connected through all components to a capture device
node. This will provide sufficient support for a standard webcam application,
but if you want something more advanced then the application will have to set
it up explicitly. It may even be too complicated to use the resizer in this
case, and instead only a few resolutions optimal for the sensor are reported.

3) Provide automatic media controller support for drivers that do not create
one themselves. This new functionality should become available to all drivers,
not just new ones. Otherwise it will take a long time before applications like
MythTV will start to use it.


Implementation
==============

Many of the building blocks needed to implement a media controller already
exist: the v4l core can easily be extended with a media controller type, the
media controller device node can be held by the v4l2_device top-level struct,
and to represent an internal component we have the v4l2_subdev struct.

The core v4l2_subdev ops already has a generic 'ioctl' callback that can be
used by the media controller to pass custom ioctls to the subdev.

What is missing is that device nodes should be registered with struct
v4l2_device. All that is needed to do that is to ensure that when registering
a video node you always pass a pointer to the v4l2_device struct. A lot of
drivers do this already. In addition one should also be able to register
non-video device nodes (alsa, fb, etc.), so that they can be enumerated.

Since sub-devices are already registered with the v4l2_device there is not
much to do there.
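
As a minimal sketch of what this registration amounts to with the v4l2
framework of this era (the mydrv_* names are made up for illustration, and
error unwinding is trimmed):

#include <linux/slab.h>
#include <linux/pci.h>
#include <media/v4l2-device.h>
#include <media/v4l2-dev.h>

struct mydrv {
        struct v4l2_device v4l2_dev;
        struct video_device *vdev;
};

static int mydrv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        struct mydrv *drv = kzalloc(sizeof(*drv), GFP_KERNEL);
        int err;

        if (drv == NULL)
                return -ENOMEM;
        /* register the top-level v4l2_device for this board */
        err = v4l2_device_register(&pdev->dev, &drv->v4l2_dev);
        if (err)
                return err;
        drv->vdev = video_device_alloc();
        /* ... fill in fops, name, release, ... */
        /* the crucial bit: tie the video node to the v4l2_device */
        drv->vdev->v4l2_dev = &drv->v4l2_dev;
        return video_register_device(drv->vdev, VFL_TYPE_GRABBER, -1);
}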

Topology
--------

The topology is represented by entities. Each entity has 0 or more inputs and
0 or more outputs. Each input or output can be linked to 0 or more possible
outputs or inputs from other entities. This is either mutually exclusive 
(i.e. an input/output can be connected to only one output/input at a time)
or it can be connected to multiple inputs/outputs at the same time.

A device node is a special kind of entity with just one input (capture node)
or output (video node). It may have both if it does some in-place operation.

Each entity has a unique numerical ID (unique for the board). Each input or
output has a unique numerical ID as well, but that ID is only unique to the
entity. To specify a particular input or output of an entity one would give
an <entity ID, input/output ID> tuple.

When enumerating over entities you will need to retrieve at least the
following information:

- type (subdev or device node)
- entity ID
- entity description (can be quite long)
- subtype (what sort of device node or subdev is it?)
- capabilities (what can the entity do? Specific to the subtype and more
precise than the v4l2_capability struct which only deals with the board
capabilities)
- additional subtype-specific data (union)
- number of inputs and outputs. The input IDs should probably just be a value
in the range 0 to (#inputs - 1) (ditto for output IDs).

Another ioctl is needed to obtain the list of possible links that can be made
for each input and output.
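
To make this concrete, one possible shape for these enumeration calls is
sketched below. This is purely illustrative: none of these structs, names or
ioctl numbers exist or have been decided on.

/* Hypothetical sketch only -- nothing here is part of any API yet. */
struct v4l2_mc_entity {
        __u32 index;        /* enumeration index, set by the application */
        __u32 id;           /* entity ID, unique per board */
        __u32 type;         /* subdev or device node */
        __u32 subtype;      /* what sort of subdev or device node is it? */
        __u32 capabilities; /* subtype-specific capability flags */
        char description[64]; /* can be quite long, so maybe a pointer instead */
        __u16 num_inputs;   /* input IDs run from 0 to num_inputs - 1 */
        __u16 num_outputs;  /* ditto for output IDs */
        union {
                __u32 raw[8]; /* additional subtype-specific data */
        } u;
};

struct v4l2_mc_link {
        __u32 index;        /* enumeration index, set by the application */
        __u32 src_entity;   /* source <entity ID, output ID> tuple */
        __u32 src_output;
        __u32 sink_entity;  /* sink <entity ID, input ID> tuple */
        __u32 sink_input;
        __u32 flags;        /* e.g. possible/active/exclusive */
};

#define VIDIOC_MC_ENUM_ENTITIES _IOWR('V', 192, struct v4l2_mc_entity)
#define VIDIOC_MC_ENUM_LINKS    _IOWR('V', 193, struct v4l2_mc_link)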

It is good to realize that most applications will just enumerate e.g. capture
device nodes. Few applications will do a full scan of the whole topology.
Instead they will just specify the unique entity ID and if needed the
input/output ID as well. These IDs are declared in the board or sub-device
specific header.

A full enumeration will typically only be done by some sort of generic
application like v4l2-ctl.

In addition, most entities will have at most one or two inputs/outputs.
So we might optimize the data structures for this. We will probably have to
see how it goes when we implement it.

We obviously need ioctls to make and break links between entities. It
shouldn't be hard to do this.
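
Continuing the hypothetical sketch above, making or breaking a link could be
a single ioctl taking the same link description, with the entity IDs coming
from the board-specific header (the MYBOARD_* and V4L2_MC_* names below are
made up):

#define VIDIOC_MC_S_LINK        _IOW('V', 194, struct v4l2_mc_link)

struct v4l2_mc_link link = {
        .src_entity  = MYBOARD_ENT_CCDC,
        .src_output  = 0,
        .sink_entity = MYBOARD_ENT_RESIZER,
        .sink_input  = 0,
        .flags       = V4L2_MC_LINK_ACTIVE, /* clear the flag to break the link */
};
ioctl(mc, VIDIOC_MC_S_LINK, &link);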

Access to sub-devices
---------------------

What is a bit trickier is how to select a sub-device as the target for ioctls.
Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and the driver
will figure out which sub-device (or possibly the bridge itself) will receive
it. There is no way of hijacking this mechanism to e.g. specify a specific
entity ID without also having to modify most of the v4l2 structs by adding
such an ID field. But with the media controller we can at least create an
ioctl that specifies a 'target entity' that will receive any non-media
controller ioctl. Note that for now we only support sub-devices as the target
entity.

The idea is this:

// Select a particular target entity
ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
// Send S_FMT directly to that entity
ioctl(mc, VIDIOC_S_FMT, &fmt);
// Send a custom ioctl to that entity
ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);

This requires no API changes and is very easy to implement. One problem is
that this is not thread-safe. We can either supply some sort of locking
mechanism, or just tell the application programmer to do the locking in the
application. I'm not sure what is the correct approach here. A reasonable
compromise would be to store the target entity as part of the filehandle.
So you can open the media controller multiple times and each handle can set
its own target entity.

This also has the advantage that you can have a filehandle 'targeted' at a
resizer and a filehandle 'targeted' at the previewer, etc. If you want to use
the same filehandle from multiple threads, then you have to implement locking
yourself.
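
A usage sketch of that per-filehandle idea (the OMAP3_ENT_* entity IDs are
made up and would come from a board-specific header):

int fh_rsz = open("/dev/v4l/mc0", O_RDWR);
int fh_prv = open("/dev/v4l/mc0", O_RDWR);
__u32 id;

id = OMAP3_ENT_RESIZER;
ioctl(fh_rsz, VIDIOC_S_SUBDEV, &id);  /* this handle now targets the resizer */
id = OMAP3_ENT_PREVIEWER;
ioctl(fh_prv, VIDIOC_S_SUBDEV, &id);  /* and this one the previewer */

/* Each handle keeps its own target, so two threads can work on
   different sub-devices without stepping on each other. */
ioctl(fh_rsz, VIDIOC_S_FMT, &rsz_fmt);
ioctl(fh_prv, VIDIOC_S_FMT, &prv_fmt);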


Open issues
===========

In no particular order:

1) How to tell the application that this board uses an audio loopback cable
to the PC's audio input?

2) There can be a lot of device nodes in complicated boards. One suggestion
is to only register them when they are linked to an entity (i.e. can be
active). Should we do this or not?

3) Format and bus configuration and enumeration. Sub-devices are connected
together by a bus. These busses can have different configurations that will
influence the list of possible formats that can be received or sent from
device nodes. This was always pretty straightforward, but if you have several
sub-devices such as scalers and colorspace converters in a pipeline then this
becomes very complex indeed. This is already a problem with soc-camera, but
that is only the tip of the iceberg.

How to solve this problem is something that requires a lot more thought.

4) One interesting idea is to create an ioctl with an entity ID as argument
that returns a timestamp of the frame (audio or video) it is processing. That
would solve not only sync problems with alsa, but also when reading a stream
in general (read/write doesn't provide for a timestamp as streaming I/O does).

5) I propose that we return -ENOIOCTLCMD when an ioctl isn't supported by the
media controller. Much better than -EINVAL that is currently used in V4L2.

6) For now I think we should leave enumerating input and output connectors
to the bridge drivers (ENUMINPUT/ENUMOUTPUT). But as a future step it would
make sense to also enumerate those in the media controller. However, it is
not entirely clear what the relationship will be between that and the
existing enumeration ioctls.

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* Re: RFCv2: Media controller proposal
  2009-09-10  7:13 RFCv2: Media controller proposal Hans Verkuil
@ 2009-09-10 13:01 ` Patrick Boettcher
  2009-09-10 13:50   ` Hans Verkuil
  2009-09-10 20:20 ` Mauro Carvalho Chehab
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 57+ messages in thread
From: Patrick Boettcher @ 2009-09-10 13:01 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Hello Hans,


On Thu, 10 Sep 2009, Hans Verkuil wrote:
> Here is the new Media Controller RFC. It is completely rewritten from the
> original RFC. This original RFC can be found here:
>
> http://www.archivum.info/video4linux-list%40redhat.com/2008-07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_media_device
>
> This document will be the basis of the discussions during the Plumbers
> Conference in two weeks time.

I wasn't following this RFC during the past year, though I heard you 
talking about this idea at LPC 2008.

I will add some things to the discussion (see below) that I have in mind 
regarding similar difficulties we face today with some pure-DTV devices.

From a first look, it seems the media controller could not only unify the v4l 
and DVB device abstraction layers, but also add missing features to DTV devices 
that are not present right now.

> [..]
>
> Topology
> --------
>
> The topology is represented by entities. Each entity has 0 or more inputs and
> 0 or more outputs. Each input or output can be linked to 0 or more possible
> outputs or inputs from other entities. This is either mutually exclusive
> (i.e. an input/output can be connected to only one output/input at a time)
> or it can be connected to multiple inputs/outputs at the same time.
>
> A device node is a special kind of entity with just one input (capture node)
> or output (video node). It may have both if it does some in-place operation.
>
> Each entity has a unique numerical ID (unique for the board). Each input or
> output has a unique numerical ID as well, but that ID is only unique to the
> entity. To specify a particular input or output of an entity one would give
> an <entity ID, input/output ID> tuple.
>
> When enumerating over entities you will need to retrieve at least the
> following information:
>
> - type (subdev or device node)
> - entity ID
> - entity description (can be quite long)
> - subtype (what sort of device node or subdev is it?)
> - capabilities (what can the entity do? Specific to the subtype and more
> precise than the v4l2_capability struct which only deals with the board
> capabilities)
> - additional subtype-specific data (union)
> - number of inputs and outputs. The input IDs should probably just be a value
> in the range 0 to (#inputs - 1) (ditto for output IDs).
>
> Another ioctl is needed to obtain the list of possible links that can be made
> for each input and output.
>
> It is good to realize that most applications will just enumerate e.g. capture
> device nodes. Few applications will do a full scan of the whole topology.
> Instead they will just specify the unique entity ID and if needed the
> input/output ID as well. These IDs are declared in the board or sub-device
> specific header.

Very good, this topology idea!

I can even see this being continued in user-space in a very smart 
application/library: a software MPEG decoder/rescaler or whatever would be 
such an entity, for example.

> A full enumeration will typically only be done by some sort of generic
> application like v4l2-ctl.

Hmm... I'm seeing this idea covering other stream-oriented devices. Like 
sound-cards (*ouch*).

> [..]
>
> Open issues
> ===========
>
> In no particular order:
>
> 1) How to tell the application that this board uses an audio loopback cable
> to the PC's audio input?
>
> 2) There can be a lot of device nodes in complicated boards. One suggestion
> is to only register them when they are linked to an entity (i.e. can be
> active). Should we do this or not?

Could entities not be completely addressed (configuration ioctls) through 
the mc-node?

Only entities which have an output/input of type 'user-space-interface' 
would actually have a node that the user (in user-space) can read 
from/write to?

> 3) Format and bus configuration and enumeration. Sub-devices are connected
> together by a bus. These busses can have different configurations that will
> influence the list of possible formats that can be received or sent from
> device nodes. This was always pretty straightforward, but if you have several
> sub-devices such as scalers and colorspace converters in a pipeline then this
> becomes very complex indeed. This is already a problem with soc-camera, but
> that is only the tip of the iceberg.
>
> How to solve this problem is something that requires a lot more thought.

For me the entities (components) you're describing have 2 basic 
bus types: one control bus (which gives register access) and one or more 
data-stream buses.

In your topology I understood that the inputs/outputs are exactly 
representing the data-stream buses.

Depending on the main type of the media controller a library could give 
some basic models of how all entities can be connected together. E.g.:

(I have no clue about webcams, that's why I use this as an example :) ):

Webcam: sensor + resize + filtering = picture

WEBCAM model X provides:

2 sensor-types + 3 resizers + 5 filters

One of each of these provides a picture. By default the first one of each 
is taken.

> [..]

My additional comments for DTV

1) In DTV as of today we can't handle a feature which is becoming more and 
more important: diversity. There are boards where you have 2 frontends and 
they can either combine their demodulated data to achieve better 
sensitivity when tuned to the same frequency, or they can deliver two 
MPEG2 transport streams when tuned to 2 different frequencies. With 
the entity topology this problem would be solved, because we have the 
abstraction of possible inputs.

2) What is today a dvb_frontend could become several entities: I'm seeing 
tuner, demodulator, channel-decoder, amplifiers. IMO, we should not 
hesitate to lower the granularity of entities if possible.

I really, really like this approach as it gives flexibility to user-space 
applications, which will ultimately improve the quality of the supported 
devices, but I think it has to be assisted by a user-space library and the 
access has to be done exclusively by that library. I'm aware that this 
library-idea could be a hot discussion point.

OTOH, in your approach I see nothing which would block an integration of 
DTV-devices in that media-controller-architecture even if it is not done 
in the first development-period.

regards,
--

Patrick 
http://www.kernellabs.com/


* Re: RFCv2: Media controller proposal
  2009-09-10 13:01 ` Patrick Boettcher
@ 2009-09-10 13:50   ` Hans Verkuil
  2009-09-10 14:24     ` Patrick Boettcher
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-10 13:50 UTC (permalink / raw)
  To: Patrick Boettcher; +Cc: linux-media

Hi Patrick,

> Hello Hans,
>
>
> On Thu, 10 Sep 2009, Hans Verkuil wrote:
>> Here is the new Media Controller RFC. It is completely rewritten from
>> the
>> original RFC. This original RFC can be found here:
>>
>> http://www.archivum.info/video4linux-list%40redhat.com/2008-07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_media_device
>>
>> This document will be the basis of the discussions during the Plumbers
>> Conference in two weeks time.
>
> I wasn't following this RFC during the past year, though I heard you
> talking about this idea at LPC 2008.

There were no follow-ups for this RFC in the past year. All the work was
concentrated on the new framework (v4l2_device and v4l2_subdev) which was
in any case needed before we could even think of continuing with this RFC.

Now that this is in we can continue with the next phase and actually think
about how it should be implemented.

>
> I will add some things to discussion (see below) I have in my mind
> regarding similar difficulties we face today with some pure-DTV devices.
>
> From a first look, it seems the media controller could not only unify the
> v4l and DVB device abstraction layers, but also add missing features to DTV
> devices that are not present right now.

Yes, that's the idea. Currently I am concentrating exclusively on v4l
since we really, really need it there asap. But it is a very generic idea
that makes no assumptions on the hardware. It just gives you an abstract
view of the board and a way to access it.

>
>> [..]
>>
>> Topology
>> --------
>>
>> The topology is represented by entities. Each entity has 0 or more
>> inputs and
>> 0 or more outputs. Each input or output can be linked to 0 or more
>> possible
>> outputs or inputs from other entities. This is either mutually exclusive
>> (i.e. an input/output can be connected to only one output/input at a
>> time)
>> or it can be connected to multiple inputs/outputs at the same time.
>>
>> A device node is a special kind of entity with just one input (capture
>> node)
>> or output (video node). It may have both if it does some in-place
>> operation.
>>
>> Each entity has a unique numerical ID (unique for the board). Each input
>> or
>> output has a unique numerical ID as well, but that ID is only unique to
>> the
>> entity. To specify a particular input or output of an entity one would
>> give
>> an <entity ID, input/output ID> tuple.
>>
>> When enumerating over entities you will need to retrieve at least the
>> following information:
>>
>> - type (subdev or device node)
>> - entity ID
>> - entity description (can be quite long)
>> - subtype (what sort of device node or subdev is it?)
>> - capabilities (what can the entity do? Specific to the subtype and more
>> precise than the v4l2_capability struct which only deals with the board
>> capabilities)
>> - additional subtype-specific data (union)
>> - number of inputs and outputs. The input IDs should probably just be a
>> value in the range 0 to (#inputs - 1) (ditto for output IDs).
>>
>> Another ioctl is needed to obtain the list of possible links that can be
>> made
>> for each input and output.
>>
>> It is good to realize that most applications will just enumerate e.g.
>> capture
>> device nodes. Few applications will do a full scan of the whole
>> topology.
>> Instead they will just specify the unique entity ID and if needed the
>> input/output ID as well. These IDs are declared in the board or
>> sub-device
>> specific header.
>
> Very good, this topology idea!
>
> I can even see this being continued in user-space in a very smart
> application/library: a software MPEG decoder/rescaler or whatever would be
> such an entity, for example.

True, but in practice it will be very hard to make such an app for generic
hardware. You can hide some of the hardware-specific code behind a
library, but the whole point of giving access is to optimally utilize all
the hw-specific bits. On the other hand, having a library that tries to do
a 'best effort' job might be quite feasible.

>> A full enumeration will typically only be done by some sort of generic
>> application like v4l2-ctl.
>
> Hmm... I'm seeing this idea covering other stream-oriented devices. Like
> sound-cards (*ouch*).

I may be mistaken, but I don't believe soundcards have this same
complexity as media boards.

>
>> [..]
>>
>> Open issues
>> ===========
>>
>> In no particular order:
>>
>> 1) How to tell the application that this board uses an audio loopback
>> cable
>> to the PC's audio input?
>>
>> 2) There can be a lot of device nodes in complicated boards. One
>> suggestion
>> is to only register them when they are linked to an entity (i.e. can be
>> active). Should we do this or not?
>
> Could entities not be completely addressed (configuration ioctls) through
> the mc-node?

Not sure what you mean.

> Only entities which have an output/input of type
> 'user-space-interface' would actually have a node that the user (in
> user-space) can read from/write to?

Yes, each device node (i.e. that can be read from or written to) is
represented by an entity. That makes sense as well, since there usually is
a DMA engine associated with this, which definitely qualifies as something
more than 'just' an input or output from some other block. You may even
want to control this in some way through the media controller (setting up
DMA parameters?).

Inputs and outputs are not meant to represent anything complex. They just
represent pins or busses.

>
>> 3) Format and bus configuration and enumeration. Sub-devices are
>> connected
>> together by a bus. These busses can have different configurations that
>> will
>> influence the list of possible formats that can be received or sent from
>> device nodes. This was always pretty straightforward, but if you have
>> several
>> sub-devices such as scalers and colorspace converters in a pipeline then
>> this
>> becomes very complex indeed. This is already a problem with soc-camera,
>> but
>> that is only the tip of the iceberg.
>>
>> How to solve this problem is something that requires a lot more thought.
>
> For me the entities (components) you're describing have 2 basic
> bus types: one control bus (which gives register access) and one or more
> data-stream buses.

Not really a datastream bus, more the DMA engine (or something similar)
associated with a datastream bus. It's really the place where data is
passed to/from userspace. I.e. the bus between a sensor and a resizer is
not an entity. It's probably what you meant in any case.

> In your topology I understood that the inputs/outputs are exactly
> representing the data-stream buses.
>
> Depending on the main type of the media controller a library could give
> some basic models of how all entities can be connected together. E.g.:
>
> (I have no clue about webcams, that's why I use this as an example :) ):
>
> Webcam: sensor + resize + filtering = picture
>
> WEBCAM model X provides:
>
> 2 sensor-types + 3 resizers + 5 filters
>
> One of each of these provides a picture. By default the first one of each
> is taken.

My current idea is that the driver will set up an initial default
configuration that would do something reasonable. In this example it would
setup only one sensor as source, one resizer and the relevant filters. So
you have one default path through the system and a library can just follow
that path.

>
>> [..]
>
> My additional comments for DTV
>
> 1) In DTV as of today we can't handle a feature which is becoming more and
> more important: diversity. There are boards where you have 2 frontends and
> they can either combine their demodulated data to achieve better
> sensitivity when tuned to the same frequency, or they can deliver two
> MPEG2 transport streams when tuned to 2 different frequencies. With
> the entity topology this problem would be solved, because we have the
> abstraction of possible inputs.

Exactly.

> 2) What is today a dvb_frontend could become several entities: I'm seeing
> tuner, demodulator, channel-decoder, amplifiers.

In practice every i2c device will be an entity. If the main bridge IC
contains integrated tuners, demods, etc., then the driver can divide them
up into sub-devices at will.

> IMO, we should not
> hesitate to lower the granularity of entities if possible.

I have actually thought of sub-sub-devices. Some i2c devices can be very,
very complex. It's possible to do and we should probably allow for this to
happen in the future. Although we shouldn't implement this initially.

>
> I really, really like this approach as it gives flexibility to user-space
> applications, which will ultimately improve the quality of the supported
> devices, but I think it has to be assisted by a user-space library and the
> access has to be done exclusively by that library. I'm aware that this
> library-idea could be a hot discussion point.

I do not see how you can make any generic library for this. You can make
libraries for each specific board (I'm talking SoCs here mostly) that
provide a slightly higher level of abstraction, but making something
generic? I don't see how. You could perhaps do something for specific
use-cases, though.

> OTOH, in your approach I see nothing which would block an integration of
> DTV-devices in that media-controller-architecture even if it is not done
> in the first development-period.

I would love to see that happen. But then dvb should first migrate to the
standard i2c API, and then integrate that into v4l2_subdev (by that time
we should probably rename it to media_subdev).

Not a trivial job, but it would truly integrate the two parts.

Thanks for your review!

        Hans

>
> regards,
> --
>
> Patrick
> http://www.kernellabs.com/
>


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* Re: RFCv2: Media controller proposal
  2009-09-10 13:50   ` Hans Verkuil
@ 2009-09-10 14:24     ` Patrick Boettcher
  2009-09-10 15:00       ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Patrick Boettcher @ 2009-09-10 14:24 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Linux Media Mailing List

On Thu, 10 Sep 2009, Hans Verkuil wrote:
> Now that this is in we can continue with the next phase and actually think
> on how it should be implemented.

Sounds logical.

>> Hmm... I'm seeing this idea covering other stream-oriented devices. Like
>> sound-cards (*ouch*).
>
> I may be mistaken, but I don't believe soundcards have this same
> complexity as media boards.

When I launch alsa-mixer I see 4 input devices where I can select 4 
different sources. This gives 16 combinations, which is enough for me to 
call it 'complex'.

>> Could entities not be completely addressed (configuration ioctls) through
>> the mc-node?
>
> Not sure what you mean.

Instead of having a device node for each entity, the ioctls for each 
entity are done on the media controller node, addressing an entity by ID.

> Only entities which have an output/input of type
> 'user-space-interface' would actually have a node that the user (in
> user-space) can read from/write to?
>
> Yes, each device node (i.e. that can be read from or written to) is
> represented by an entity. That makes sense as well, since there usually is
> a DMA engine associated with this, which definitely qualifies as something
> more than 'just' an input or output from some other block. You may even
> want to control this in someway through the media controller (setting up
> DMA parameters?).
>
> Inputs and outputs are not meant to represent anything complex. They just
> represent pins or busses.

Or DMA-engines.

When I say bus I mean something which transfers data from a to b, so a bus 
covers DMA engines. Thus a DMA engine or a real bus represents a 
connection of an output and an input.

> Not really a datastream bus, more the DMA engine (or something similar)
> associated with a datastream bus. It's really the place where data is
> passed to/from userspace. I.e. the bus between a sensor and a resizer is
> not an entity. It's probably what you meant in any case.

Yes.

>> 2) What is today a dvb_frontend could become several entities: I'm seeing
>> tuner, demodulator, channel-decoder, amplifiers.
>
> In practice every i2c device will be an entity. If the main bridge IC
> contains integrated tuners, demods, etc., then the driver can divide them
> up in sub-devices at will.
>
> I have actually thought of sub-sub-devices. Some i2c devices can be very,
> very complex. It's possible to do and we should probably allow for this to
> happen in the future. Although we shouldn't implement this initially.

Yes, for me an i2c-bus-client-device is not necessarily one media_subdevice.

Even the term i2c is not final, meaning that more and more devices will 
use SPI or SDIO or other busses for communication between components in 
the future. Or at least there will be some.

Also: if a sub-bus is implemented as a subdev, other devices attached 
to that bus can be normal subdevs.

Why is it important to have all devices on one bus? Because of the 
propagation of ioctls? If so, the sub-bus subdev from above can simply 
forward the ioctls on its bus to its attached subdevs. No need for 
sub-sub-devs ;).

>> I really, really like this approach as it gives flexibility to user-space
>> applications, which will ultimately improve the quality of the supported
>> devices, but I think it has to be assisted by a user-space library and the
>> access has to be done exclusively by that library. I'm aware that this
>> library-idea could be a hot discussion point.
>
> I do not see how you can make any generic library for this. You can make
> libraries for each specific board (I'm talking SoCs here mostly) that
> provide a slightly higher level of abstraction, but making something
> generic? I don't see how. You could perhaps do something for specific
> use-cases, though.

Not a 100% generic library, but a library which has some models inside for 
different types of media controllers. Of course the model of a webcam is 
different from the model of a DTV device.

Maybe model is not the right word, let's call it template. A template 
defines a possible chain of certain types of entities which provide 
a media-stream at their output.

> I would love to see that happen. But then dvb should first migrate to the
> standard i2c API, and then integrate that into v4l2_subdev (by that time
> we should probably rename it to media_subdev).
>
> Not a trivial job, but it would truly integrate the two parts.

As you state in your initial approach, existing APIs are not broken, so 
it's all about future development.

--

Patrick
http://www.kernellabs.com/


* Re: RFCv2: Media controller proposal
  2009-09-10 14:24     ` Patrick Boettcher
@ 2009-09-10 15:00       ` Hans Verkuil
  2009-09-10 19:19         ` Karicheri, Muralidharan
  2009-09-15 11:36         ` Laurent Pinchart
  0 siblings, 2 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-10 15:00 UTC (permalink / raw)
  To: Patrick Boettcher; +Cc: Linux Media Mailing List


> On Thu, 10 Sep 2009, Hans Verkuil wrote:
>> Now that this is in we can continue with the next phase and actually
>> think
>> on how it should be implemented.
>
> Sounds logical.
>
>>> Hmm... I'm seeing this idea covering other stream-oriented devices.
>>> Like
>>> sound-cards (*ouch*).
>>
>> I may be mistaken, but I don't believe soundcards have this same
>> complexity as media boards.
>
> When I launch alsa-mixer I see 4 input devices where I can select 4
> different sources. This gives 16 combinations, which is enough for me to
> call it 'complex'.
>
>>> Could entities not be completely addressed (configuration ioctls)
>>> through
>>> the mc-node?
>>
>> Not sure what you mean.
>
> Instead of having a device node for each entity, the ioctls for each
> entity are done on the media controller node, addressing an entity by ID.

I definitely don't want to go there. Use device nodes (video, fb, alsa,
dvb, etc) for streaming the actual media as we always did and use the
media controller for controlling the board. It keeps everything nicely
separate and clean.

>
>>> Only entities which have an output/input of type
>>> 'user-space-interface' would actually have a node that the user (in
>>> user-space) can read from/write to?
>>
>> Yes, each device node (i.e. that can be read from or written to) is
>> represented by an entity. That makes sense as well, since there usually
>> is
>> a DMA engine associated with this, which definitely qualifies as
>> something
>> more than 'just' an input or output from some other block. You may even
>> want to control this in some way through the media controller (setting up
>> DMA parameters?).
>>
>> Inputs and outputs are not meant to represent anything complex. They
>> just
>> represent pins or busses.
>
> Or DMA-engines.
>
> When I say bus I mean something which transfers data from a to b, so a bus
> covers DMA engines. Thus a DMA engine or a real bus represents a
> connection of an output and an input.

Not quite: a DMA engine transfers the media to or from memory over some
bus. The crucial bit is 'memory'. Anyway, device nodes are where an
application can finally get hold of the data, and you need a way to tell
the app where to find those devices and what properties they have. And
that's what a device node entity does.

>
>> Not really a datastream bus, more the DMA engine (or something similar)
>> associated with a datastream bus. It's really the place where data is
>> passed to/from userspace. I.e. the bus between a sensor and a resizer is
>> not an entity. It's probably what you meant in any case.
>
> Yes.
>
>>> 2) What is today a dvb_frontend could become several entities: I'm
>>> seeing
>>> tuner, demodulator, channel-decoder, amplifiers.
>>
>> In practice every i2c device will be an entity. If the main bridge IC
>> contains integrated tuners, demods, etc., then the driver can divide
>> them
>> up in sub-devices at will.
>>
>> I have actually thought of sub-sub-devices. Some i2c devices can be
>> very,
>> very complex. It's possible to do and we should probably allow for this
>> to
>> happen in the future. Although we shouldn't implement this initially.
>
> Yes, for me an i2c-bus-client-device is not necessarily one media_subdevice.

It is currently, but I agree, that's something that we may want to make
more generic in the future.

>
> Even the term i2c is not final, meaning that more and more devices will
> use SPI or SDIO or other busses for communication between components in
> the future. Or at least there will be some.

That's no problem, v4l2_subdev is bus-agnostic.

>
> Also: if a sub-bus is implemented as a subdev, other devices attached
> to that bus can be normal subdevs.
>
> Why is it important to have all devices on one bus? Because of the
> propagation of ioctls? If so, the sub-bus subdev from above can simply
> forward the ioctls on its bus to its attached subdevs. No need for
> sub-sub-devs ;).

Sub-devices are registered with the v4l2_device. And that's really all you
need. In the end it is a design issue how many sub-devices you create.

>
>>> I really, really like this approach as it gives flexibility to user-space
>>> applications, which will ultimately improve the quality of the supported
>>> devices, but I think it has to be assisted by a user-space library and the
>>> access has to be done exclusively by that library. I'm aware that this
>>> library-idea could be a hot discussion point.
>>
>> I do not see how you can make any generic library for this. You can make
>> libraries for each specific board (I'm talking SoCs here mostly) that
>> provide a slightly higher level of abstraction, but making something
>> generic? I don't see how. You could perhaps do something for specific
>> use-cases, though.
>
> Not a 100% generic library, but a library which has some models inside for
> different types of media controllers. Of course the model of a webcam is
> different from the model of a DTV device.
>
> Maybe model is not the right word, let's call it template. A template
> defines a possible chain of certain types of entities which provide
> a media-stream at their output.

That might work, yes.

>> I would love to see that happen. But then dvb should first migrate to
>> the
>> standard i2c API, and then integrate that into v4l2_subdev (by that time
>> we should probably rename it to media_subdev).
>>
>> Not a trivial job, but it would truly integrate the two parts.
>
> As you state in your initial approach, existing APIs are not broken, so
> it's all about future development.

Yup!

          Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* RE: RFCv2: Media controller proposal
  2009-09-10 15:00       ` Hans Verkuil
@ 2009-09-10 19:19         ` Karicheri, Muralidharan
  2009-09-10 20:27           ` Hans Verkuil
  2009-09-15 11:36         ` Laurent Pinchart
  1 sibling, 1 reply; 57+ messages in thread
From: Karicheri, Muralidharan @ 2009-09-10 19:19 UTC (permalink / raw)
  To: Hans Verkuil, Patrick Boettcher; +Cc: Linux Media Mailing List

Hans,

I haven't gone through the RFC, but thought I would respond to the comment below.

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
new phone: 301-407-9583
Old Phone : 301-515-3736 (will be deprecated)
email: m-karicheri2@ti.com

>>>
>>> I may be mistaken, but I don't believe soundcards have this same
>>> complexity as media boards.
>>
>> When I launch alsa-mixer I see 4 input devices where I can select 4
>> different sources. This gives 16 combinations, which is enough for me to
>> call it 'complex'.
>>
>>>> Could entities not be completely addressed (configuration ioctls)
>>>> through
>>>> the mc-node?
>>>
>>> Not sure what you mean.
>>
>> Instead of having a device node for each entity, the ioctls for each
>> entity are done on the media controller node, addressing an entity by ID.
>
>I definitely don't want to go there. Use device nodes (video, fb, alsa,
>dvb, etc) for streaming the actual media as we always did and use the
>media controller for controlling the board. It keeps everything nicely
>separate and clean.
>


What do you mean by controlling the board?

We have currently ported the DMxxx VPBE display drivers to 2.6.31 (not submitted yet to mainline). In our current implementation, the output and standard/mode are controlled through sysfs because this is common functionality affecting both the v4l and FBDev framebuffer devices. Traditional applications such as x-windows should be able to stream video/graphics to the VPBE output. V4l2 applications should be able to stream video. Both these devices need to know the display parameters such as frame buffer resolution, field etc. that are to be configured in the video or osd layers in the VPBE to output frames to the encoder that is driving the output. So to stream, first the output and mode/standard are selected using a sysfs command and then the application is started. The following scenarios are supported by the VPBE display drivers in our internal release:

1) Traditional FBDev applications (x-window) can be run using the OSD device. This allows changing the mode/standard at the output using the fbset command.

2) The v4l2 driver doesn't provide s_output/s_std support since this is done through sysfs.

3) Applications that need to stream both graphics and video to the output use both the FBDev and V4l2 devices. So these applications first set the output and mode/standard using sysfs before doing io operations with these devices.

There is an encoder manager with which all available encoders register (using an internally developed interface) and, based on commands received at the Fbdev/sysfs interfaces, the encoder manager selects the current encoder and the current standard. The encoder manager provides an API to retrieve the current timing information from the current encoder. The FBDev and V4L2 drivers use this API to configure the OSD/video layers for streaming.

As you can see, controlling output/mode is a common function required for both v4l2 and FBDev devices. 

One way to do this is to modify the encoder manager such that it loads up the encoder sub-devices. This will allow our customers to migrate to this driver on the GIT kernel with minimum effort. If the v4l2 display bridge driver loads up the sub-devices, it will make the FBDev driver useless unless the media controller has some way to handle this scenario. Any idea if the media controller RFC addresses this? I will go over the RFC in details, but if you have a ready answer, let me know.

Thanks



* Re: RFCv2: Media controller proposal
  2009-09-10  7:13 RFCv2: Media controller proposal Hans Verkuil
  2009-09-10 13:01 ` Patrick Boettcher
@ 2009-09-10 20:20 ` Mauro Carvalho Chehab
  2009-09-10 20:27   ` Devin Heitmueller
  2009-09-10 21:35   ` Hans Verkuil
  2009-09-10 21:28 ` RFCv2: Media controller proposal Guennadi Liakhovetski
  2009-09-11  6:16 ` Hiremath, Vaibhav
  3 siblings, 2 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-10 20:20 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Hi Hans,

On Thu, 10 Sep 2009 09:13:09 +0200
Hans Verkuil <hverkuil@xs4all.nl> wrote:

First of all, a generic comment: you enumerated in your RFC several needs that
you expect to be solved with a media controller, but you didn't mention what
userspace API will be used to solve them (e. g. what ioctls, sysfs interfaces,
etc). As this is missing, I'm adding a few notes about how this can be
implemented. For example, as I've already pointed out when you sent the first
proposal and at LPC, sysfs is the proper kernel API for enumerating things.

> Why do we need one?
> ===================
> 
> There are currently several problems that are impossible to solve within the
> current V4L2 API:
> 
> 1) Discovering the various device nodes that are typically created by a video
> board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
> nodes, input nodes (for e.g. webcam button events or IR remotes).

In fact, this can already be done by using the sysfs interface. The current
version of v4l2-sysfs-path.c already enumerates the nodes associated with
a /dev/video device, by just navigating the already existing device
description nodes in sysfs. I haven't tried yet, but I bet that a similar kind
of topology can be obtained from a dvb device (probably we need to do some
adjustments).

The big missing component is a userspace library that will properly return the
device components to the applications. Maybe we also need to do some
adjustments to the sysfs nodes to represent all that is needed.
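
To illustrate the kind of navigation meant here, a minimal sketch: on some
boards the sound class directory sits directly under the same parent device
as the video node; on others one has to walk further up the hierarchy, which
is what v4l2-sysfs-path.c does. The path below is illustrative only.

#include <stdio.h>
#include <dirent.h>

int main(void)
{
        /* the 'device' link of a video node points at its parent
           (USB interface, PCI function, ...) */
        DIR *dir = opendir("/sys/class/video4linux/video0/device/sound");
        struct dirent *d;

        if (dir == NULL)
                return 1;
        while ((d = readdir(dir)) != NULL)
                if (d->d_name[0] != '.')
                        printf("associated alsa device: %s\n", d->d_name);
        closedir(dir);
        return 0;
}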

> It would be very handy if an application could just open a /dev/v4l/mc0 node
> and be able to figure out where all the nodes are, and to be able to figure
> out what the capabilities of the board are (e.g. does it support DVB, is the
> audio going through a loopback cable or is there an alsa device, can it do
> compressed MPEG video, etc. etc.). Currently the end-user has no choice but to
> supply the device nodes manually.

The better approach would be to create a /sys/class/media node, and to place
the media controllers under it. So, mc0 would be at /sys/class/media/mc0.
 
> 2) Some of the newer SoC devices can connect or disconnect internal components
> dynamically. As an example, the omap3 can connect a sensor output through a
> CCDC module to a previewer module, then to a resizer module and finally to a
> capture device node. But it is also possible to capture the sensor output
> directly after the CCDC module. The previewer can get its input from another
> video device node and output either to the resizer or to another video capture
> device node. The same is true for the resizer: that too can get its input from
> a device node.
> 
> So there are lots of connections here that can be modified at will depending
> on what the application wants. And in real life there are even more links than
> I mentioned here. And it will only get more complicated in the future.
> 
> All this requires that there has to be a way to connect and disconnect parts
> of the internal topology of a video board at will.

We should design this with care, since each change to the internal topology may
create/delete devices. If you make such changes to the topology, udev will need
to delete the old devices and create the new ones. This will happen on separate
threads and may cause locking issues at the device, especially since you can be
modifying several components at the same time (it is even possible to do it on
separate threads).

I've seen some high-end core network routers that implement topology changes
in an interesting way: any changes made are not immediately applied at the
node, but are stored into a file, where the configuration can be changed
anytime. However, the topology changes only happen after giving a commit
command. After commit, it validates the new config and applies it atomically
(i.e. either all changes are applied or none), to avoid bad effects that
intermediate changes could cause.

As we are in kernelspace, we need to take care not to create a very complex
interface. Yet, the idea of applying the new topology atomically seems
interesting.
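
A minimal sketch of how that could look for topology changes, reusing the
hypothetical link struct sketched earlier in the thread; the ENT_* IDs and
both ioctl names are equally made up and only serve to illustrate the
all-or-nothing semantics:

struct v4l2_mc_link changes[] = {
        { .src_entity = ENT_SENSOR, .src_output = 0,
          .sink_entity = ENT_CCDC, .sink_input = 0, .flags = LINK_ACTIVE },
        { .src_entity = ENT_CCDC, .src_output = 0,
          .sink_entity = ENT_RESIZER, .sink_input = 0, .flags = LINK_ACTIVE },
};
int i;

for (i = 0; i < 2; i++)
        ioctl(mc, VIDIOC_MC_STAGE_LINK, &changes[i]); /* nothing applied yet */

/* The driver validates the whole staged topology and either applies
   every change or rejects the commit, leaving the current topology
   untouched. */
if (ioctl(mc, VIDIOC_MC_COMMIT, NULL) < 0)
        perror("new topology rejected");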

Alsa is facing a similar problem with the pin setup quirks needed on HD-audio boards.
They are proposing a firmware like interface:
	http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-09/msg03198.html

In their case, they are just using request_firmware() for it, at board probing
time.

IMO, the same approach can be used here.

> 3) There is increasing demand to be able to control e.g. sensors or video
> encoders/decoders in a much more precise manner. Currently the V4L2 API
> provides only limited support in the form of a set of controls. But when
> building a high-end camera the developer of the application controlling it
> needs very detailed control of the sensor and image processing devices.
> On the other hand, you do not want to have all this polluting the V4L2 API
> since there is absolutely no sense in exporting this as part of the existing
> controls, or to allow for a large number of private ioctls.

For those static configs, request_firmware() could also be an alternative.

> What would be a good solution is to give access to the various components of
> the board and allow the application to send component-specific ioctls or
> controls to it. Any application that will do this is by default tailored to
> that board. In addition, none of these new controls or commands will pollute
> the namespace of V4L2.

For dynamic configs, I see a problem here: we have already had some troubles in
the past where certain webcam drivers worked fine only with a specific (closed
source, paid) application, since the driver had a generic interface to allow
raw changes to the registers, and those registers weren't documented. That's
basically why all direct register access is under the advanced debug Kconfig
option. So, no matter how we expose such controls, they need to be properly
documented to allow open source applications to make use of them.

> Topology
> --------
> 
> The topology is represented by entities. Each entity has 0 or more inputs and
> 0 or more outputs. Each input or output can be linked to 0 or more possible
> outputs or inputs from other entities. This is either mutually exclusive 
> (i.e. an input/output can be connected to only one output/input at a time)
> or it can be connected to multiple inputs/outputs at the same time.
> 
> A device node is a special kind of entity with just one input (capture node)
> or output (video node). It may have both if it does some in-place operation.
> 
> Each entity has a unique numerical ID (unique for the board). Each input or
> output has a unique numerical ID as well, but that ID is only unique to the
> entity. To specify a particular input or output of an entity one would give
> an <entity ID, input/output ID> tuple.
> 
> When enumerating over entities you will need to retrieve at least the
> following information:
> 
> - type (subdev or device node)
> - entity ID
> - entity description (can be quite long)
> - subtype (what sort of device node or subdev is it?)
> - capabilities (what can the entity do? Specific to the subtype and more
> precise than the v4l2_capability struct which only deals with the board
> capabilities)
> - additional subtype-specific data (union)
> - number of inputs and outputs. The input IDs should probably just be a value
> in the range 0 to (#inputs - 1) (ditto for output IDs).
> 
> Another ioctl is needed to obtain the list of possible links that can be made
> for each input and output.

Again, the above seems more appropriate to be used via sysfs, instead of via an ioctl.


> Access to sub-devices
> ---------------------
> 
> What is a bit trickier is how to select a sub-device as the target for ioctls.
> Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and the driver
> will figure out which sub-device (or possibly the bridge itself) will receive
> it. There is no way of hijacking this mechanism to e.g. specify a specific
> entity ID without also having to modify most of the v4l2 structs by adding
> such an ID field. But with the media controller we can at least create an
> ioctl that specifies a 'target entity' that will receive any non-media
> controller ioctl. Note that for now we only support sub-devices as the target
> entity.
> 
> The idea is this:
> 
> // Select a particular target entity
> ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> // Send S_FMT directly to that entity
> ioctl(mc, VIDIOC_S_FMT, &fmt);
> // Send a custom ioctl to that entity
> ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
> 
> This requires no API changes and is very easy to implement.

Huh? This is an API change. 

Also, in the above particular case, I'm assuming that you just want to change
the format of the subdevice specified by the first ioctl, and call a new ioctl
for it, right?

You'll need to specify the API for the two new ioctls, specify in the API specs
how this is supposed to work, and maybe add some new return errors that will
need to be reflected inside the code.

Also, in almost all use cases, if you set the v4l device to one video
standard and the v4l subdevice to another one, the device won't work.
So it may be necessary to protect those actions with permission checks
(for example, requiring CAP_SYS_ADMIN permissions).

> One problem is
> that this is not thread-safe. We can either supply some sort of locking
> mechanism, or just tell the application programmer to do the locking in the
> application. I'm not sure what is the correct approach here. A reasonable
> compromise would be to store the target entity as part of the filehandle.
> So you can open the media controller multiple times and each handle can set
> its own target entity.

A lock in the application won't work, since there's nothing to prevent another
application from opening the device at the same time.

What are the needs in this specific case? If there are just a few ioctls, IMO
the better option is to have a specific set of ioctls for it.

> This also has the advantage that you can have a filehandle 'targeted' at a
> resizer and a filehandle 'targeted' at the previewer, etc. If you want to use
> the same filehandle from multiple threads, then you have to implement locking
> yourself.

I'll comment on the open issues later.



Cheers,
Mauro


* Re: RFCv2: Media controller proposal
  2009-09-10 19:19         ` Karicheri, Muralidharan
@ 2009-09-10 20:27           ` Hans Verkuil
  2009-09-10 23:08             ` Karicheri, Muralidharan
  2009-09-11  6:26             ` Hiremath, Vaibhav
  0 siblings, 2 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-10 20:27 UTC (permalink / raw)
  To: Karicheri, Muralidharan; +Cc: Patrick Boettcher, Linux Media Mailing List

On Thursday 10 September 2009 21:19:25 Karicheri, Muralidharan wrote:
> Hans,
> 
> I haven't gone through the RFC, but thought I would respond to the comment below.
> 
> Murali Karicheri
> Software Design Engineer
> Texas Instruments Inc.
> Germantown, MD 20874
> new phone: 301-407-9583
> Old Phone : 301-515-3736 (will be deprecated)
> email: m-karicheri2@ti.com
> 
> >>>
> >>> I may be mistaken, but I don't believe soundcards have this same
> >>> complexity as media boards.
> >>
> >> When I launch alsa-mixer I see 4 input devices where I can select 4
> >> different sources. This gives 16 combinations, which is enough for me to
> >> call it 'complex'.
> >>
> >>>> Could entities not be completely addressed (configuration ioctls)
> >>>> through
> >>>> the mc-node?
> >>>
> >>> Not sure what you mean.
> >>
>> Instead of having a device node for each entity, the ioctls for each
>> entity are done on the media-controller node, addressing an entity by ID.
> >
> >I definitely don't want to go there. Use device nodes (video, fb, alsa,
> >dvb, etc) for streaming the actual media as we always did and use the
> >media controller for controlling the board. It keeps everything nicely
> >separate and clean.
> >
> 
> 
> What do you mean by controlling the board?

In general: the media controller can do anything except streaming. However,
that is an extreme position and in practice all the usual ioctls should
remain supported by the video device nodes.

> We have currently ported the DMxxx VPBE display drivers to 2.6.31 (not yet
> submitted to mainline). In our current implementation, the output and
> standard/mode are controlled through sysfs, because this is common
> functionality affecting both the v4l and FBDev framebuffer devices.
> Traditional applications such as X Windows should be able to stream
> video/graphics to the VPBE output, and V4l2 applications should be able to
> stream video. Both of these devices need to know the display parameters,
> such as frame buffer resolution, field, etc., that are to be configured in
> the video or OSD layers of the VPBE to output frames to the encoder that is
> driving the output. So to stream, first the output and mode/standard are
> selected using a sysfs command, and then the application is started. The
> following scenarios are supported by the VPBE display drivers in our
> internal release:
> 
> 1) Traditional FBDev applications (X Windows) can be run using the OSD
> device. This allows changing the mode/standard at the output using the
> fbset command.
> 
> 2) The v4l2 driver doesn't provide s_output/s_std support, since this is
> done through sysfs.
> 
> 3) Applications that need to stream both graphics and video to the output
> use both the FBDev and V4l2 devices. These applications first set the
> output and mode/standard using sysfs before doing I/O operations with the
> devices.

I don't understand this approach. I'm no expert on the fb API, but as far as I
know the V4L2 API allows much more precise control over the video timings
(esp. with the new API you are working on). Furthermore, I assume it is
possible to use the DMxxx without an OSD, right?

This is very similar to the ivtv and ivtvfb drivers: if the framebuffer is in
use, then you cannot change the output standard (you'll get an EBUSY error)
through a video device node.

That's exactly what you would expect. If the framebuffer isn't used, then you
can just use the normal V4L2 API to change the output standard.

In practice, I think that you can only change the resolution with the FB API,
not things like the framerate, let alone the precise pixel clock, porch and
sync widths.

Much better to let the two cooperate: you can use both APIs, but you can't
change the resolution in the fb if streaming is going on, and you can't
change the output standard of a video device node if that changes the
resolution while the framebuffer is in use.

No need for additional sysfs entries.

> 
> There is an encoder manager with which all available encoders register
> (using an internally developed interface). Based on commands received at the
> FBDev/sysfs interfaces, the encoder manager selects the current encoder and
> the current standard. The encoder manager provides an API to retrieve the
> current timing information from the current encoder. The FBDev and V4L2
> drivers use this API to configure the OSD/video layers for streaming.
> 
> As you can see, controlling output/mode is a common function required for both v4l2 and FBDev devices. 
> 
> One way to do this is to modify the encoder manager so that it loads up the
> encoder sub-devices. This will allow our customers to migrate to this driver
> on the GIT kernel with minimum effort. If the v4l2 display bridge driver
> loads up the sub-devices, it will make the FBDev driver useless unless the
> media controller has some way to handle this scenario. Any idea if the media
> controller RFC addresses this? I will go over the RFC in detail, but if you
> have a ready answer, let me know.

I don't think this has anything to do with the media controller. It sounds
more like a driver design issue to me.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

* Re: RFCv2: Media controller proposal
  2009-09-10 20:20 ` Mauro Carvalho Chehab
@ 2009-09-10 20:27   ` Devin Heitmueller
  2009-09-11 12:59     ` Mauro Carvalho Chehab
  2009-09-10 21:35   ` Hans Verkuil
  1 sibling, 1 reply; 57+ messages in thread
From: Devin Heitmueller @ 2009-09-10 20:27 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, linux-media

On Thu, Sep 10, 2009 at 4:20 PM, Mauro Carvalho Chehab
<mchehab@infradead.org> wrote:
> In fact, this can already be done by using the sysfs interface. the current
> version of v4l2-sysfs-path.c already enumerates the associated nodes to
> a /dev/video device, by just navigating at the already existing device
> description nodes at sysfs. I hadn't tried yet, but I bet that a similar kind
> of topology can be obtained from a dvb device (probably, we need to do some
> adjustments).

For the audio case, I did some digging into this a bit and it's worth
noting that this behavior varies by driver (at least on USB).  In some
cases, the parent points to the USB device, in other cases it points
to the USB interface.  My original thought was to pick one or the
other and make the various drivers consistent, but even that is a
challenge since in some cases the audio device was provided by
snd-usb-audio (which has no knowledge of the v4l subsystem).

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com

* Re: RFCv2: Media controller proposal
  2009-09-10  7:13 RFCv2: Media controller proposal Hans Verkuil
  2009-09-10 13:01 ` Patrick Boettcher
  2009-09-10 20:20 ` Mauro Carvalho Chehab
@ 2009-09-10 21:28 ` Guennadi Liakhovetski
  2009-09-10 21:59   ` Hans Verkuil
  2009-09-11  6:16 ` Hiremath, Vaibhav
  3 siblings, 1 reply; 57+ messages in thread
From: Guennadi Liakhovetski @ 2009-09-10 21:28 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Hi Hans

a couple of comments / questions from the first glance

On Thu, 10 Sep 2009, Hans Verkuil wrote:

[snip]

> Topology
> --------
> 
> The topology is represented by entities. Each entity has 0 or more inputs and
> 0 or more outputs. Each input or output can be linked to 0 or more possible
> outputs or inputs from other entities. This is either mutually exclusive 
> (i.e. an input/output can be connected to only one output/input at a time)
> or it can be connected to multiple inputs/outputs at the same time.
> 
> A device node is a special kind of entity with just one input (capture node)
> or output (video node). It may have both if it does some in-place operation.
> 
> Each entity has a unique numerical ID (unique for the board). Each input or
> output has a unique numerical ID as well, but that ID is only unique to the
> entity. To specify a particular input or output of an entity one would give
> an <entity ID, input/output ID> tuple.
> 
> When enumerating over entities you will need to retrieve at least the
> following information:
> 
> - type (subdev or device node)
> - entity ID
> - entity description (can be quite long)
> - subtype (what sort of device node or subdev is it?)
> - capabilities (what can the entity do? Specific to the subtype and more
> precise than the v4l2_capability struct which only deals with the board
> capabilities)
> - additional subtype-specific data (union)
> - number of inputs and outputs. The input IDs should probably just be a value
> of 0 - (#inputs - 1) (ditto for output IDs).
> 
> Another ioctl is needed to obtain the list of possible links that can be made
> for each input and output.

Shall we not just let the user try, and return an error if the requested
connection is impossible? Remember, media-controller users are
board-tailored, so they will not be very dynamic.

> It is good to realize that most applications will just enumerate e.g. capture
> device nodes. Few applications will do a full scan of the whole topology.
> Instead they will just specify the unique entity ID and if needed the
> input/output ID as well. These IDs are declared in the board or sub-device
> specific header.
> 
> A full enumeration will typically only be done by some sort of generic
> application like v4l2-ctl.

Well, is this the reason why you wanted to enumerate possible connections?
Should v4l2-ctl be able to manipulate those connections? What is it for,
actually?

> In addition, most entities will have only one or two inputs/outputs at most.
> So we might optimize the data structures for this. We probably will have to
> see how it goes when we implement it.
> 
> We obviously need ioctls to make and break links between entities. It
> shouldn't be hard to do this.
> 
> Access to sub-devices
> ---------------------
> 
> What is a bit trickier is how to select a sub-device as the target for ioctls.
> Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and the driver
> will figure out which sub-device (or possibly the bridge itself) will receive
> it. There is no way of hijacking this mechanism to e.g. specify a specific
> entity ID without also having to modify most of the v4l2 structs by adding
> such an ID field. But with the media controller we can at least create an
> ioctl that specifies a 'target entity' that will receive any non-media
> controller ioctl. Note that for now we only support sub-devices as the target
> entity.
> 
> The idea is this:
> 
> // Select a particular target entity
> ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> // Send S_FMT directly to that entity
> ioctl(mc, VIDIOC_S_FMT, &fmt);

is this really a "mc" fd or the respective video-device fd?

> // Send a custom ioctl to that entity
> ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
> 
> This requires no API changes and is very easy to implement. One problem is
> that this is not thread-safe. We can either supply some sort of locking
> mechanism, or just tell the application programmer to do the locking in the
> application. I'm not sure what is the correct approach here. A reasonable
> compromise would be to store the target entity as part of the filehandle.
> So you can open the media controller multiple times and each handle can set
> its own target entity.
> 
> This also has the advantage that you can have a filehandle 'targeted' at a
> resizer and a filehandle 'targeted' at the previewer, etc. If you want to use
> the same filehandle from multiple threads, then you have to implement locking
> yourself.

You mean the driver should only care about internal consistency, and the 
user is allowed to otherwise shoot herself in the foot? Makes sense to 
me:-)

> 
> 
> Open issues
> ===========
> 
> In no particular order:
> 
> 1) How to tell the application that this board uses an audio loopback cable
> to the PC's audio input?
> 
> 2) There can be a lot of device nodes in complicated boards. One suggestion
> is to only register them when they are linked to an entity (i.e. can be
> active). Should we do this or not?

Really a lot of device nodes? not sub-devices? What can this be? Isn't the 
decision when to register them board-specific?

> 
> 3) Format and bus configuration and enumeration. Sub-devices are connected
> together by a bus. These busses can have different configurations that will
> influence the list of possible formats that can be received or sent from
> device nodes. This was always pretty straightforward, but if you have several
> sub-devices such as scalers and colorspace converters in a pipeline then this
> becomes very complex indeed. This is already a problem with soc-camera, but
> that is only the tip of the iceberg.
> 
> How to solve this problem is something that requires a lot more thought.
> 
> 4) One interesting idea is to create an ioctl with an entity ID as argument
> that returns a timestamp of frame (audio or video) it is processing. That
> would solve not only sync problems with alsa, but also when reading a stream
> in general (read/write doesn't provide for a timestamp as streaming I/O does).
> 
> 5) I propose that we return -ENOIOCTLCMD when an ioctl isn't supported by the
> media controller. Much better than -EINVAL that is currently used in V4L2.
> 
> 6) For now I think we should leave enumerating input and output connectors
> to the bridge drivers (ENUMINPUT/ENUMOUTPUT). But as a future step it would
> make sense to also enumerate those in the media controller. However, it is
> not entirely clear what the relationship will be between that and the
> existing enumeration ioctls.

Why should a bridge driver care? This isn't board-specific, is it?

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

* Re: RFCv2: Media controller proposal
  2009-09-10 20:20 ` Mauro Carvalho Chehab
  2009-09-10 20:27   ` Devin Heitmueller
@ 2009-09-10 21:35   ` Hans Verkuil
  2009-09-11 15:13     ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-10 21:35 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

On Thursday 10 September 2009 22:20:13 Mauro Carvalho Chehab wrote:
> Hi Hans,
> 
> Em Thu, 10 Sep 2009 09:13:09 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> First of all, a generic comment: you enumerated in your RFC several needs that
> you expect to be solved with a media controller, but you didn't mention what
> userspace API will be used to solve them (e.g. what ioctls, sysfs interfaces,
> etc). As this is missing, I'm adding a few notes about how this can be
> implemented. For example, as I've already pointed out when you sent the first
> proposal and at LPC, sysfs is the proper kernel API for enumerating things.

I hate sysfs with a passion. All of the V4L2 API is designed around ioctls,
and so is the media controller.

Note that I did not go into too much implementation detail in this RFC. The
best way to do that is by trying to implement it. Only after implementing it
for a few drivers will you get a real feel of what works and what doesn't.

Of course, whether to use sysfs or ioctls is something that has to be designed
beforehand.

> 
> > Why do we need one?
> > ===================
> > 
> > There are currently several problems that are impossible to solve within the
> > current V4L2 API:
> > 
> > 1) Discovering the various device nodes that are typically created by a video
> > board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
> > nodes, input nodes (for e.g. webcam button events or IR remotes).
> 
> In fact, this can already be done by using the sysfs interface. The current
> version of v4l2-sysfs-path.c already enumerates the nodes associated with
> a /dev/video device, by just navigating the already existing device
> description nodes in sysfs. I haven't tried it yet, but I bet that a similar
> kind of topology can be obtained from a dvb device (probably we need to do
> some adjustments).

sysfs is crap. It's a poorly documented public API that is hell to use. Take
a device node entity as enumerated by the media controller: I want to provide
the application with information like the sort of node (alsa, fb, v4l, etc),
how to access it (alsa card nr or major/minor), a description ("Captured MPEG
stream"), possibly some capabilities and addition data. With an ENUM ioctl
you can just call it. With sysfs you have to open/read/close files for each of
these properties, walk through the tree to find related alsa/v4l/fb devices,
and in drivers you must write a hell of a lot of code just to make those sysfs
nodes. It's an uncontrollable mess.

Basically you're just writing a lot of bloat for no reason. And even worse is
that this would introduce a completely different type of API compared to what
we already have.

> The big missing component is a userspace library that will properly return the
> device components to the applications. Maybe we also need to make some
> adjustments to the sysfs nodes to represent everything that is needed.

So we write a userspace library that collects all that information? So that
has to:

1) walk through the sysfs tree trying to find all the related parts of the
media board.
2) open the property that we are interested in.
3) attempt to read the property's value.
4) the driver will then copy that value into a buffer that is returned to the
application, usually through a sprintf() call.
5) the library then uses atol() to convert the string back to an integer and
stores the result in a struct.
6) repeat for all properties.

Isn't that the same as calling an enum ioctl() with a struct pointer? Except
a zillion times slower and more obfuscated?

There are certain areas where sysfs is suitable, but this isn't one of them.
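
To illustrate, with an ioctl-based API the whole enumeration loop collapses
to something like this (a sketch only: the struct and ioctl names are
hypothetical, nothing has been defined yet):

struct media_entity_desc ent;	/* hypothetical enumeration struct */

/* Ask the driver for entity 0, 1, 2, ... until it runs out. */
for (ent.id = 0; ioctl(mc, VIDIOC_MC_ENUM_ENTITIES, &ent) == 0; ent.id++)
	printf("entity %u: %s\n", ent.id, ent.description);

One call and one struct copy per entity; no tree walking, no open/read/close
per property, no string parsing.
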

> 
> > It would be very handy if an application can just open an /dev/v4l/mc0 node
> > and be able to figure out where all the nodes are, and to be able to figure
> > out what the capabilities of the board are (e.g. does it support DVB, is the
> > audio going through a loopback cable or is there an alsa device, can it do
> > compressed MPEG video, etc. etc.). Currently the end-user has no choice but to
> > supply the device nodes manually.
> 
> Better would be to create a /sys/class/media node and have the media
> controllers under it. So mc0 would be at /sys/class/media/mc0.

Why? It's a device. Devices belong in /dev. That's where applications and users
look for devices. Not in sysfs. You should be able to use this even without
sysfs being mounted (on e.g. an embedded system). Another reason BTW not to use
sysfs, BTW.

>  
> > 2) Some of the newer SoC devices can connect or disconnect internal components
> > dynamically. As an example, the omap3 can either connect a sensor output to a
> > CCDC module to a previewer module to a resizer module and finally to a capture
> > device node. But it is also possible to capture the sensor output directly
> > after the CCDC module. The previewer can get its input from another video
> > device node and output either to the resizer or to another video capture
> > device node. The same is true for the resizer, that too can get its input from
> > a device node.
> > 
> > So there are lots of connections here that can be modified at will depending
> > on what the application wants. And in real life there are even more links than
> > I mentioned here. And it will only get more complicated in the future.
> > 
> > All this requires that there has to be a way to connect and disconnect parts
> > of the internal topology of a video board at will.
> 
> We should design this with care, since each change to the internal topology
> may create/delete devices.

No, devices aren't created or deleted. Only links between devices.

> If you make such changes to the topology, udev will need to
> delete the old devices and create the new ones.

udev is not involved at all. Exception: open issue #2 suggests that we
dynamically register device nodes when they are first linked to some source
or sink. That would involve udev.

All devices are set up when the board is configured. But the links between
them can be changed. This is nothing more than bringing the board's block
diagram to life: each square of the diagram (video device node, resizer, video
encoder or decoder) is a v4l2-subdev with inputs and outputs. And in some cases
you can change links dynamically (in effect this will change a mux register).

> This will happen on separate
> threads and may cause locking issues at the device, especially since several
> components can be modified at the same time, possibly from separate threads.

This is definitely not something that should be allowed while streaming. I
would like to hear from e.g. TI whether this could be a problem or not. I
suspect that it isn't a problem unless streaming is in progress.

> I've seen some high-end core network routers that implement topology changes
> in an interesting way: any changes made are not immediately applied at the
> node, but are stored into a file where the configuration can be changed at
> any time. However, the topology changes only happen after giving a commit
> command. After commit, it validates the new config and applies it atomically
> (i.e. either all changes are applied or none), to avoid bad effects that
> intermediate changes could cause.
> 
> As we are in kernelspace, we need to take care not to create a very complex
> interface. Yet, the idea of applying the new topology atomically seems
> interesting.

I see no need for it. At least, not for any of the current or forthcoming
devices that I am aware of. Should it ever be needed, then we can introduce a
'shadow topology' in the future. You can change the shadow links and when done
commit it. That wouldn't be too difficult and we can easily prepare for that
eventuality (e.g. have some 'flags' field available where you can set a SHADOW
flag in the future).
 
> Alsa is facing a similar problem with the pin setup quirks needed on HD-audio
> boards. They are proposing a firmware-like interface:
> 	http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-09/msg03198.html
> 
> In their case, they are just using request_firmware() for it, at board
> probing time.

That seems to be a one-time setup. We need this while the system is up and
running.
 
> IMO, the same approach can be used here.
> 
> > 3) There is increasing demand to be able to control e.g. sensors or video
> > encoders/decoders at a much more precise manner. Currently the V4L2 API
> > provides only limited support in the form of a set of controls. But when
> > building a high-end camera the developer of the application controlling it
> > needs very detailed control of the sensor and image processing devices.
> > On the other hand, you do not want to have all this polluting the V4L2 API
> > since there is absolutely no sense in exporting this as part of the existing
> > controls, or to allow for a large number of private ioctls.
> 
> For those static configs, request_firmware() could also be an alternative.

It's not static.
 
> > What would be a good solution is to give access to the various components of
> > the board and allow the application to send component-specific ioctls or
> > controls to it. Any application that will do this is by default tailored to
> > that board. In addition, none of these new controls or commands will pollute
> > the namespace of V4L2.
> 
> For dynamic configs, I see a problem here: we have already had some troubles
> in the past where certain webcam drivers worked fine only with a specific
> (closed-source, paid) application, since the driver had a generic interface
> that allowed raw changes to the registers, and those registers weren't
> documented. That's basically why all direct register accesses are under the
> advanced debug Kconfig option. So, no matter how we expose such controls,
> they need to be properly documented to allow open source applications to
> make use of them.

Absolutely. I need to clearly state that in my RFC. All the rules still apply:
no direct register access and all the APIs specific to a particular sub-device
must be documented properly in the corresponding public header. Everyone must
be able to use it, not just closed source applications.

> 
> > Topology
> > --------
> > 
> > The topology is represented by entities. Each entity has 0 or more inputs and
> > 0 or more outputs. Each input or output can be linked to 0 or more possible
> > outputs or inputs from other entities. This is either mutually exclusive 
> > (i.e. an input/output can be connected to only one output/input at a time)
> > or it can be connected to multiple inputs/outputs at the same time.
> > 
> > A device node is a special kind of entity with just one input (capture node)
> > or output (video node). It may have both if it does some in-place operation.
> > 
> > Each entity has a unique numerical ID (unique for the board). Each input or
> > output has a unique numerical ID as well, but that ID is only unique to the
> > entity. To specify a particular input or output of an entity one would give
> > an <entity ID, input/output ID> tuple.
> > 
> > When enumerating over entities you will need to retrieve at least the
> > following information:
> > 
> > - type (subdev or device node)
> > - entity ID
> > - entity description (can be quite long)
> > - subtype (what sort of device node or subdev is it?)
> > - capabilities (what can the entity do? Specific to the subtype and more
> > precise than the v4l2_capability struct which only deals with the board
> > capabilities)
> > - additional subtype-specific data (union)
> > - number of inputs and outputs. The input IDs should probably just be a value
> > of 0 - (#inputs - 1) (ditto for output IDs).
> > 
> > Another ioctl is needed to obtain the list of possible links that can be made
> > for each input and output.
> 
> Again, the above seems more appropriate to be used via sysfs, instead of via an ioctl.

Again, see my argumentation at the beginning against this.
 
> > Access to sub-devices
> > ---------------------
> > 
> > What is a bit trickier is how to select a sub-device as the target for ioctls.
> > Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and the driver
> > will figure out which sub-device (or possibly the bridge itself) will receive
> > it. There is no way of hijacking this mechanism to e.g. specify a specific
> > entity ID without also having to modify most of the v4l2 structs by adding
> > such an ID field. But with the media controller we can at least create an
> > ioctl that specifies a 'target entity' that will receive any non-media
> > controller ioctl. Note that for now we only support sub-devices as the target
> > entity.
> > 
> > The idea is this:
> > 
> > // Select a particular target entity
> > ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> > // Send S_FMT directly to that entity
> > ioctl(mc, VIDIOC_S_FMT, &fmt);
> > // Send a custom ioctl to that entity
> > ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
> > 
> > This requires no API changes and is very easy to implement.
> 
> Huh? This is an API change. 

No, this all goes through the media controller, which does not affect the
existing API that goes through a v4l device node.
 
> Also, in the above particular case, I'm assuming that you want to change
> the format of the subdevice specified by the first ioctl, and then call a new
> ioctl for it, right?

Hmm, I knew I should have made a more realistic example. I just took a random
ioctl and S_FMT isn't the best one to pick. Forget that one, I've removed it
from the RFC.

> You'll need to specify the API for the two new ioctls, document in the API
> specs how this is supposed to work, and maybe add some new error returns that
> will need to be reflected in the code.

VIDIOC_S_SUBDEV is part of the new media controller API, but
VIDIOC_OMAP3_G_HISTOGRAM would be an ioctl that is specific to the omap3
histogram sub-device and would typically be defined and documented in a public
header in e.g. linux/include/linux/omap3-histogram.h. These ioctls are highly
specific to particular hardware and impossible to make generic.
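
Just to sketch what such a public header could contain (the struct layout and
ioctl number below are made up for illustration; they are not taken from any
real OMAP3 code):

/* include/linux/omap3-histogram.h (hypothetical sketch) */
struct omap3_histogram {
	__u32 num_bins;		/* number of bins the driver filled in */
	__u32 counts[256];	/* per-bin pixel counts */
};
#define VIDIOC_OMAP3_G_HISTOGRAM \
	_IOR('V', BASE_VIDIOC_PRIVATE + 0, struct omap3_histogram)

The point is that the header is public and documented, but lives next to the
omap3 driver rather than in the core V4L2 API.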

> Also, in almost all use cases, if you set the v4l device to one video
> standard and the v4l subdevice to another one, the device won't work.
> So it may be necessary to protect those actions with permission checks
> (for example, requiring CAP_SYS_ADMIN).

Again, just ignore that S_FMT. It was a bogus example.
 
> > One problem is
> > that this is not thread-safe. We can either supply some sort of locking
> > mechanism, or just tell the application programmer to do the locking in the
> > application. I'm not sure what is the correct approach here. A reasonable
> > compromise would be to store the target entity as part of the filehandle.
> > So you can open the media controller multiple times and each handle can set
> > its own target entity.
> 
> A lock at the application level won't work, since there's nothing to prevent
> another application from opening the device at the same time.

True.
 
> What are the needs in this specific case? If there are just a few ioctls,
> IMO it's better to have a specific set of ioctls for it.

I don't follow you. If you are talking about sub-device specific ioctls: you
can expect to see a lot of them. Statistics gathering, histograms, colorspace
converters, image processing pipelines, and all of them very difficult to
generalize. Some things like a colorspace conversion matrix might actually be
fairly standard, so we could standardize some ioctls. But then only for use
with colorspace conversion sub-devices accessed through the media controller.

Why? You will most likely have multiple CSC blocks and the media controller
is the only way you have to access a specific sub-device. And even if you
hadn't, there is no way a general program like MythTV would ever want to use
it. It's just too dependent on the device.
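
To give an idea, a standardized CSC ioctl could look something like this (a
sketch only; neither the struct nor the ioctl exists, and the private ioctl
number is picked arbitrarily):

struct v4l2_csc_matrix {	/* hypothetical */
	__s32 coeff[3][3];	/* fixed-point 3x3 conversion coefficients */
	__s32 offset[3];	/* per-component offsets */
};
#define VIDIOC_SUBDEV_S_CSC_MATRIX \
	_IOW('V', BASE_VIDIOC_PRIVATE + 1, struct v4l2_csc_matrix)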

That is the big change: until now almost all the drivers that we have cater to
the consumer market, where you want to have just a few knobs that you can turn
but otherwise the complexity should be hidden inside the driver.

But with these embedded devices the custom-made applications need to have full
access to the intricacies of the device. Now it is that application that will
hide the complexity from the user, and no longer the driver. The media
controller will give it that access without compromising the existing drivers.

> > This also has the advantage that you can have a filehandle 'targeted' at a
> > resizer and a filehandle 'targeted' at the previewer, etc. If you want to use
> > the same filehandle from multiple threads, then you have to implement locking
> > yourself.

I think we should take this approach. Different apps will always have different
filehandles. Only if you have multiple threads within an application that share
the same filehandle will you have a problem. And then you can do the locking in
the application.
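
In code the idea would look roughly like this (the entity names are taken
from an imaginary board-specific header, and VIDIOC_S_SUBDEV is still only a
proposal):

int fd_rsz = open("/dev/v4l/mc0", O_RDWR);
int fd_prv = open("/dev/v4l/mc0", O_RDWR);
__u32 rsz = OMAP3_ENTITY_RESIZER;	/* hypothetical board header IDs */
__u32 prv = OMAP3_ENTITY_PREVIEWER;

ioctl(fd_rsz, VIDIOC_S_SUBDEV, &rsz);	/* this handle targets the resizer */
ioctl(fd_prv, VIDIOC_S_SUBDEV, &prv);	/* this one targets the previewer */

From then on, non-MC ioctls on fd_rsz go to the resizer and those on fd_prv
go to the previewer, so threads that stick to their own filehandle never
interfere with each other.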

> 
> I'll comment on the open issues later.

Thanks!

	Hans


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

* Re: RFCv2: Media controller proposal
  2009-09-10 21:28 ` RFCv2: Media controller proposal Guennadi Liakhovetski
@ 2009-09-10 21:59   ` Hans Verkuil
  2009-09-15 12:28     ` Laurent Pinchart
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-10 21:59 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: linux-media

On Thursday 10 September 2009 23:28:40 Guennadi Liakhovetski wrote:
> Hi Hans
> 
> a couple of comments / questions from the first glance
> 
> On Thu, 10 Sep 2009, Hans Verkuil wrote:
> 
> [snip]
> 
> > Topology
> > --------
> > 
> > The topology is represented by entities. Each entity has 0 or more inputs and
> > 0 or more outputs. Each input or output can be linked to 0 or more possible
> > outputs or inputs from other entities. This is either mutually exclusive 
> > (i.e. an input/output can be connected to only one output/input at a time)
> > or it can be connected to multiple inputs/outputs at the same time.
> > 
> > A device node is a special kind of entity with just one input (capture node)
> > or output (video node). It may have both if it does some in-place operation.
> > 
> > Each entity has a unique numerical ID (unique for the board). Each input or
> > output has a unique numerical ID as well, but that ID is only unique to the
> > entity. To specify a particular input or output of an entity one would give
> > an <entity ID, input/output ID> tuple.
> > 
> > When enumerating over entities you will need to retrieve at least the
> > following information:
> > 
> > - type (subdev or device node)
> > - entity ID
> > - entity description (can be quite long)
> > - subtype (what sort of device node or subdev is it?)
> > - capabilities (what can the entity do? Specific to the subtype and more
> > precise than the v4l2_capability struct which only deals with the board
> > capabilities)
> > - additional subtype-specific data (union)
> > - number of inputs and outputs. The input IDs should probably just be a value
> > of 0 - (#inputs - 1) (ditto for output IDs).
> > 
> > Another ioctl is needed to obtain the list of possible links that can be made
> > for each input and output.
> 
> Shall we not just let the user try, and return an error if the requested
> connection is impossible? Remember, media-controller users are
> board-tailored, so they will not be very dynamic.

True, but it will be sooo nice to make a GUI that will visualize all these
connections and allow the developer to change them dynamically. And I expect
this to be nothing more than simple static const arrays.

What I didn't mention in the RFC, but probably should, is that my intention
is that the driver will just pass some data structure to the v4l core with
all these connections, and that the core will handle all the enumerations etc.
Only making or breaking links will involve a call to the driver (probably
through v4l2_device).
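
As a sketch of what the application side of making a link could look like
(the struct, ioctl and flag names are all still to be invented):

struct media_link_desc {	/* hypothetical */
	__u32 src_entity;	/* entity ID of the source */
	__u32 src_output;	/* output ID within the source entity */
	__u32 sink_entity;	/* entity ID of the sink */
	__u32 sink_input;	/* input ID within the sink entity */
	__u32 flags;		/* e.g. MC_LINK_ENABLED */
};

struct media_link_desc link = {
	.src_entity  = OMAP3_ENTITY_PREVIEWER, .src_output = 0,
	.sink_entity = OMAP3_ENTITY_RESIZER,   .sink_input = 0,
	.flags       = MC_LINK_ENABLED,
};
if (ioctl(mc, VIDIOC_MC_SETUP_LINK, &link))
	perror("link previewer -> resizer");

The core could then validate the request against the static tables passed in
by the driver and only call into the driver to actually make the connection.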

> > It is good to realize that most applications will just enumerate e.g. capture
> > device nodes. Few applications will do a full scan of the whole topology.
> > Instead they will just specify the unique entity ID and if needed the
> > input/output ID as well. These IDs are declared in the board or sub-device
> > specific header.
> > 
> > A full enumeration will typically only be done by some sort of generic
> > application like v4l2-ctl.
> 
> Well, is this the reason why you wanted to enumerate possible connections?
> Should v4l2-ctl be able to manipulate those connections? What is it for,
> actually?

Yes, v4l2-ctl should be able to change connections.
 
> > In addition, most entities will have only one or two inputs/outputs at most.
> > So we might optimize the data structures for this. We probably will have to
> > see how it goes when we implement it.
> > 
> > We obviously need ioctls to make and break links between entities. It
> > shouldn't be hard to do this.
> > 
> > Access to sub-devices
> > ---------------------
> > 
> > What is a bit trickier is how to select a sub-device as the target for ioctls.
> > Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and the driver
> > will figure out which sub-device (or possibly the bridge itself) will receive
> > it. There is no way of hijacking this mechanism to e.g. specify a specific
> > entity ID without also having to modify most of the v4l2 structs by adding
> > such an ID field. But with the media controller we can at least create an
> > ioctl that specifies a 'target entity' that will receive any non-media
> > controller ioctl. Note that for now we only support sub-devices as the target
> > entity.
> > 
> > The idea is this:
> > 
> > // Select a particular target entity
> > ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> > // Send S_FMT directly to that entity
> > ioctl(mc, VIDIOC_S_FMT, &fmt);
> 
> is this really a "mc" fd or the respective video-device fd?

It's a filehandle to the media controller. I've added an explicit open()
to make this clear.

> > // Send a custom ioctl to that entity
> > ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
> > 
> > This requires no API changes and is very easy to implement. One problem is
> > that this is not thread-safe. We can either supply some sort of locking
> > mechanism, or just tell the application programmer to do the locking in the
> > application. I'm not sure what is the correct approach here. A reasonable
> > compromise would be to store the target entity as part of the filehandle.
> > So you can open the media controller multiple times and each handle can set
> > its own target entity.
> > 
> > This also has the advantage that you can have a filehandle 'targeted' at a
> > resizer and a filehandle 'targeted' at the previewer, etc. If you want to use
> > the same filehandle from multiple threads, then you have to implement locking
> > yourself.
> 
> You mean the driver should only care about internal consistency, and the 
> user is allowed to otherwise shoot herself in the foot? Makes sense to 
> me:-)

Basically, yes :-)

You can easily make something like a VIDIOC_MC_LOCK and VIDIOC_MC_UNLOCK ioctl
that can be used to get exclusive access to the MC. Or we could reuse the
G/S_PRIORITY ioctls. The first just feels like a big hack to me, the second has
some merit, I think.
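
For instance (both ioctls are hypothetical at this point):

ioctl(mc, VIDIOC_MC_LOCK);	/* would fail with EBUSY if already held */
/* ... enumerate and rearrange links without interference ... */
ioctl(mc, VIDIOC_MC_UNLOCK);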

> 
> > 
> > 
> > Open issues
> > ===========
> > 
> > In no particular order:
> > 
> > 1) How to tell the application that this board uses an audio loopback cable
> > to the PC's audio input?
> > 
> > 2) There can be a lot of device nodes in complicated boards. One suggestion
> > is to only register them when they are linked to an entity (i.e. can be
> > active). Should we do this or not?
> 
> Really a lot of device nodes? not sub-devices? What can this be? Isn't the 
> decision when to register them board-specific?

Sub-devices do not in general have device nodes (note that i2c sub-devices
will have an i2c device node, of course).

When to register device nodes is in the end driver-specific, but what to do
when enumerating input device nodes and the device node doesn't exist yet?

I can't put my finger on it, but my intuition says that doing this is
dangerous. I can't foresee all the consequences.

> 
> > 
> > 3) Format and bus configuration and enumeration. Sub-devices are connected
> > together by a bus. These busses can have different configurations that will
> > influence the list of possible formats that can be received or sent from
> > device nodes. This was always pretty straightforward, but if you have several
> > sub-devices such as scalers and colorspace converters in a pipeline then this
> > becomes very complex indeed. This is already a problem with soc-camera, but
> > that is only the tip of the iceberg.
> > 
> > How to solve this problem is something that requires a lot more thought.
> > 
> > 4) One interesting idea is to create an ioctl with an entity ID as argument
> > that returns a timestamp of frame (audio or video) it is processing. That
> > would solve not only sync problems with alsa, but also when reading a stream
> > in general (read/write doesn't provide for a timestamp as streaming I/O does).
> > 
> > 5) I propose that we return -ENOIOCTLCMD when an ioctl isn't supported by the
> > media controller. Much better than -EINVAL that is currently used in V4L2.
> > 
> > 6) For now I think we should leave enumerating input and output connectors
> > to the bridge drivers (ENUMINPUT/ENUMOUTPUT). But as a future step it would
> > make sense to also enumerate those in the media controller. However, it is
> > not entirely clear what the relationship will be between that and the
> > existing enumeration ioctls.
> 
> Why should a bridge driver care? This isn't board-specific, is it?

I don't follow you. What input and output connectors a board has is by
definition board specific. If you can enumerate them through the media
controller, then you can be more precise how they are hooked up. E.g. an
antenna input is connected to a tuner sub-device, while the composite video-in
is connected to a video decoder and the audio inputs to an audio mixer
sub-device. All things that cannot be represented by ENUMINPUT. But do we
really care about that?

My opinion is that we should leave this alone for now. There is enough to do
and we can always add it later.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

* RE: RFCv2: Media controller proposal
  2009-09-10 20:27           ` Hans Verkuil
@ 2009-09-10 23:08             ` Karicheri, Muralidharan
  2009-09-11  6:20               ` Hans Verkuil
  2009-09-11  6:26             ` Hiremath, Vaibhav
  1 sibling, 1 reply; 57+ messages in thread
From: Karicheri, Muralidharan @ 2009-09-10 23:08 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Patrick Boettcher, Linux Media Mailing List


Hans,

Thanks for your reply.
>>
>>
>> What do you mean by controlling the board?
>
>In general: the media controller can do anything except streaming. However,
>that is an extreme position and in practice all the usual ioctls should
>remain supported by the video device nodes.
>
>> [snip]
>
>I don't understand this approach. I'm no expert on the fb API, but as far as
>I know the V4L2 API allows much more precise control over the video timings
>(esp. with the new API you are working on). Furthermore, I assume it is
>possible to use the DMxxx without an OSD, right?


Right. That case (2 above) is easily taken care of by the v4l2 device driver.
We used the FBDev driver to drive the OSD layer because that way the VPBE can
be used by user-space applications like X Windows. What is the alternative
for this? Is there an example of a v4l2 device using OSD-like hardware and
running X Windows or other traditional graphics applications? I am not aware
of any, and the solution seems to be the right one here.

So the solution we used (case 3) involves FBDev driving the OSD layers and
V4L2 driving the video layer.

>
>This is very similar to the ivtv and ivtvfb drivers: if the framebuffer is
>in use, then you cannot change the output standard (you'll get an EBUSY
>error) through a video device node.
>

Do the ivtvfb and ivtv drivers work with the same set of v4l2 sub-devices for
output? In our case, the VPBE can work with any sub-device that can accept a
BT.656/BT.1120/RGB bus interface. So the FBDev device and the V4L2 device
(either as standalone devices or co-existing) should work with the same set
of sub-devices. So the question is: how can both of these bridge devices work
on the same sub-device? If both can work with the same sub-device, then what
you say is true and this can be handled. That is the reason we used the
sysfs/encoder-manager approach explained in my earlier email.

>That's exactly what you would expect. If the framebuffer isn't used, then
>you can just use the normal V4L2 API to change the output standard.
>
>In practice, I think that you can only change the resolution with the FB
>API, not things like the framerate, let alone the precise pixel clock,
>porch and sync widths.


There are 3 use cases:

1) Pure FBDev device driving graphics to VPBE OSD layers -> sub-devices ->
Display (LCD/TV)

	This would require FBDev to load the required v4l2 sub-device (not
sure if the FBDev community would like this approach) and use it to drive the
output. We will not be able to change the output, but output resolutions and
timings can be controlled through the fbset command, which allows you to
change the pixel clock, porch, sync, etc.

2) Pure V4L2 device driving video to VPBE video layers -> sub-devices ->
Display (LCD/TV)

	- No issues here

3) v4l2 and FBDev nodes co-exist. V4l2 drives video and FBDev drives the OSD
layers, and the combined output goes -> VPBE -> sub-devices -> Display
(LCD/TV)

	- Not sure which bridge device should load up and manage the
sub-devices. If V4l2 manages the sub-devices, how can the FBDev driver set
the timings in the current sub-device, since it has no knowledge of the v4l2
device and the sub-device it owns/manages?

>
>Much better to let the two cooperate: you can use both APIs, but you can't
>change the resolution in the fb if streaming is going on, and you can't
>change the output standard of a video device node if that changes the
>resolution while the framebuffer is in use.
That is what I mean by use case 3). We can live with the restriction. But the
sub-device model is currently v4l2-specific, and I am not sure if there is a
way the same sub-device can be accessed by both bridge devices. Any help here
is appreciated.

>
>No need for additional sysfs entries.
>

If we can use the sub-device framework, we won't need sysfs.

>>
>> [snip]
>
>I don't think this has anything to do with the media controller. It sounds
>more like a driver design issue to me.
>
Not really :( 

When we talk about the media controller, it should allow multiple streaming
nodes to work with the same output or multiple outputs, right? Here an entity
can have FB device and V4L2 device nodes, and both should be able to stream
onto the same output managed by a sub-device.

/dev/fb0    --> VPBE-OSD0 ->|
/dev/fb2    --> VPBE-OSD1 ->|-> VPBE -> analog output (NTSC/PAL/1080i/720P)
/dev/video2 --> VPBE-VID0 ->|        -> digital LCD port
/dev/video3 --> VPBE-VID1 ->|           (BT.656/BT.1120/VGA/VISA -> encoders)

The current sub-device framework doesn't seem to work with this
configuration, or am I missing something? Is there an example of this
implementation in ivtv or on other platforms?

Murali
>Regards,
>
>	Hans
>
>--
>Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* RE: RFCv2: Media controller proposal
  2009-09-10  7:13 RFCv2: Media controller proposal Hans Verkuil
                   ` (2 preceding siblings ...)
  2009-09-10 21:28 ` RFCv2: Media controller proposal Guennadi Liakhovetski
@ 2009-09-11  6:16 ` Hiremath, Vaibhav
  2009-09-11  6:35   ` Hans Verkuil
  3 siblings, 1 reply; 57+ messages in thread
From: Hiremath, Vaibhav @ 2009-09-11  6:16 UTC (permalink / raw)
  To: Hans Verkuil, linux-media


> -----Original Message-----
> From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> owner@vger.kernel.org] On Behalf Of Hans Verkuil
> Sent: Thursday, September 10, 2009 12:43 PM
> To: linux-media@vger.kernel.org
> Subject: RFCv2: Media controller proposal
>
> Hi all,
>
> Here is the new Media Controller RFC. It is completely rewritten
> from the
> original RFC. This original RFC can be found here:
>
> http://www.archivum.info/video4linux-list%40redhat.com/2008-
> 07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_m
> edia_device
>
[Hiremath, Vaibhav] I can see the implementation has changed/evolved a lot here since the last RFC.

I have added some quick comments below and will try to provide more during the weekend.

> [snip]
>
> Why do we need one?
> ===================
>
> There are currently several problems that are impossible to solve within
> the current V4L2 API:
>
> 1) Discovering the various device nodes that are typically created by a
> video board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes,
> framebuffer nodes, input nodes (for e.g. webcam button events or IR
> remotes).
>
> It would be very handy if an application can just open an /dev/v4l/mc0
> node and be able to figure out where all the nodes are, and to be able to
> figure out what the capabilities of the board are (e.g. does it support
> DVB, is the audio going through a loopback cable or is there an alsa
> device, can it do compressed MPEG video, etc. etc.). Currently the
> end-user has no choice but to supply the device nodes manually.
>
[Hiremath, Vaibhav] I am still confused here. Can we take one common use case?
For example, say a video board has one /dev/fb0 and one /dev/video0, along
with one node for the media controller, /dev/v4l/mc0.

How are we interacting or talking to /dev/fb0 through the media controller
node?

I looked into the presentation you created for LPC, I guess, but I am still
not clear on this.

> 2) Some of the newer SoC devices can connect or disconnect internal
> components
> dynamically. As an example, the omap3 can either connect a sensor
> output to a
> CCDC module to a previewer module to a resizer module and finally to
> a capture
> device node. But it is also possible to capture the sensor output
> directly
> after the CCDC module. The previewer can get its input from another
> video
> device node and output either to the resizer or to another video
> capture
> device node. The same is true for the resizer, that too can get its
> input from
> a device node.
>
> So there are lots of connections here that can be modified at will
> depending
> on what the application wants. And in real life there are even more
> links than
> I mentioned here. And it will only get more complicated in the
> future.
>
> All this requires that there has to be a way to connect and
> disconnect parts
> of the internal topology of a video board at will.
>
> 3) There is increasing demand to be able to control e.g. sensors or
> video
> encoders/decoders at a much more precise manner. Currently the V4L2
> API
> provides only limited support in the form of a set of controls. But
> when
> building a high-end camera the developer of the application
> controlling it
> needs very detailed control of the sensor and image processing
> devices.
> On the other hand, you do not want to have all this polluting the
> V4L2 API
> since there is absolutely no sense in exporting this as part of the
> existing
> controls, or to allow for a large number of private ioctls.
>
> What would be a good solution is to give access to the various
> components of
> the board and allow the application to send component-specific
> ioctls or
> controls to it. Any application that will do this is by default
> tailored to
> that board. In addition, none of these new controls or commands will
> pollute
> the namespace of V4L2.
>
> A media controller can solve all these problems: it will provide a
> window into
> the architecture of the board and all its device nodes. Since it is
> already
> enumerating the nodes and components of the board and how they are
> linked up,
> it is only a small step to also use it to change links and to send
> commands to
> specific components.
>
>
> Restrictions
> ============
>
> 1) These API additions should not affect existing applications.
>
> 2) The new API should not attempt to be too smart. All it should do
> it to give
> the application full control of the board and to provide some
> initial support
> for existing applications. E.g. in the case of omap3 you will have
> an initial
> setup where the sensor is connected through all components to a
> capture device
> node. This will provide sufficient support for a standard webcam
> application,
> but if you want something more advanced then the application will
> have to set
> it up explicitly. It may even be too complicated to use the resizer
> in this
> case, and instead only a few resolutions optimal for the sensor are
> reported.
>
> 3) Provide automatic media controller support for drivers that do
> not create
> one themselves. This new functionality should become available to
> all drivers,
> not just new ones. Otherwise it will take a long time before
> applications like
> MythTV will start to use it.
>
>
> Implementation
> ==============
>
> Many of the building blocks needed to implement a media controller
> already
> exist: the v4l core can easily be extended with a media controller
> type, the
> media controller device node can be held by the v4l2_device top-
> level struct,
> and to represent an internal component we have the v4l2_subdev
> struct.
>
> The core v4l2_subdev ops already has a generic 'ioctl' callback that
> can be
> used by the media controller to pass custom ioctls to the subdev.
>
> What is missing is that device nodes should be registered with
> struct
> v4l2_device. All that is needed to do that is to ensure that when
> registering
> a video node you always pass a pointer to the v4l2_device struct. A
> lot of
> drivers do this already. In addition one should also be able to
> register
> non-video device nodes (alsa, fb, etc.), so that they can be
> enumerated.
>
> Since sub-devices are already registered with the v4l2_device there
> is not
> much to do there.
>
> Topology
> --------
>
> The topology is represented by entities. Each entity has 0 or more
> inputs and
> 0 or more outputs. Each input or output can be linked to 0 or more
> possible
> outputs or inputs from other entities. This is either mutually
> exclusive
> (i.e. an input/output can be connected to only one output/input at a
> time)
> or it can be connected to multiple inputs/outputs at the same time.
>
> A device node is a special kind of entity with just one input
> (capture node)
> or output (video node). It may have both if it does some in-place
> operation.
>
> Each entity has a unique numerical ID (unique for the board). Each
> input or
> output has a unique numerical ID as well, but that ID is only unique
> to the
> entity. To specify a particular input or output of an entity one
> would give
> an <entity ID, input/output ID> tuple.
>
> When enumerating over entities you will need to retrieve at least the
> following information (a rough struct sketch follows the list):
>
> - type (subdev or device node)
> - entity ID
> - entity description (can be quite long)
> - subtype (what sort of device node or subdev is it?)
> - capabilities (what can the entity do? Specific to the subtype and
> more
> precise than the v4l2_capability struct which only deals with the
> board
> capabilities)
> - additional subtype-specific data (union)
> - number of inputs and outputs. The input IDs should probably just
> be a value
> of 0 - (#inputs - 1) (ditto for output IDs).
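> 
> All struct, field and ioctl names below are illustrative only, not a
> final API:
> 
> struct media_entity_desc {
> 	__u32 id;		/* unique per board */
> 	__u32 type;		/* device node or subdev */
> 	__u32 subtype;		/* what sort of device node or subdev */
> 	char  desc[64];		/* human-readable description */
> 	__u32 capabilities;	/* subtype-specific capabilities */
> 	__u16 num_inputs;
> 	__u16 num_outputs;
> 	union {			/* additional subtype-specific data */
> 		struct { __u32 major, minor; } devnode;
> 		__u32 raw[8];
> 	} u;
> };
> 
> struct media_entity_desc entity = { .id = 0 };
> ioctl(mc, VIDIOC_MC_ENUM_ENTITIES, &entity);	/* hypothetical ioctl */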
>
> Another ioctl is needed to obtain the list of possible links that
> can be made
> for each input and output.
>
> It is good to realize that most applications will just enumerate
> e.g. capture
> device nodes. Few applications will do a full scan of the whole
> topology.
> Instead they will just specify the unique entity ID and if needed
> the
> input/output ID as well. These IDs are declared in the board or sub-
> device
> specific header.
>
> A full enumeration will typically only be done by some sort of
> generic
> application like v4l2-ctl.
>
> In addition, most entities will have only one or two inputs/outputs
> at most.
> So we might optimize the data structures for this. We probably will
> have to
> see how it goes when we implement it.
>
> We obviously need ioctls to make and break links between entities.
> It
> shouldn't be hard to do this.
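> 
> Such an ioctl could look something like this (a sketch only; the names
> are illustrative, and the entity IDs would come from the board header):
> 
> struct media_link_desc {
> 	__u32 src_entity, src_output;
> 	__u32 sink_entity, sink_input;
> 	__u32 flags;			/* e.g. MC_LINK_ENABLED */
> };
> 
> /* connect the resizer output to the capture device node */
> struct media_link_desc link = {
> 	.src_entity  = OMAP3_ENT_RESIZER,	.src_output = 0,
> 	.sink_entity = OMAP3_ENT_VIDEO_CAP,	.sink_input = 0,
> 	.flags = MC_LINK_ENABLED,
> };
> ioctl(mc, VIDIOC_MC_S_LINK, &link);	/* hypothetical ioctl */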
>
> Access to sub-devices
> ---------------------
>
> What is a bit trickier is how to select a sub-device as the target
> for ioctls.
> Normally ioctls like S_CTRL are sent to a /dev/v4l/videoX node and
> the driver
> will figure out which sub-device (or possibly the bridge itself)
> will receive
> it. There is no way of hijacking this mechanism to e.g. specify a
> specific
> entity ID without also having to modify most of the v4l2 structs by
> adding
> such an ID field. But with the media controller we can at least
> create an
> ioctl that specifies a 'target entity' that will receive any non-
> media
> controller ioctl. Note that for now we only support sub-devices as
> the target
> entity.
>
> The idea is this:
>
> // Select a particular target entity
> ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> // Send S_FMT directly to that entity
> ioctl(mc, VIDIOC_S_FMT, &fmt);
> // Send a custom ioctl to that entity
> ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
>
> This requires no API changes and is very easy to implement. One
> problem is
> that this is not thread-safe. We can either supply some sort of
> locking
> mechanism, or just tell the application programmer to do the locking
> in the
> application. I'm not sure what is the correct approach here. A
> reasonable
> compromise would be to store the target entity as part of the
> filehandle.
> So you can open the media controller multiple times and each handle
> can set
> its own target entity.
[Hiremath, Vaibhav] I am not sure whether you thought of this or not, but I am trying to fit the standalone memory-to-memory drivers, which we were discussing some time back, into this topology.

I believe we can achieve this functionality with the file handle approach. For example:

The user will open the media controller and set the target entity; then, with the help of custom ioctls, we configure all the required parameters anyway.

If you remember, we had gotten stuck on some of the custom configuration parameters, like filter coefficients: how do we configure them? Can we standardize them?

I think here we can make use of a custom ioctl that goes through the media controller directly to the sub-device. For example:

// Send a custom ioctl to that entity
ioctl(mc, VIDIOC_OMAP3_S_FILTCOEFF, &coeff);

Is that correct?

All the rest of the configuration, like the input/output pixel configuration, will still happen through standard ioctls.

We can achieve this by adding one more link.

One of the original possible links:

                               Media
                               Processor
                              |         |
output device --> Decoder -->| Resizer |--> memory/encoder
                              |         |

New link required to be supported:

                               Media
                               Processor
                              |         |
output device (memory) ----->| Resizer |--> memory
                              |         |

Please let me know your thoughts here.

Thanks,
Vaibhav

>
> This also has the advantage that you can have a filehandle
> 'targeted' at a
> resizer and a filehandle 'targeted' at the previewer, etc. If you
> want to use
> the same filehandle from multiple threads, then you have to
> implement locking
> yourself.
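> 
> For example (a sketch only; the entity ID constants are illustrative):
> 
> int fh_rsz = open("/dev/v4l/mc0", O_RDWR);
> int fh_prv = open("/dev/v4l/mc0", O_RDWR);
> __u32 rsz = OMAP3_ENT_RESIZER, prv = OMAP3_ENT_PREVIEWER;
> 
> ioctl(fh_rsz, VIDIOC_S_SUBDEV, &rsz);	/* this handle targets the resizer */
> ioctl(fh_prv, VIDIOC_S_SUBDEV, &prv);	/* this one the previewer */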
>
>
> Open issues
> ===========
>
> In no particular order:
>
> 1) How to tell the application that this board uses an audio
> loopback cable
> to the PC's audio input?
>
> 2) There can be a lot of device nodes in complicated boards. One
> suggestion
> is to only register them when they are linked to an entity (i.e. can
> be
> active). Should we do this or not?
>
> 3) Format and bus configuration and enumeration. Sub-devices are
> connected
> together by a bus. These busses can have different configurations
> that will
> influence the list of possible formats that can be received or sent
> from
> device nodes. This was always pretty straightforward, but if you
> have several
> sub-devices such as scalers and colorspace converters in a pipeline
> then this
> becomes very complex indeed. This is already a problem with soc-
> camera, but
> that is only the tip of the iceberg.
>
> How to solve this problem is something that requires a lot more
> thought.
>
> 4) One interesting idea is to create an ioctl with an entity ID as
> argument
> that returns a timestamp of the frame (audio or video) it is processing.
> That
> would solve not only sync problems with alsa, but also when reading
> a stream
> in general (read/write doesn't provide for a timestamp as streaming
> I/O does).
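> 
> A sketch of what that could look like (names illustrative only):
> 
> struct media_entity_timestamp {
> 	__u32 entity;		/* entity ID, filled in by the application */
> 	__u32 frame_seq;	/* returned: sequence number of that frame */
> 	struct timeval ts;	/* returned: timestamp of that frame */
> };
> 
> struct media_entity_timestamp tstamp = { .entity = OMAP3_ENT_RESIZER };
> ioctl(mc, VIDIOC_MC_G_TIMESTAMP, &tstamp);	/* hypothetical ioctl */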
>
> 5) I propose that we return -ENOIOCTLCMD when an ioctl isn't
> supported by the
> media controller. Much better than -EINVAL that is currently used in
> V4L2.
>
> 6) For now I think we should leave enumerating input and output
> connectors
> to the bridge drivers (ENUMINPUT/ENUMOUTPUT). But as a future step
> it would
> make sense to also enumerate those in the media controller. However,
> it is
> not entirely clear what the relationship will be between that and
> the
> existing enumeration ioctls.
>
> --
> Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* Re: RFCv2: Media controller proposal
  2009-09-10 23:08             ` Karicheri, Muralidharan
@ 2009-09-11  6:20               ` Hans Verkuil
  2009-09-11  6:29                 ` Hiremath, Vaibhav
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11  6:20 UTC (permalink / raw)
  To: Karicheri, Muralidharan; +Cc: Patrick Boettcher, Linux Media Mailing List

On Friday 11 September 2009 01:08:30 Karicheri, Muralidharan wrote:
> 
> Hans,
> 
> Thanks for your reply..
> >>
> >>
> >> What you mean by controlling the board?
> >
> >In general: the media controller can do anything except streaming. However,
> >that is an extreme position and in practice all the usual ioctls should
> >remain supported by the video device nodes.
> >
> >> We have currently ported the DMxxx VPBE display drivers to 2.6.31 (not
> >> submitted yet to mainline). In our current implementation, the output and
> >> standard/mode are controlled through sysfs because it is a common
> >> functionality affecting both the v4l and FBDev framebuffer devices.
> >> Traditional applications such as x-windows should be able to stream
> >> video/graphics to the VPBE output. V4l2 applications should be able to
> >> stream video. Both these devices need to know the display parameters such
> >> as frame buffer resolution, field, etc. that are to be configured in the
> >> video or osd layers in VPBE to output frames to the encoder that is
> >> driving the output. So to stream, first the output and mode/standard are
> >> selected using a sysfs command and then the application is started. The
> >> following scenarios are supported by the VPBE display drivers in our internal release:
> >>
> >> 1) Traditional FBDev applications (x-window) can be run using the OSD
> >> device. Allows changing modes/standards at the output using the fbset
> >> command.
> >>
> >> 2) The v4l2 driver doesn't provide s_output/s_std support since it is done
> >> through sysfs.
> >>
> >> 3) Applications that require streaming both graphics and video to the
> >> output use both FBDev and V4l2 devices. So these applications first set
> >> the output and mode/standard using sysfs, before doing io operations with
> >> these devices.
> >
> >I don't understand this approach. I'm no expert on the fb API but as far as
> >I
> >know the V4L2 API allows a lot more precision over the video timings (esp.
> >with
> >the new API you are working on). Furthermore, I assume it is possible to
> >use
> >the DMxxx without an OSD, right?
> 
> 
> Right. That case (2 above) is easily taken care of by the v4l2 device driver. We used the FBDev driver to drive the OSD layer because that way the VPBE can be used by user space applications like x-windows. What is the alternative for this?
> Is there an example of a v4l2 device using OSD-like hardware and running x-windows or another traditional graphics application? I am not aware of any, and the solution seems to be the right one here.
> 
> So the solution we used (case 3) involves FBDev driving the OSD layers and V4L2 driving the video layer.

As usual, ivtv is doing all that. The ivtv driver is the main controller of
the hardware. The ivtvfb driver provides the FB API towards the OSD. The
X driver for the OSD is available here:

http://dl.ivtvdriver.org/xf86-video-ivtv/archive/1.0.x/xf86-video-ivtv-1.0.2.tar.gz

This is the way to handle it.

> 
> >
> >This is very similar to the ivtv and ivtvfb drivers: if the framebuffer is
> >in
> >use, then you cannot change the output standard (you'll get an EBUSY error)
> >through a video device node.
> >
> 
> Do the ivtvfb and ivtv drivers work with the same set of v4l2 sub-devices for output? In our case, the VPBE can work with any sub-device that can accept a BT.656/BT.1120/RGB bus interface. So the FBDev device and the V4L2 device (either as standalone devices or as co-existing devices) should work with the same set of sub-devices. So the question is, how can both these bridge devices work on the same sub-device? If both can work with the same sub-device, then what you say is true and this can be handled. That is the reason we used the sysfs/encoder manager, as explained in my earlier email.

Look at ivtvfb.c (it's in media/video/ivtv). The ivtvfb_init function will just
find any ivtv driver instances and register itself with them. Most of the
hard work is actually done by ivtv and ivtvfb is just the front-end that
implements the FB API. The video and OSD hardware is usually if not always
so intertwined that it should be controlled by one driver, not two.

This way ivtv keeps full control over the sub-devices as well and all output
changes will go to the same encoder, regardless of whether they originated
from the fb or a video device node.
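
In very rough terms the pattern looks like this (a simplified sketch of
the idea, not the exact ivtvfb code; error handling omitted):

static int __init ivtvfb_init(void)
{
	int i;

	/* walk the instances that the main ivtv driver has registered */
	for (i = 0; i < ivtv_cards_active; i++) {
		struct ivtv *itv = ivtv_cards[i];

		if (itv && (itv->v4l2_cap & V4L2_CAP_VIDEO_OUTPUT))
			ivtvfb_init_card(itv);	/* register the fb front-end */
	}
	return 0;
}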

> 
> >That's exactly what you would expect. If the framebuffer isn't used, then
> >you
> >can just use the normal V4L2 API to change the output standard.
> >
> >In practice, I think that you can only change the resolution in the FB API.
> >Not things like the framerate, let alone precise pixelclock, porch and sync
> >widths.
> 
> 
> There are 3 use cases 
> 
> 1) Pure FBDev device driving graphics to VPBE OSD layers -> sub devices -> Display (LCD/TV)
> 
> 	This would require FBDev to load the required v4l2 sub-device (not sure if the FBDev community would like this approach) and use it to drive the output. We will not be able to change the output, but output resolutions and timings can be controlled through the fbset command, which allows you to change pixel clock, porch, sync, etc.

Bad idea. The fb API and framework is not really able to deal with the
complexity of combined video and OSD devices. The v4l2 framework can (esp.
when we have a media controller).
 
> 2) Pure V4L2 device driving video to VPBE video layers -> sub devices
> -> Display (LCD/TV)
> 	- No issues here
> 
> 3) v4l2 and FBDev nodes co-exist. V4l2 drives video and FBDev drives OSD layers, and the combined output -> VPBE -> sub devices -> Display (LCD/TV)
> 	- Not sure which bridge device should load up and manage the sub-devices. If V4l2 manages the sub-devices, how can the FBDev driver set the timings in the current sub-device since it has no knowledge of the v4l2 device and the sub-device it owns/manages?

You should not attempt to artificially separate the two. You can't since both
v4l and fb share the same hardware. You need one v4l driver that will take
care of both and the FB driver just delegates the core OSD low-level work to
the v4l driver.

> 
> >
> >Much better to let the two cooperate: you can use both APIs, but you can't
> >change the resolution in the fb if streaming is going on, and you can't
> >change the output standard of a video device node if that changes the
> >resolution while the framebuffer is in use.
> That is what I mean by use case 3). We can live with the restriction. But the sub-device model currently is v4l2-specific and I am not sure if there is a way the same sub-device can be accessed by both bridge devices. Any help here is appreciated.
> 
> >
> >No need for additional sysfs entries.
> >
> 
> If we can use the sub-devices framework, we won't need sysfs
> 
> >>
> >> There is an encoder manager to which all available encoders register
> >> (using an internally developed interface), and based on commands received
> >> at the Fbdev/sysfs interfaces, the current encoder and the current standard
> >> are selected by the encoder manager. The encoder manager provides an API
> >> to retrieve current timing information from the current encoder. The FBDev
> >> and V4L2 drivers use this API to configure the OSD/video layers for
> >> streaming.
> >>
> >> As you can see, controlling the output/mode is a common function required
> >> for both v4l2 and FBDev devices.
> >>
> >> One way to do this is to modify the encoder manager such that it loads up
> >> the encoder sub-devices. This will allow our customers to migrate to this
> >> driver on the GIT kernel with minimum effort. If the v4l2 display bridge
> >> driver loads up the sub-devices, it will make the FBDev driver useless,
> >> unless the media controller has some way to handle this scenario. Any idea
> >> if the media controller RFC addresses this? I will go over the RFC in
> >> detail, but if you have a ready answer, let me know.
> >
> >I don't think this has anything to do with the media controller. It sounds
> >more like a driver design issue to me.
> >
> Not really :( 
> 
> When we talk about the media controller, it should allow multiple
> streaming nodes to work with the same output or multiple outputs, right? Here an entity can have FB device and V4L2 device nodes, and both should be able
> to stream onto the same output managed by a sub-device.
> 
> /dev/fb0    --> VPBE-OSD0 ->|
> /dev/fb2    --> VPBE-OSD1 ->|-> VPBE -> analog output (NTSC/PAL/1080i/720P)
> /dev/video2 --> VPBE-VID0 ->|        -> digital LCD port
> /dev/video3 --> VPBE-VID1 ->|           (BT.656/BT.1120/VGA/VISA) -> encoders
> 
> The current sub-device framework doesn't seem to work with this configuration.
> Or am I missing something? Is there an example of this implementation in ivtv or on other platforms?

It's no problem as long as the v4l driver remains in control of both the
video and OSD.

Regards,

	Hans

> 
> Murali
> >Regards,
> >
> >	Hans
> >
> >--
> >Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom
> 
> 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* RE: RFCv2: Media controller proposal
  2009-09-10 20:27           ` Hans Verkuil
  2009-09-10 23:08             ` Karicheri, Muralidharan
@ 2009-09-11  6:26             ` Hiremath, Vaibhav
  1 sibling, 0 replies; 57+ messages in thread
From: Hiremath, Vaibhav @ 2009-09-11  6:26 UTC (permalink / raw)
  To: Hans Verkuil, Karicheri, Muralidharan
  Cc: Patrick Boettcher, Linux Media Mailing List


> -----Original Message-----
> From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> owner@vger.kernel.org] On Behalf Of Hans Verkuil
> Sent: Friday, September 11, 2009 1:57 AM
> To: Karicheri, Muralidharan
> Cc: Patrick Boettcher; Linux Media Mailing List
> Subject: Re: RFCv2: Media controller proposal
> 
> On Thursday 10 September 2009 21:19:25 Karicheri, Muralidharan
> wrote:
> > Hans,
> >
> > I haven't gone through the RFC, but thought will respond to the
> below comment.
> >
> > Murali Karicheri
> > Software Design Engineer
> > Texas Instruments Inc.
> > Germantown, MD 20874
> > new phone: 301-407-9583
> > Old Phone : 301-515-3736 (will be deprecated)
> > email: m-karicheri2@ti.com
> >
> > >>>
> > >>> I may be mistaken, but I don't believe soundcards have this
> same
> > >>> complexity are media board.
> > >>
> > >> When I launch alsa-mixer I see 4 input devices where I can
> select 4
> > >> difference sources. This gives 16 combinations which is enough
> for me to
> > >> call it 'complex' .
> > >>
> > >>>> Could entities not be completely addressed (configuration
> ioctls)
> > >>>> through
> > >>>> the mc-node?
> > >>>
> > >>> Not sure what you mean.
> > >>
> > >> Instead of having a device node for each entity, the ioctls for
> each
> > >> entities are done on the media controller-node address an
> entity by ID.
> > >
> > >I definitely don't want to go there. Use device nodes (video, fb,
> alsa,
> > >dvb, etc) for streaming the actual media as we always did and use
> the
> > >media controller for controlling the board. It keeps everything
> nicely
> > >separate and clean.
> > >
> >
> >
> > What you mean by controlling the board?
> 
> In general: the media controller can do anything except streaming.
> However,
> that is an extreme position and in practice all the usual ioctls
> should
> remain supported by the video device nodes.
> 
> > We have currently ported DMxxx VPBE display drivers to 2.6.31 (Not
> submitted yet to mainline). In our current implementation, the
> output and standard/mode are controlled through sysfs because it is
> a common functionality affecting both v4l and FBDev framebuffer
> devices. Traditional applications such as x-windows should be able to
> stream video/graphics to the VPBE output. V4l2 applications should be
> able to stream video. Both these devices need to know the display
> parameters such as frame buffer resolution, field, etc. that are to be
> configured in the video or osd layers in VPBE to output frames to
> the encoder that is driving the output. So to stream, first the
> output and mode/standard are selected using a sysfs command and then
> the application is started. The following scenarios are supported by
> the VPBE display drivers in our internal release:
> >
> > 1) Traditional FBDev applications (x-window) can be run using the OSD
> device. Allows changing modes/standards at the output using the fbset
> command.
> >
> > 2) The v4l2 driver doesn't provide s_output/s_std support since it is
> done through sysfs.
> >
> > 3) Applications that require streaming both graphics and video to
> the output use both FBDev and V4l2 devices. So these applications
> first set the output and mode/standard using sysfs, before doing io
> operations with these devices.
> 
> I don't understand this approach. I'm no expert on the fb API but as
> far as I
> know the V4L2 API allows a lot more precision over the video timings
> (esp. with
> the new API you are working on). Furthermore, I assume it is
> possible to use
> the DMxxx without an OSD, right?
> 
> This is very similar to the ivtv and ivtvfb drivers: if the
> framebuffer is in
> use, then you cannot change the output standard (you'll get an EBUSY
> error)
> through a video device node.
> 
[Hiremath, Vaibhav] The framebuffer will always be in use until the FBIO_BLANK ioctl is called.

> That's exactly what you would expect. If the framebuffer isn't used,
> then you
> can just use the normal V4L2 API to change the output standard.
> 
> In practice, I think that you can only change the resolution in the
> FB API.
> Not things like the framerate, let alone precise pixelclock, porch
> and sync
> widths.
> 
> Much better to let the two cooperate: you can use both APIs, but you
> can't
> change the resolution in the fb if streaming is going on, and you
> can't
> change the output standard of a video device node if that changes
> the
> resolution while the framebuffer is in use.
> 
[Hiremath, Vaibhav] To overcome this we brought in and rely on the SYSFS interface; the same is applicable to OMAP devices.

We are using the SYSFS interface for all common features, like standard/output selection, etc.

I believe the media controller will play some role here.

Thanks,
Vaibhav 

> No need for additional sysfs entries.
> 
> >
> > There is an encoder manager to which all available encoders
> register (using an internally developed interface) and based on
> commands received at the Fbdev/sysfs interfaces, the current encoder is
> selected by the encoder manager and the current standard is selected.
> The encoder manager provides an API to retrieve current timing
> information from the current encoder. The FBDev and V4L2 drivers use
> this API to configure the OSD/video layers for streaming.
> >
> > As you can see, controlling output/mode is a common function
> required for both v4l2 and FBDev devices.
> >
> > One way to do this is to modify the encoder manager such that it loads
> up the encoder sub-devices. This will allow our customers to migrate
> to this driver on the GIT kernel with minimum effort. If the v4l2 display
> bridge driver loads up the sub-devices, it will make the FBDev driver
> useless unless the media controller has some way to handle this
> scenario. Any idea if the media controller RFC addresses this? I will go
> over the RFC in detail, but if you have a ready answer, let me
> know.
> 
> I don't think this has anything to do with the media controller. It
> sounds
> more like a driver design issue to me.
> 
> Regards,
> 
> 	Hans
> 
> --
> Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* RE: RFCv2: Media controller proposal
  2009-09-11  6:20               ` Hans Verkuil
@ 2009-09-11  6:29                 ` Hiremath, Vaibhav
  0 siblings, 0 replies; 57+ messages in thread
From: Hiremath, Vaibhav @ 2009-09-11  6:29 UTC (permalink / raw)
  To: Hans Verkuil, Karicheri, Muralidharan
  Cc: Patrick Boettcher, Linux Media Mailing List


> -----Original Message-----
> From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> owner@vger.kernel.org] On Behalf Of Hans Verkuil
> Sent: Friday, September 11, 2009 11:51 AM
> To: Karicheri, Muralidharan
> Cc: Patrick Boettcher; Linux Media Mailing List
> Subject: Re: RFCv2: Media controller proposal
> 
> On Friday 11 September 2009 01:08:30 Karicheri, Muralidharan wrote:
> >
> > Hans,
> >
> > Thanks for your reply..
> > >>
> > >>
<snip>

> >
> > Right. That case (2 above) is easily taken care by v4l2 device
> driver. We used FBDev driver to drive OSD Layer because that way
> VPBE can be used by user space applications like x-windows? What is
> the alternative for this?
> > Is there a example v4l2 device using OSD like hardware and running
> x-windows or other traditional graphics application? I am not aware
> of any and the solution seems to be the right one here.
> >
> > So the solution we used (case 3)involves FBDev to drive the OSD
> layers and V4L2 to drive the video layer.
> 
> As usual, ivtv is doing all that. The ivtv driver is the main
> controller of
> the hardware. The ivtvfb driver provides the FB API towards the OSD.
> The
> X driver for the OSD is available here:
> 
> http://dl.ivtvdriver.org/xf86-video-ivtv/archive/1.0.x/xf86-video-
> ivtv-1.0.2.tar.gz
> 
> This is the way to handle it.
> 
> >
> > >
> > >This is very similar to the ivtv and ivtvfb drivers: if the
> framebuffer is
> > >in
> > >use, then you cannot change the output standard (you'll get an
> EBUSY error)
> > >through a video device node.
> > >
> >
> > Do the ivtvfb and ivtv drivers work with the same set of v4l2 sub-
> > devices for output? In our case, the VPBE can work with any sub-device
> > that can accept a BT.656/BT.1120/RGB bus interface. So the FBDev
> > device and the V4L2 device (either as standalone devices or as
> > co-existing devices) should work with the same set of sub-devices. So
> > the question is, how can both these bridge devices work on the same
> > sub-device? If both can work with the same sub-device, then what you
> > say is true and this can be handled. That is the reason we used the
> > sysfs/encoder manager, as explained in my earlier email.
> 
> Look at ivtvfb.c (it's in media/video/ivtv). The ivtvfb_init
> function will just
[Hiremath, Vaibhav] I think our mails crossed each other.

Interesting, and something new for me. Let me understand the implementation here first, then I can provide some comments on this.

Thanks,
Vaibhav

> find any ivtv driver instances and register itself with them. Most
> of the
> hard work is actually done by ivtv and ivtvfb is just the front-end
> that
> implements the FB API. The video and OSD hardware is usually if not
> always
> so intertwined that it should be controlled by one driver, not two.
> 
> This way ivtv keeps full control over the sub-devices as well and
> all output
> changes will go to the same encoder, regardless of whether they
> originated
> from the fb or a video device node.
> 
> >
> > >That's exactly what you would expect. If the framebuffer isn't
> used, then
> > >you
> > >can just use the normal V4L2 API to change the output standard.
> > >
> > >In practice, I think that you can only change the resolution in
> the FB API.
> > >Not things like the framerate, let alone precise pixelclock,
> porch and sync
> > >widths.
> >
> >
> > There are 3 use cases
> >
> > 1) Pure FBDev device driving graphics to VPBE OSD layers -> sub
> devices -> Display (LCD/TV)
> >
> > 	This would require FBDev to load the required v4l2 sub-device
> > (not sure if the FBDev community would like this approach) and use it
> > to drive the output. We will not be able to change the output, but
> > output resolutions and timings can be controlled through the fbset
> > command, which allows you to change pixel clock, porch, sync, etc.
> 
> Bad idea. The fb API and framework is not really able to deal with
> the
> complexity of combined video and OSD devices. The v4l2 framework can
> (esp.
> when we have a media controller).
> 
> > 2) Pure V4L2 device driving video to VPBE video layers -> sub
> > devices -> Display (LCD/TV)
> > 	- No issues here
> >
> > 3) v4l2 and FBDev nodes co-exist. V4l2 drives video and FBDev
> > drives OSD layers, and the combined output -> VPBE -> sub devices ->
> > Display (LCD/TV)
> > 	- Not sure which bridge device should load up and manage the
> > sub-devices. If V4l2 manages the sub-devices, how can the FBDev driver
> > set the timings in the current sub-device since it has no knowledge
> > of the v4l2 device and the sub-device it owns/manages?
> 
> You should not attempt to artificially separate the two. You can't
> since both
> v4l and fb share the same hardware. You need one v4l driver that
> will take
> care of both and the FB driver just delegates the core OSD low-level
> work to
> the v4l driver.
> 
> >
> > >
> > >Much better to let the two cooperate: you can use both APIs, but
> you can't
> > >change the resolution in the fb if streaming is going on, and you
> can't
> > >change the output standard of a video device node if that changes
> the
> > >resolution while the framebuffer is in use.
> > That is what I mean by use case 3). We can live with the
> > restriction. But the sub-device model currently is v4l2-specific and I
> > am not sure if there is a way the same sub-device can be accessed by
> > both bridge devices. Any help here is appreciated.
> >
> > >
> > >No need for additional sysfs entries.
> > >
> >
> > If we can use the sub-devices framework, we won't need sysfs
> >
> > >>
> > >> There is an encoder manager to which all available encoders
> > >> register (using an internally developed interface) and based on
> > >> commands received at the Fbdev/sysfs interfaces, the current encoder
> > >> is selected by the encoder manager and the current standard is
> > >> selected. The encoder manager provides an API to retrieve current
> > >> timing information from the current encoder. The FBDev and V4L2
> > >> drivers use this API to configure the OSD/video layers for streaming.
> > >>
> > >> As you can see, controlling the output/mode is a common function
> > >> required for both v4l2 and FBDev devices.
> > >>
> > >> One way to do this is to modify the encoder manager such that it
> > >> loads up the encoder sub-devices. This will allow our customers to
> > >> migrate to this driver on the GIT kernel with minimum effort. If the
> > >> v4l2 display bridge driver loads up the sub-devices, it will make the
> > >> FBDev driver useless, unless the media controller has some way to
> > >> handle this scenario. Any idea if the media controller RFC addresses
> > >> this? I will go over the RFC in detail, but if you have a ready
> > >> answer, let me know.
> > >
> > >I don't think this has anything to do with the media controller.
> It sounds
> > >more like a driver design issue to me.
> > >
> > Not really :(
> >
> > When we talk about media controller, it should allow multiple
> > streaming nodes to work with the same output or multiple outputs
> right? Here an entity can have FB device and V4L2 device nodes and
> both should be able
> > to stream onto the same output managed by a sub device.
> >
> > /dev/fb0    --> VPBE-OSD0 ->|
> > /dev/fb2    --> VPBE-OSD1 ->|-> VPBE -> analog output (NTSC/PAL/1080i/720P)
> > /dev/video2 --> VPBE-VID0 ->|        -> digital LCD port
> > /dev/video3 --> VPBE-VID1 ->|           (BT.656/BT.1120/VGA/VISA) -> encoders
> >
> > The current sub-device framework doesn't seem to work with this
> > configuration.
> > Or am I missing something? Is there an example of this
> > implementation in ivtv or on other platforms?
> 
> It's no problem as long as the v4l driver remains in control of both
> the
> video and OSD.
> 
> Regards,
> 
> 	Hans
> 
> >
> > Murali
> > >Regards,
> > >
> > >	Hans
> > >
> > >--
> > >Hans Verkuil - video4linux developer - sponsored by TANDBERG
> Telecom
> >
> >
> 
> 
> 
> --
> Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* Re: RFCv2: Media controller proposal
  2009-09-11  6:16 ` Hiremath, Vaibhav
@ 2009-09-11  6:35   ` Hans Verkuil
  0 siblings, 0 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11  6:35 UTC (permalink / raw)
  To: Hiremath, Vaibhav; +Cc: linux-media

On Friday 11 September 2009 08:16:34 Hiremath, Vaibhav wrote:
> 
> > -----Original Message-----
> > From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> > owner@vger.kernel.org] On Behalf Of Hans Verkuil
> > Sent: Thursday, September 10, 2009 12:43 PM
> > To: linux-media@vger.kernel.org
> > Subject: RFCv2: Media controller proposal
> >
> > Hi all,
> >
> > Here is the new Media Controller RFC. It is completely rewritten
> > from the
> > original RFC. This original RFC can be found here:
> >
> > http://www.archivum.info/video4linux-list%40redhat.com/2008-
> > 07/00371/RFC:_Add_support_to_query_and_change_connections_inside_a_m
> > edia_device
> >
[Hiremath, Vaibhav] I can see the implementation has changed/evolved a lot here since the last RFC.

Yes it has. The global idea remains the same, but at the time we didn't have
sub-devices and that is (not entirely accidentally) a perfect match for what
we need here.

> I have added some quick comments below and will try to provide more during the weekend.
> 
> > This document will be the basis of the discussions during the
> > Plumbers
> > Conference in two weeks time.
> >
> > Open issue #3 is the main unresolved item, but I hope to come up
> > with something
> > during the weekend.
> >
> > Regards,
> >
> >       Hans
> >
> >
> > RFC: Media controller proposal
> >
> > Version 2.0
> >
> > Background
> > ==========
> >
> > This RFC is a new version of the original RFC that was written in
> > cooperation
> > with and on behalf of Texas Instruments about a year ago.
> >
> > Much work has been done in the past year to put the foundation in
> > place to
> > be able to implement a media controller and now it is time for this
> > updated
> > version. The intention is to discuss this in more detail during this
> > years
> > Plumbers Conference.
> >
> > Although the high-level concepts are the same as in the original
> > RFC, many
> > of the details have changed based on what was learned over the past
> > year.
> >
> > This RFC is based on the original discussions with Manjunath Hadli
> > from TI
> > last year, on discussions during a recent meeting between Laurent
> > Pinchart,
> > Guennadi Liakhovetski and myself, and on recent discussions with
> > Nokia.
> > Thanks to Sakari Ailus for doing an initial review of this RFC.
> >
> > One note regarding terminology: a 'board' is the name I use for the
> > SoC,
> > PCI or USB device that contains the video hardware. Each board has
> > its own
> > driver instance and its own v4l2_device struct. Originally I called
> > it
> > 'device', but that name is already used in too many places.
> >
> >
> > What is a media controller?
> > ===========================
> >
> > In a nutshell: a media controller is a new v4l device node that can
> > be used
> > to discover and modify the topology of the board and to give access
> > to the
> > low-level nodes (such as previewers, resizers, color space
> > converters, etc.)
> > that are part of the topology.
> >
> > It does not do any streaming, that is the exclusive domain of video
> > nodes.
> > It is meant purely for controlling a board as a whole.
> >
> >
> > Why do we need one?
> > ===================
> >
> > There are currently several problems that are impossible to solve
> > within the
> > current V4L2 API:
> >
> > 1) Discovering the various device nodes that are typically created
> > by a video
> > board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes,
> > framebuffer
> > nodes, input nodes (for e.g. webcam button events or IR remotes).
> >
> > It would be very handy if an application can just open an
> > /dev/v4l/mc0 node
> > and be able to figure out where all the nodes are, and to be able to
> > figure
> > out what the capabilities of the board are (e.g. does it support
> > DVB, is the
> > audio going through a loopback cable or is there an alsa device, can
> > it do
> > compressed MPEG video, etc. etc.). Currently the end-user has no
> > choice but to
> > supply the device nodes manually.
> >
> [Hiremath, Vaibhav] I am still confused here. Can we take one common use case? For example, say a video board has one /dev/fb0 and one /dev/video0, and along with that we have one media controller node, /dev/v4l/mc0.
> 
> How do we interact or talk to /dev/fb0 through the media controller node?
> 
> I looked into the presentation you created for LPC, I guess, but I am still not clear on this.

The media controller will just tell the application that there is a framebuffer
device and where that node can be found in /dev. In addition, it will show how
it is connected to some sub-device and possibly you can dynamically connect it
to another sub-device instead.

To access the actual framebuffer you still need to go to fbX. That will never
change. The media controller provides the high-level control you need to hook
an OSD up to different outputs for example.

This also means that the v4l driver should have knowledge of (and probably
implement) the OSD. See also the RFC thread with Murali.
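
As a sketch from the application side (reusing the illustrative
enumeration ioctl and struct from earlier in this thread; the MC_*
constants are equally hypothetical):

struct media_entity_desc ent;
int i;

for (i = 0; ; i++) {
	ent.id = i;
	if (ioctl(mc, VIDIOC_MC_ENUM_ENTITIES, &ent) < 0)
		break;
	if (ent.type == MC_ENTITY_DEVNODE && ent.subtype == MC_DEVNODE_FB)
		printf("OSD framebuffer at device %u:%u\n",
		       ent.u.devnode.major, ent.u.devnode.minor);
}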
 

<snip>

> > The idea is this:
> >
> > // Select a particular target entity
> > ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> > // Send S_FMT directly to that entity
> > ioctl(mc, VIDIOC_S_FMT, &fmt);
> > // Send a custom ioctl to that entity
> > ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
> >
> > This requires no API changes and is very easy to implement. One
> > problem is
> > that this is not thread-safe. We can either supply some sort of
> > locking
> > mechanism, or just tell the application programmer to do the locking
> > in the
> > application. I'm not sure what is the correct approach here. A
> > reasonable
> > compromise would be to store the target entity as part of the
> > filehandle.
> > So you can open the media controller multiple times and each handle
> > can set
> > its own target entity.

> [Hiremath, Vaibhav] I am not sure whether you thought of this or not, but I am trying to fit the standalone memory-to-memory drivers, which we were discussing some time back, into this topology.
> 
> I believe we can achieve this functionality with the file handle approach. For example:
> 
> The user will open the media controller and set the target entity; then, with the help of custom ioctls, we configure all the required parameters anyway.
> 
> If you remember, we had gotten stuck on some of the custom configuration parameters, like filter coefficients: how do we configure them? Can we standardize them?
> 
> I think here we can make use of a custom ioctl that goes through the media controller directly to the sub-device. For example:
> 
> // Send a custom ioctl to that entity
> ioctl(mc, VIDIOC_OMAP3_S_FILTCOEFF, &coeff);
> 
> Is that correct?

That's exactly the idea.

> All the rest of the configuration, like the input/output pixel configuration, will still happen through standard ioctls.
> 
> We can achieve this by adding one more link.
> 
> One of the original possible links:
> 
>                                Media
>                                Processor
>                               |         |
> output device --> Decoder -->| Resizer |--> memory/encoder
>                               |         |
> 
> New link required to be supported:
> 
>                                Media
>                                Processor
>                               |         |
> output device (memory) ----->| Resizer |--> memory
>                               |         |
> 
> Please let me know your thoughts here.

Yes, that's correct. Basically you can hook up the output of the resizer to
either another sub-device (e.g. a colorspace converter) or to a video device
node to send the output to memory. It's just a matter of manipulating the
links.

Note that the media processor concept is no longer there. It has been replaced
by the sub-device concept. Pretty much the same purpose in the end, though.

Regards,

	Hans


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* Re: RFCv2: Media controller proposal
  2009-09-10 20:27   ` Devin Heitmueller
@ 2009-09-11 12:59     ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 12:59 UTC (permalink / raw)
  To: Devin Heitmueller; +Cc: Hans Verkuil, linux-media

On Thu, 10 Sep 2009 16:27:20 -0400,
Devin Heitmueller <dheitmueller@kernellabs.com> wrote:

> On Thu, Sep 10, 2009 at 4:20 PM, Mauro Carvalho
> Chehab<mchehab@infradead.org> wrote:
> > In fact, this can already be done by using the sysfs interface. the current
> > version of v4l2-sysfs-path.c already enumerates the associated nodes to
> > a /dev/video device, by just navigating at the already existing device
> > description nodes at sysfs. I hadn't tried yet, but I bet that a similar kind
> > of topology can be obtained from a dvb device (probably, we need to do some
> > adjustments).
> 
> For the audio case, I did some digging into this a bit and it's worth
> noting that this behavior varies by driver (at least on USB).  In some
> cases, the parent points to the USB device, in other cases it points
> to the USB interface.  My original thought was to pick one or the
> other and make the various drivers consistent, but even that is a
> challenge since in some cases the audio device was provided by
> snd-usb-audio (which has no knowledge of the v4l subsystem).

We may consider adding a quirk to snd-usb-audio for em28xx devices, in order
to create the proper sysfs nodes.

Cheers,
Mauro


* Re: RFCv2: Media controller proposal
  2009-09-10 21:35   ` Hans Verkuil
@ 2009-09-11 15:13     ` Mauro Carvalho Chehab
  2009-09-11 15:46       ` Devin Heitmueller
  2009-09-11 19:08       ` Hans Verkuil
  0 siblings, 2 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 15:13 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

On Thu, 10 Sep 2009 23:35:52 +0200,
Hans Verkuil <hverkuil@xs4all.nl> wrote:

> > First of all, a generic comment: you enumerated on your RFC several needs that
> > you expect to be solved with a media controller, but you didn't mention what
> > userspace API will be used to solve it (e. g. what ioctls, sysfs interfaces,
> > etc). As this is missing, I'm adding a few notes about how this can be
> > implemented. For example, as I've already pointed when you sent the first
> > proposal and at LPC, sysfs is the proper kernel API for enumerating things.
> 
> I hate sysfs with a passion. All of the V4L2 API is designed around ioctls,
> and so is the media controller.
> 
> Note that I did not go into too much implementation detail in this RFC. The
> best way to do that is by trying to implement it. Only after implementing it
> for a few drivers will you get a real feel of what works and what doesn't.
> 
> Of course, whether to use sysfs or ioctls is something that has to be designed
> beforehand.

> > > 1) Discovering the various device nodes that are typically created by a video
> > > board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
> > > nodes, input nodes (for e.g. webcam button events or IR remotes).
> > 
> > In fact, this can already be done by using the sysfs interface. the current
> > version of v4l2-sysfs-path.c already enumerates the associated nodes to
> > a /dev/video device, by just navigating at the already existing device
> > description nodes at sysfs. I hadn't tried yet, but I bet that a similar kind
> > of topology can be obtained from a dvb device (probably, we need to do some
> > adjustments).
> 
> sysfs is crap. It's a poorly documented public API that is hell to use. Take
> a device node entity as enumerated by the media controller: I want to provide
> the application with information like the sort of node (alsa, fb, v4l, etc),
> how to access it (alsa card nr or major/minor), a description ("Captured MPEG
> stream"), possibly some capabilities and addition data. With an ENUM ioctl
> you can just call it. With sysfs you have to open/read/close files for each of
> these properties, walk through the tree to find related alsa/v4l/fb devices,

sysfs is a hierarchical description of the kernel objects, used to describe
devices, buses, sub-devices, etc. Navigating it, reading from it, etc. is very
fast, since it is done in RAM, as described in:

	http://lwn.net/Articles/31185/

Unfortunately, it was designed after the V4L2 API; otherwise, probably several
things in the API would be different.

Of course, we need to properly document the media controller sysfs nodes in V4L2.

> and in drivers you must write a hell of a lot of code just to make those sysfs
> nodes. It's an uncontrollable mess.

Huh? How much sysfs code is currently present in the drivers? None. Yet you
can already enumerate several things, as shown with v4l2-sysfs-path, since the
V4L2 core already has the code implementing it. Of course, if you want to have
a customized set of nodes for changing some attributes, you'll need to tell
sysfs the name of the attribute and provide a get/set pair of methods. Nothing
different from what we currently have. As a matter of fact, it is even simpler,
since you don't need to add an enum method.

So, it is the proper kernel API for the objectives you described. Doing it via
ioctls will duplicate things, since the sysfs stuff will still be there, and
will use the wrong API.

So, we should use sysfs for media controller.

> > The big missing component is a userspace library that will properly return the
> > device components to the applications. Maybe we also need to make some
> > adjustments to the sysfs nodes to represent all that is needed.
> 
> So we write a userspace library that collects all that information? So that
> has to:
> 
> 1) walk through the sysfs tree trying to find all the related parts of the
> media board.
> 2) open the property that we are interested in.
> 3) attempt to read the property's value.
> 4) the driver will then copy that value into a buffer that is returned to the
> application, usually through a sprintf() call.
> 5) the library than uses atol() to convert the string back to an integer and
> stores the result in a struct.
> 6) repeat for all properties.
> 
> Isn't that the same as calling an enum ioctl() with a struct pointer? Except
> a zillion times slower and more obfuscated?

You'll need a similar process with an enum ioctl to get each value. Also, by
using sysfs, it is easy to write udev rules such that, once a new sysfs node
is created, some action is started, for example setting the board to the
needed configuration.
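
For example, a rule along these lines (purely illustrative, including
the script name) could trigger a setup script whenever a media
controller node appears:

# /etc/udev/rules.d/90-media-controller.rules (hypothetical)
SUBSYSTEM=="media", ACTION=="add", KERNEL=="mc*", RUN+="/usr/sbin/mc-setup %k"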

> > The best would be to create a /sys/class/media node, and have the
> > media controllers under it. So, mc0 will be at /sys/class/media/mc0.
> 
> Why? It's a device. Devices belong in /dev. That's where applications and users
> look for devices. Not in sysfs.

A device is something that does some sort of input/output transfer,
not something that controls the config of other devices. A media controller is
not a device. It is just a convenient kernel object to describe a group of
devices. So, we should never create a /dev for it.

> You should be able to use this even without sysfs being mounted (on e.g. an
> embedded system). Another reason not to use sysfs, BTW.

If the embedded system needs it, it will need to mount it. What's wrong with that?

> > > All this requires that there has to be a way to connect and disconnect parts
> > > of the internal topology of a video board at will.
> > 
> > We should design this with care, since each change to the internal topology may
> > create/delete devices.
> 
> No, devices aren't created or deleted. Only links between devices.

I think that there are some cases where devices are created/deleted. For
example, on some hardware you have blocks that allow you to have either 4 SD
video inputs or 1 HD video input. So, if you change the type of input, you'll
end up creating or deleting devices.

> > If you make such changes to the topology, udev will need to
> > delete the old devices and create the new ones. 
> 
> udev is not involved at all. Exception: open issue #2 suggests that we
> dynamically register device nodes when they are first linked to some source
> or sink. That would involve udev.
> 
> All devices are set up when the board is configured. But the links between
> them can be changed. This is nothing more than bringing the board's block
> diagram to life: each square of the diagram (video device node, resizer, video
> encoder or decoder) is a v4l2-subdev with inputs and outputs. And in some cases
> you can change links dynamically (in effect this will change a mux register).

See above. If you're grouping 4 A/D blocks into one A/D for handling HD, you're
doing more than just changing links, since the HD device will be just one
device: one STD, one video input mux, one audio input mux, etc.

> > This will happen on separate 
> > threads and may cause locking issues at the device, especially since you can be
> > modifying several components at the same time (being even possible to do it on
> > separate threads).
> 
> This is definitely not something that should be allowed while streaming. I
> would like to hear from e.g. TI whether this could be a problem or not. I
> suspect that it isn't a problem unless streaming is in progress.

Even when streaming, provided that you don't touch the IC blocks in use, it
should be possible to reconfigure the unused parts. It is just a matter of
having the right resource locks in the driver.

> > I've seen some high-end core network routers that implement topology changes
> > in an interesting way: any changes are not immediately applied at the node,
> > but are stored in a file, where the configuration can be changed anytime.
> > However, the topology changes only happen after giving a commit command.
> > After the commit, it validates the new config and applies it atomically
> > (i.e. either all changes are applied or none), to avoid bad effects that
> > intermediate changes could cause.
> > 
> > As we are in kernelspace, we need to take care not to create a very complex
> > interface. Yet, the idea of applying the new topology atomically seems
> > interesting.
> 
> I see no need for it. At least, not for any of the current or forthcoming
> devices that I am aware of. Should it ever be needed, then we can introduce a
> 'shadow topology' in the future. You can change the shadow links and when done
> commit it. That wouldn't be too difficult and we can easily prepare for that
> eventuality (e.g. have some 'flags' field available where you can set a SHADOW
> flag in the future).

> > Alsa is facing a similar problem with pinup quirks needed with HD-audio boards.
> > They are proposing a firmware like interface:
> > 	http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-09/msg03198.html
> > 
> > On their case, they are just using request_firmware() for it, at board probing
> > time.
> 
> That seems to be a one-time setup. We need this while the system is up and
> running.

It would be easy to implement something like:

	echo 1 >/sys/class/media/mc0/config_reload

to call request_firmware() and load the new topology. This is enough to have an
atomic operation, and we don't need to implement a shadow config.
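
A minimal sketch of the kernel side (struct media_controller and
mc_apply_topology() are hypothetical; error handling omitted):

static ssize_t config_reload_store(struct device *dev,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	struct media_controller *mc = dev_get_drvdata(dev);
	const struct firmware *fw;

	if (request_firmware(&fw, "mc0-topology.fw", dev) == 0) {
		mc_apply_topology(mc, fw->data, fw->size);
		release_firmware(fw);
	}
	return count;
}
static DEVICE_ATTR(config_reload, S_IWUSR, NULL, config_reload_store);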

> > > What would be a good solution is to give access to the various components of
> > > the board and allow the application to send component-specific ioctls or
> > > controls to it. Any application that will do this is by default tailored to
> > > that board. In addition, none of these new controls or commands will pollute
> > > the namespace of V4L2.
> > 
> > For dynamic configs, I see a problem here: we have already had some troubles
> > in the past where certain webcam drivers worked fine only with a specific
> > (closed source, paid) application, since the driver had a generic interface
> > to allow raw changes to the registers, and those registers weren't
> > documented. That's basically why all direct register accesses are under the
> > advanced debug Kconfig option. So, no matter how we expose such controls,
> > they need to be properly documented to allow open source applications to
> > make use of them.
> 
> Absolutely. I need to clearly state that in my RFC. All the rules still apply:
> no direct register access and all the APIs specific to a particular sub-device
> must be documented properly in the corresponding public header. Everyone must
> be able to use it, not just closed source applications.

Fine.

> > > The idea is this:
> > > 
> > > // Select a particular target entity
> > > ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
> > > // Send S_FMT directly to that entity
> > > ioctl(mc, VIDIOC_S_FMT, &fmt);
> > > // Send a custom ioctl to that entity
> > > ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
> > > 
> > > This requires no API changes and is very easy to implement.
> > 
> > Huh? This is an API change. 
> 
> No, this all goes through the media controller, which does not affect the
> existing API that goes through a v4l device node.

Ok. Yet, you're creating a new API (the mc API). So, it is a V4L2 API change ;)

As I said before, sysfs is the way a media controller should be coded, since it
is not really a device, but a kernel object.

So, if you need to enable a histogram on an omap3 video device, you could do something like:

echo 1 >/sys/class/media/mc0/video:vin0/histogram

> > Also, in the above particular case, I'm assuming that you want to just change
> > the format of a subdevice specified at the first ioctl, and call a new ioctl
> > for it, right?
> 
> Hmm, I knew I should have made a more realistic example. I just took a random
> ioctl and S_FMT isn't the best one to pick. Forget that one, I've removed it
> from the RFC.

Ok.

> > You'll need to specify the API for the two new ioctls, specify in the API specs
> > how this is supposed to work, and maybe add some new return errors that will
> > need to be reflected inside the code.
> 
> VIDIOC_S_SUBDEV is part of the new media controller API, but
> VIDIOC_OMAP3_G_HISTOGRAM would be an ioctl that is specific to the omap3
> histogram sub-device and would typically be defined and documented in a public
> header in e.g. linux/include/linux/omap3-histogram.h. These ioctls are highly
> specific to particular hardware and impossible to make generic.

It is OK to have (properly documented) specific attributes for some boards. If
this is to be commanded via the media controller, that means it will be a sysfs
node. It should be equally OK to implement it as an ioctl on the device node
(/dev/video[0-9]+).

> > What are the needs in this specific case? If there are just a few ioctls, IMO,
> > it is better to have a specific set of ioctls for it.
> 
> I don't follow you. If you are talking about sub-device specific ioctls: you
> can expect to see a lot of them. Statistics gathering, histograms, colorspace
> converters, image processing pipelines, and all of them very difficult to
> generalize. Some things like a colorspace conversion matrix might actually be
> fairly standard, so we could standardize some ioctls. But then only for use
> with colorspace conversion sub-devices accessed through the media controller.

I was not talking about specific attributes, but about the V4L2 API controls
that you may eventually need to "hijack" (using that context-sensitive,
thread-unsafe approach you described).

Anyway, by using sysfs, you won't have any thread issues, since you'll be able
to address each sub-device individually:

echo 1 >/sys/class/media/mc0/video:dsp0/enable_stats



Cheers,
Mauro


* Re: RFCv2: Media controller proposal
  2009-09-11 15:13     ` Mauro Carvalho Chehab
@ 2009-09-11 15:46       ` Devin Heitmueller
  2009-09-11 15:53         ` Hiremath, Vaibhav
  2009-09-11 19:08       ` Hans Verkuil
  1 sibling, 1 reply; 57+ messages in thread
From: Devin Heitmueller @ 2009-09-11 15:46 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hans Verkuil, linux-media

On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
<mchehab@infradead.org> wrote:
> On Thu, 10 Sep 2009 23:35:52 +0200,
> Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
>> > First of all, a generic comment: you enumerated on your RFC several needs that
>> > you expect to be solved with a media controller, but you didn't mention what
>> > userspace API will be used to solve it (e. g. what ioctls, sysfs interfaces,
>> > etc). As this is missing, I'm adding a few notes about how this can be
>> > implemented. For example, as I've already pointed when you sent the first
>> > proposal and at LPC, sysfs is the proper kernel API for enumerating things.
>>
>> I hate sysfs with a passion. All of the V4L2 API is designed around ioctls,
>> and so is the media controller.
>>
>> Note that I did not go into too much implementation detail in this RFC. The
>> best way to do that is by trying to implement it. Only after implementing it
>> for a few drivers will you get a real feel of what works and what doesn't.
>>
>> Of course, whether to use sysfs or ioctls is something that has to be designed
>> beforehand.
>
>> > > 1) Discovering the various device nodes that are typically created by a video
>> > > board, such as: video nodes, vbi nodes, dvb nodes, alsa nodes, framebuffer
>> > > nodes, input nodes (for e.g. webcam button events or IR remotes).
>> >
>> > In fact, this can already be done by using the sysfs interface. The current
>> > version of v4l2-sysfs-path.c already enumerates the nodes associated with
>> > a /dev/video device, by just navigating the already existing device
>> > description nodes in sysfs. I haven't tried it yet, but I bet that a similar kind
>> > of topology can be obtained from a dvb device (probably, we need to do some
>> > adjustments).
>>
>> sysfs is crap. It's a poorly documented public API that is hell to use. Take
>> a device node entity as enumerated by the media controller: I want to provide
>> the application with information like the sort of node (alsa, fb, v4l, etc),
>> how to access it (alsa card nr or major/minor), a description ("Captured MPEG
>> stream"), possibly some capabilities and addition data. With an ENUM ioctl
>> you can just call it. With sysfs you have to open/read/close files for each of
>> these properties, walk through the tree to find related alsa/v4l/fb devices,
>
> sysfs is a hierarchical description of the kernel objects, used to describe
> devices, buses, sub-devices, etc. Navigating it, reading, etc. is very fast,
> since it is done in RAM, as described in:
>
>        http://lwn.net/Articles/31185/
>
> Unfortunately, it was designed after the V4L2 API; otherwise, probably several
> things in the API would be different.
>
> Of course, we need to properly document the media controller sysfs nodes at V4L2.
>
>> and in drivers you must write a hell of a lot of code just to make those sysfs
>> nodes. It's an uncontrollable mess.
>
> Huh? How much sysfs code is currently present in the drivers? Nothing. Yet, you
> can already enumerate several things as shown with v4l2-sysfs-path, since the V4L2 core
> already has the code for implementing it. Of course, if you want to have a
> customized set of nodes for changing some attributes, you'll need to tell
> sysfs the name of the attribute, and have a get/set pair of methods. Nothing
> different from what we currently have. As a matter of fact, it is even simpler,
> since you don't need to add an enum method.
>
> So, it is the proper Kernel API for the objectives you described. Doing it via
> ioctl will duplicate things, since the sysfs stuff will still be there, and
> will use a wrong API.
>
> So, we should use sysfs for media controller.
>
>> > The big missing component is a userspace library that will properly return the
>> > device components to the applications. Maybe we need to do also some
>> > adjustments at the sysfs nodes to represent all that it is needed.
>>
>> So we write a userspace library that collects all that information? So that
>> has to:
>>
>> 1) walk through the sysfs tree trying to find all the related parts of the
>> media board.
>> 2) open the property that we are interested in.
>> 3) attempt to read the property's value.
>> 4) the driver will then copy that value into a buffer that is returned to the
>> application, usually through a sprintf() call.
>> 5) the library then uses atol() to convert the string back to an integer and
>> stores the result in a struct.
>> 6) repeat for all properties.
>>
>> Isn't that the same as calling an enum ioctl() with a struct pointer? Except
>> a zillion times slower and more obfuscated?
>
> You'll need a similar process with enum, to get each value. Also, by using
> sysfs, it is easy to write udev rules so that, once a new sysfs node is created,
> some action is started, like for example setting the board
> to the needed configuration.
>
>> > The best would be to create a /sys/class/media node, and have the
>> > media controllers under it. So, mc0 will be at /sys/class/media/mc0.
>>
>> Why? It's a device. Devices belong in /dev. That's where applications and users
>> look for devices. Not in sysfs.
>
> A device is something that does some sort of input/output transfer,
> not something that controls the config of other devices. A media controller is
> not a device. It is just a convenient kernel object to describe a group of
> devices. So, we should never create a /dev for it.
>
>> You should be able to use this even without sysfs being mounted (on e.g. an
>> embedded system). Another reason, BTW, not to use sysfs.
>
> If the embedded system needs it, it will need to mount it. What's wrong with that?
>
>> > > All this requires that there has to be a way to connect and disconnect parts
>> > > of the internal topology of a video board at will.
>> >
>> > We should design this with care, since each change at the internal topology may
>> > create/delete devices.
>>
>> No, devices aren't created or deleted. Only links between devices.
>
> I think that there are some cases where devices are created/deleted. For
> example, on some hardware, you have some blocks that allow you to have either 4 SD
> video inputs or 1 HD video input. So, if you change the type of input, you'll
> end up creating or deleting devices.
>
>> > If you do such changes at topology, udev will need to
>> > delete the old devices and create the new ones.
>>
>> udev is not involved at all. Exception: open issue #2 suggests that we
>> dynamically register device nodes when they are first linked to some source
>> or sink. That would involve udev.
>>
>> All devices are setup when the board is configured. But the links between
>> them can be changed. This is nothing more than bringing the board's block
>> diagram to life: each square of the diagram (video device node, resizer, video
>> encoder or decoder) is a v4l2-subdev with inputs and outputs. And in some cases
>> you can change links dynamically (in effect this will change a mux register).
>
> See above. If you're grouping 4 A/D blocks into one A/D for handling HD, you're
> doing more than just changing links, since the HD device will be just one
> device: one STD, one video input mux, one audio input mux, etc.
>
>> > This will happen on separate
>> > threads and may cause locking issues at the device, especially since you can be
>> > modifying several components at the same time (being even possible to do it on
>> > separate threads).
>>
>> This is definitely not something that should be allowed while streaming. I
>> would like to hear from e.g. TI whether this could be a problem or not. I
>> suspect that it isn't a problem unless streaming is in progress.
>
> Even when streaming, provided that you don't touch the IC blocks in use, it
> should be possible to reconfigure the unused parts. It is just a matter of
> having the right resource locks at the driver.
>
>> > I've seen some high-end core network routers that implements topology changes
>> > on an interesting way: any changes done are not immediately applied at the
>> > node, but are stored into a file, where the configuration that can be changed
>> > anytime. However, the topology changes only happen after giving a commit
>> > command. After commit, it validates the new config and apply them atomically
>> > (e. g. or all changes are applied or none), to avoid bad effects that
>> > intermediate changes could cause.
>> >
>> > As we are at kernelspace, we need to take care to not create a very complex
>> > interface. Yet, the idea of applying the new topology atomically seems
>> > interesting.
>>
>> I see no need for it. At least, not for any of the current or forthcoming
>> devices that I am aware of. Should it ever be needed, then we can introduce a
>> 'shadow topology' in the future. You can change the shadow links and when done
>> commit it. That wouldn't be too difficult and we can easily prepare for that
>> eventuality (e.g. have some 'flags' field available where you can set a SHADOW
>> flag in the future).
>
>> > Alsa is facing a similar problem with pin setup quirks needed with HD-audio boards.
>> > They are proposing a firmware like interface:
>> >     http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-09/msg03198.html
>> >
>> > On their case, they are just using request_firmware() for it, at board probing
>> > time.
>>
>> That seems to be a one-time setup. We need this while the system is up and
>> running.
>
> It would be easy to implement something like:
>
>        echo 1 >/sys/class/media/mc0/config_reload
>
> to call request_firmware() and load the new topology. This is enough to have an
> atomic operation, and we don't need to implement a shadow config.
>
>> > > What would be a good solution is to give access to the various components of
>> > > the board and allow the application to send component-specific ioctls or
>> > > controls to it. Any application that will do this is by default tailored to
>> > > that board. In addition, none of these new controls or commands will pollute
>> > > the namespace of V4L2.
>> >
>> > For dynamic configs, I see a problem here: we already had some trouble in
>> > the past where certain webcam drivers worked fine only with a specific (closed
>> > source, paid) application, since the driver had a generic interface to allow
>> > raw changes at the registers, and those registers weren't documented. That's
>> > basically why all direct register access are under the advanced debug Kconfig
>> > option. So, no matter how we expose such controls, they need to be properly
>> > documented to allow open source applications to make use of them.
>>
>> Absolutely. I need to clearly state that in my RFC. All the rules still apply:
>> no direct register access and all the APIs specific to a particular sub-device
>> must be documented properly in the corresponding public header. Everyone must
>> be able to use it, not just closed source applications.
>
> Fine.
>
>> > > The idea is this:
>> > >
>> > > // Select a particular target entity
>> > > ioctl(mc, VIDIOC_S_SUBDEV, &entityID);
>> > > // Send S_FMT directly to that entity
>> > > ioctl(mc, VIDIOC_S_FMT, &fmt);
>> > > // Send a custom ioctl to that entity
>> > > ioctl(mc, VIDIOC_OMAP3_G_HISTOGRAM, &hist);
>> > >
>> > > This requires no API changes and is very easy to implement.
>> >
>> > Huh? This is an API change.
>>
>> No, this all goes through the media controller, which does not affect the
>> existing API that goes through a v4l device node.
>
> Ok. Yet, you're creating a new API (the mc API). So, it is a V4L2 API change ;)
>
> As I said before, sysfs is the way a media controller should be coded, since it
> is not really a device, but a kernel object.
>
> So, if you need to enable a histogram on an omap3 video device, you could do something like:
>
> echo 1 >/sys/class/media/mc0/video:vin0/histogram
>
>> > Also, in the above particular case, I'm assuming that you want to just change
>> > the format of a subdevice specified at the first ioctl, and call a new ioctl
>> > for it, right?
>>
>> Hmm, I knew I should have made a more realistic example. I just took a random
>> ioctl and S_FMT isn't the best one to pick. Forget that one, I've removed it
>> from the RFC.
>
> Ok.
>
>> > You'll need to specify the API for the two new ioctls, specify at the API specs
>> > how this is supposed to work and maybe add some new return errors, that will
>> > need to be reflected inside the code.
>>
>> VIDIOC_S_SUBDEV is part of the new media controller API, but
>> VIDIOC_OMAP3_G_HISTOGRAM would be an ioctl that is specific to the omap3
>> histogram sub-device and would typically be defined and documented in a public
>> header in e.g. linux/include/linux/omap3-histogram.h. These ioctls are highly
>> specific to particular hardware and impossible to make generic.
>
> It is OK to have (properly documented) specific attributes for some boards. If
> this is to be commanded via the media controller, that means it will be a sysfs
> node. It should be equally OK to implement it as an ioctl at the device node
> (/dev/video[0-9]+).
>
>> > What are the needs in this specific case? If there are just a few ioctls, IMO,
>> > the best is to have a specific set of ioctls for it.
>>
>> I don't follow you. If you are talking about sub-device specific ioctls: you
>> can expect to see a lot of them. Statistics gathering, histograms, colorspace
>> converters, image processing pipelines, and all of them very difficult to
>> generalize. Some things like a colorspace conversion matrix might actually be
>> fairly standard, so we could standardize some ioctls. But then only for use
>> with colorspace conversion sub-devices accessed through the media controller.
>
> I was not talking about specific attributes, but about the V4L2 API controls
> that you may eventually need to "hijack" (using that context-sensitive,
> thread-unsafe approach you described).
>
> Anyway, by using sysfs, you won't have any thread issues, since you'll be able
> to address each sub-device individually:
>
> echo 1 >/sys/class/media/mc0/video:dsp0/enable_stats
>
>
>
> Cheers,
> Mauro

Mauro,

Please, *seriously* reconsider the notion of making sysfs a dependency
of V4L.  While sysfs is great for a developer who wants to poke around
at various properties from a command line during debugging, it is an
absolute nightmare for any developer who wants to write an application
in C that is expected to actually use the interface.  The amount of
extra code for all the string parsing alone would be ridiculous (think
of how many calls you're going to have to make to sscanf or atoi).
It's so much more straightforward to be able to have ioctl() calls
that can return an actual struct with nice things like enumeration
data types etc.
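
To make the comparison concrete, here is a sketch of both styles; note that
struct mc_entity and VIDIOC_MC_ENUM_ENTITY are invented for illustration and
exist nowhere:

#include <stdio.h>
#include <linux/ioctl.h>
#include <sys/ioctl.h>

/* sysfs style: one open/read/parse round trip per property */
static int entity_type_sysfs(void)
{
	FILE *f = fopen("/sys/class/media/mc0/video:vin0/type", "r");
	int type = -1;

	if (f) {
		if (fscanf(f, "%d", &type) != 1)
			type = -1;
		fclose(f);
	}
	return type;
}

/* ioctl style: one call fills a typed struct */
struct mc_entity {			/* hypothetical */
	unsigned int id;
	unsigned int type;		/* an enum value, no string parsing */
	char name[32];
};
#define VIDIOC_MC_ENUM_ENTITY _IOWR('V', 255, struct mc_entity) /* invented */

static int entity_type_ioctl(int mc_fd, unsigned int id)
{
	struct mc_entity e = { .id = id };

	if (ioctl(mc_fd, VIDIOC_MC_ENUM_ENTITY, &e) < 0)
		return -1;
	return e.type;
}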

Just my opinion, of course.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: RFCv2: Media controller proposal
  2009-09-11 15:46       ` Devin Heitmueller
@ 2009-09-11 15:53         ` Hiremath, Vaibhav
  2009-09-11 17:03           ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hiremath, Vaibhav @ 2009-09-11 15:53 UTC (permalink / raw)
  To: Devin Heitmueller, Mauro Carvalho Chehab; +Cc: Hans Verkuil, linux-media

> -----Original Message-----
> From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> owner@vger.kernel.org] On Behalf Of Devin Heitmueller
> Sent: Friday, September 11, 2009 9:16 PM
> To: Mauro Carvalho Chehab
> Cc: Hans Verkuil; linux-media@vger.kernel.org
> Subject: Re: RFCv2: Media controller proposal
> 
> On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
> <mchehab@infradead.org> wrote:
> > Em Thu, 10 Sep 2009 23:35:52 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >
<snip>
> >
> > I was talking not about specific attributes, but about the V4L2
> API controls
> > that you may eventually need to "hijack" (using that context-
> sensitive
> > thread-unsafe approach you described).
> >
> > Anyway, by using sysfs, you won't have any thread issues, since
> you'll be able
> > to address each sub-device individually:
> >
> > echo 1 >/sys/class/media/mc0/video:dsp0/enable_stats
> >
> >
> >
> > Cheers,
> > Mauro
> 
> Mauro,
> 
> Please, *seriously* reconsider the notion of making sysfs a
> dependency
> of V4L.  While sysfs is great for a developer who wants to poke
> around
> at various properties from a command line during debugging, it is an
> absolute nightmare for any developer who wants to write an
> application
> in C that is expected to actually use the interface.  The amount of
> extra code for all the string parsing alone would be ridiculous
> (think
> of how many calls you're going to have to make to sscanf or atoi).
> It's so much more straightforward to be able to have ioctl() calls
> that can return an actual struct with nice things like enumeration
> data types etc.
> 
> Just my opinion, of course.
> 
[Hiremath, Vaibhav] Mauro,

The sysfs interface is definitely a nightmare for the application developer, and again we have not thought about backward compatibility here.

How would an application know/decide which nodes exist? Every video board will have its own way of creating sysfs nodes, and maintaining a standard between them would be a real mess.

There has to be an enumeration kind of interface to make standard applications work seamlessly.

Thanks,
Vaibhav

> Devin
> 
> --
> Devin J. Heitmueller - Kernel Labs
> http://www.kernellabs.com
> --
> To unsubscribe from this list: send the line "unsubscribe linux-
> media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 15:53         ` Hiremath, Vaibhav
@ 2009-09-11 17:03           ` Mauro Carvalho Chehab
  2009-09-11 17:34             ` Hiremath, Vaibhav
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 17:03 UTC (permalink / raw)
  To: Hiremath, Vaibhav; +Cc: Devin Heitmueller, Hans Verkuil, linux-media

Em Fri, 11 Sep 2009 21:23:50 +0530
"Hiremath, Vaibhav" <hvaibhav@ti.com> escreveu:

> > -----Original Message-----
> > From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> > owner@vger.kernel.org] On Behalf Of Devin Heitmueller
> > Sent: Friday, September 11, 2009 9:16 PM
> > To: Mauro Carvalho Chehab
> > Cc: Hans Verkuil; linux-media@vger.kernel.org
> > Subject: Re: RFCv2: Media controller proposal
> > 
> > On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
> > <mchehab@infradead.org> wrote:
> > > Em Thu, 10 Sep 2009 23:35:52 +0200
> > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > >
> <snip>
> > >
> > > I was talking not about specific attributes, but about the V4L2
> > API controls
> > > that you may eventually need to "hijack" (using that context-
> > sensitive
> > > thread-unsafe approach you described).
> > >
> > > Anyway, by using sysfs, you won't have any thread issues, since
> > you'll be able
> > > to address each sub-device individually:
> > >
> > > echo 1 >/sys/class/media/mc0/video:dsp0/enable_stats
> > >
> > >
> > >
> > > Cheers,
> > > Mauro
> > 
> > Mauro,
> > 
> > Please, *seriously* reconsider the notion of making sysfs a
> > dependency
> > of V4L.  While sysfs is great for a developer who wants to poke
> > around
> > at various properties from a command line during debugging, it is an
> > absolute nightmare for any developer who wants to write an
> > application
> > in C that is expected to actually use the interface.  The amount of
> > extra code for all the string parsing alone would be ridiculous
> > (think
> > of how many calls you're going to have to make to sscanf or atoi).
> > It's so much more straightforward to be able to have ioctl() calls
> > that can return an actual struct with nice things like enumeration
> > data types etc.

The complexity of the interface will greatly depend on the way things are
mapped there and on the number of tree levels used. Also, as sysfs
accepts soft links, we may have the same node pointed to from different places.
This can be useful to gain speed.

In order to have something optimized for application, we can imagine having,
for example, under /sys/class/media/mc0/subdevs, links to all the several subdevs,
like:

	video:vin0
	video:vin1
	audio:audio0
	audio:audio1
	dsp:dsp0
	dsp:dsp1
	dvb:adapter0
	i2c:vin0:tvp5150
	...

each of them being a link to some specific sysfs node, all of this created by
the V4L2 core, to be sure that all devices will implement it in the standard way.

If some parameter needs to be set, for example at video input device 0, you
just need to write to a node like:
	/sys/class/media/mc0/subdevs/attr/<attribute>

(all the above names are just examples - we'll need to properly define the
sysfs tree we need to fulfill the requirements).

Also, it should be noticed that you'll need to use sysfs anyway, to get each
subdev's major/minor numbers and to associate them with a file name under /dev.
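
Assuming a layout like the one sketched above, an application could walk that
directory with very little string handling; the subdevs/ tree is hypothetical,
but the 'dev' attribute holding "major:minor" is standard sysfs behaviour for
device-backed nodes:

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	const char *base = "/sys/class/media/mc0/subdevs"; /* hypothetical */
	DIR *dir = opendir(base);
	struct dirent *de;
	char path[512];
	unsigned int maj, mnr;
	FILE *f;

	if (!dir)
		return 1;
	while ((de = readdir(dir)) != NULL) {
		if (de->d_name[0] == '.')
			continue;
		/* device-backed nodes expose "major:minor" in a 'dev' file */
		snprintf(path, sizeof(path), "%s/%s/dev", base, de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		if (fscanf(f, "%u:%u", &maj, &mnr) == 2)
			printf("%s -> %u:%u\n", de->d_name, maj, mnr);
		fclose(f);
	}
	closedir(dir);
	return 0;
}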

> > 
> > Just my opinion, of course.
> > 
> [Hiremath, Vaibhav] Mauro,
> 
> The sysfs interface is definitely a nightmare for the application developer, and again we have not thought about backward compatibility here.

What do you mean by backward compatibility? An application using the standard
V4L2 API will keep working, but if it also uses the media controller sysfs, it will have
extra functionality.

I'm not saying that we should use what we currently have, but that we should use sysfs to
create standard classes (and/or buses) that fulfill the media controller's
needs and match the RFC requirements.

> How would an application know/decide which nodes exist? Every video board will have its own way of creating sysfs nodes, and maintaining a standard between them would be a real mess.

Yes, but none currently have a media controller node. As sysfs provides links,
we can link the media controller to the old nodes or vice versa (for the few
devices that already have their proper nodes).

> There has to be an enumeration kind of interface to make standard applications work seamlessly.

That's for sure.



Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: RFCv2: Media controller proposal
  2009-09-11 17:03           ` Mauro Carvalho Chehab
@ 2009-09-11 17:34             ` Hiremath, Vaibhav
  2009-09-11 18:52               ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hiremath, Vaibhav @ 2009-09-11 17:34 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Devin Heitmueller, Hans Verkuil, linux-media


> -----Original Message-----
> From: Mauro Carvalho Chehab [mailto:mchehab@infradead.org]
> Sent: Friday, September 11, 2009 10:34 PM
> To: Hiremath, Vaibhav
> Cc: Devin Heitmueller; Hans Verkuil; linux-media@vger.kernel.org
> Subject: Re: RFCv2: Media controller proposal
> 
> Em Fri, 11 Sep 2009 21:23:50 +0530
> "Hiremath, Vaibhav" <hvaibhav@ti.com> escreveu:
> 
> > > -----Original Message-----
> > > From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> > > owner@vger.kernel.org] On Behalf Of Devin Heitmueller
> > > Sent: Friday, September 11, 2009 9:16 PM
> > > To: Mauro Carvalho Chehab
> > > Cc: Hans Verkuil; linux-media@vger.kernel.org
> > > Subject: Re: RFCv2: Media controller proposal
> > >
> > > On Fri, Sep 11, 2009 at 11:13 AM, Mauro Carvalho Chehab
> > > <mchehab@infradead.org> wrote:
> > > > Em Thu, 10 Sep 2009 23:35:52 +0200
> > > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > > >
> > <snip>
> > > >
> > > > I was talking not about specific attributes, but about the
> V4L2
> > > API controls
> > > > that you may eventually need to "hijack" (using that context-
> > > sensitive
> > > > thread-unsafe approach you described).
> > > >
> > > > Anyway, by using sysfs, you won't have any thread issues,
> since
> > > you'll be able
> > > > to address each sub-device individually:
> > > >
> > > > echo 1 >/sys/class/media/mc0/video:dsp0/enable_stats
> > > >
> > > >
> > > >
> > > > Cheers,
> > > > Mauro
> > >
> > > Mauro,
> > >
> > > Please, *seriously* reconsider the notion of making sysfs a
> > > dependency
> > > of V4L.  While sysfs is great for a developer who wants to poke
> > > around
> > > at various properties from a command line during debugging, it
> is an
> > > absolute nightmare for any developer who wants to write an
> > > application
> > > in C that is expected to actually use the interface.  The amount
> of
> > > extra code for all the string parsing alone would be ridiculous
> > > (think
> > > of how many calls you're going to have to make to sscanf or
> atoi).
> > > It's so much more straightforward to be able to have ioctl()
> calls
> > > that can return an actual struct with nice things like
> enumeration
> > > data types etc.
> 
> The complexity of the interface will greatly depend on the way
> things will be
> mapped there, and the number of tree levels will be used. Also, as
> sysfs
> accepts soft links, we may have the same node pointed on different
> places.
> This can be useful to gain speed.
> 
> In order to have something optimized for application, we can imagine
> having,
> for example, under /sys/class/media/mc0/subdevs, links to all the
> several subdevs,
> like:
> 
> 	video:vin0
> 	video:vin1
> 	audio:audio0
> 	audio:audio1
> 	dsp:dsp0
> 	dsp:dsp1
> 	dvb:adapter0
> 	i2c:vin0:tvp5150
> 	...
> 
> each of them being a link to some specific sysfs node, all of this
> created by
> V4L2 core, to be sure that all devices will implement it at the
> standard way.
> 
> If some parameter needs to be set, for example at the video input
> device 0, you
> just need to write to a node like:
> 	/sys/class/media/mc0/subdevs/attr/<attribute>
> 
> (all the above names are just examples - we'll need to properly
> define the
> sysfs tree we need to fulfill the requirements).
> 
> Also, it should be noticed that you'll need to use sysfs anyway, to
> get subdev's
> major/minor numbers and to associate them with a file name under
> /dev.
> 
> > >
> > > Just my opinion, of course.
> > >
> > [Hiremath, Vaibhav] Mauro,
> >
> > Definitely SYSFS interface is a nightmare for the application
> developer, and again we have not thought of backward compatibility
> here.
> 
> What do you mean by backward compatibility? An application using the
> standard
> V4L2 API will keep working, but if they'll use the media controller
> sysfs, they'll have
> extra functionality.
> 
[Hiremath, Vaibhav] I was referring to the standard V4L2 interface; I was also referring to backward compatibility between media controller devices themselves.

Have you thought about custom parameter configuration? For example, the H3A(20)/Resizer(64) sub-devices will have coefficients, which are non-standard (we had some discussion about this in the past) -

With the sysfs approach it is really difficult to pass big parameters to a sub-device, which we can easily achieve using an ioctl.
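
For comparison, passing such a table through an ioctl is a single call; a
sketch where the struct, the ioctl number and the 64-entry size are all
invented for illustration:

#include <linux/ioctl.h>
#include <linux/types.h>
#include <sys/ioctl.h>

/* invented: a resizer coefficient table passed to the sub-device in one call */
struct rsz_coeffs {
	__u16 coeff[64];
};
#define RSZ_S_COEFFS _IOW('V', 193, struct rsz_coeffs) /* number invented */

static int rsz_set_coeffs(int fd, const struct rsz_coeffs *c)
{
	return ioctl(fd, RSZ_S_COEFFS, c);
}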

Thanks,
Vaibhav
> I'm not saying that we should use what we currently have, but to use
> sysfs to
> create standard classes (and/or buses) that fulfill the needs for
> media
> controller to match the RFC requirements.
> 
> > How application would know/decide on which node is exist and
> stuff? Every video board will have his separate way of notions for
> creating SYSFS nodes and maintaining standard between them would be
> really mess.
> 
> Yes, but none currently have a media controller node. As sysfs
> provides links,
> we can link the media controller to the old nodes or vice versa (for
> the few
> devices that already have their proper nodes).
> 
> > There has to be enumeration kind of interface to make standard
> application work seamlessly.
> 
> That's for sure.
> 
> 
> 
> Cheers,
> Mauro


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 17:34             ` Hiremath, Vaibhav
@ 2009-09-11 18:52               ` Mauro Carvalho Chehab
  2009-09-11 19:23                 ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 18:52 UTC (permalink / raw)
  To: Hiremath, Vaibhav; +Cc: Devin Heitmueller, Hans Verkuil, linux-media

Em Fri, 11 Sep 2009 23:04:13 +0530
"Hiremath, Vaibhav" <hvaibhav@ti.com> escreveu:

> [Hiremath, Vaibhav] I was referring to the standard V4L2 interface; I was also referring to backward compatibility between media controller devices themselves.

Huh? There's no media controller concept implemented yet. Hans' proposal is to
add a new API to enumerate devices, not to replace what currently exists.
> 
> Have you thought about custom parameter configuration? For example, the H3A(20)/Resizer(64) sub-devices will have coefficients, which are non-standard (we had some discussion about this in the past) -
> 

I'm not saying that all new features should be implemented via sysfs. I'm just
saying that sysfs is the way the Linux kernel describes device topology,
and, due to that, it is the interface that applies under the "media
controller" proposal.

In the case of the resizer, I don't see why this can't be implemented as an ioctl
over the /dev/video device.

> With the sysfs approach it is really difficult to pass big parameters to a sub-device, which we can easily achieve using an ioctl.

I didn't get your point here. With sysfs, you can pass everything, even a mix of
strings and numbers, since the set operation can be parsed via sscanf and the get
operation generated via sprintf (this doesn't mean that this is the recommended way to use it).

For example, on kernel 2.6.31, we have the complete hda audio driver pin setup by
reading just one var:

# cat /sys/class/sound/hwC0D0/init_pin_configs
0x11 0x02214040
0x12 0x01014010
0x13 0x991301f0
0x14 0x02a19020
0x15 0x01813030
0x16 0x413301f0
0x17 0x41a601f0
0x18 0x41a601f0
0x1a 0x41f301f0
0x1b 0x414511f0
0x1c 0x41a190f0

If you want to alter PIN 0x15 output config, all you need to do is:

# echo "0x15 0x02214040" >/sys/class/sound/hwC0D0/user_pin_configs
(or open /sys/class/sound/hwC0D0/user_pin_configs and write "0x15 0x02214040" to it)

And to reset to init config:
# echo 1 >/sys/class/sound/hwC0D0/clear

One big advantage is that you can have a shell script to do the needed setup,
automatically called by some udev rule, without needing to write a single line
of code. So, for those advanced configuration parameters that don't change
(for example board xtal speeds), you don't need to code them in your application.
Yet, you can do it there, if needed.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 15:13     ` Mauro Carvalho Chehab
  2009-09-11 15:46       ` Devin Heitmueller
@ 2009-09-11 19:08       ` Hans Verkuil
  2009-09-11 19:54         ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11 19:08 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

Mauro,

I am going to move the ioctl vs sysfs discussion to a separate thread. I'll
post an analysis of that later today or tomorrow.

See below for my comments on some misunderstandings for non-sysfs issues.

On Friday 11 September 2009 17:13:42 Mauro Carvalho Chehab wrote:

<snip>

> > > > All this requires that there has to be a way to connect and disconnect parts
> > > > of the internal topology of a video board at will.
> > > 
> > > We should design this with care, since each change at the internal topology may
> > > create/delete devices.
> > 
> > No, devices aren't created or deleted. Only links between devices.
> 
> I think that there are some cases where devices are created/deleted. For
> example, on some hardware, you have some blocks that allow you to have either 4 SD
> video inputs or 1 HD video input. So, if you change the type of input, you'll
> end up creating or deleting devices.

Normally you will create four device nodes, but if you switch to HD mode,
then only one is active and attempting to use the others will return EBUSY
or something. That's what the davinci driver does.

Creating and deleting device nodes depending on the mode makes a driver very
complex and the application as well. Say you are in SD mode and you have nodes
video[0-3], now you switch to HD mode and you have only node video0. You go
back to SD mode and you may end up with nodes video0 and video[2-4] if in the
meantime the user connected a USB webcam which took up video1.

Just create them upfront. You know beforehand what the maximum number of video
nodes is since that is determined by the hardware. Let's keep things simple.
Media boards are getting very, very complex and we should keep away from adding
unnecessary further complications.
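
Seen from the application, this policy is easy to handle; a minimal sketch of
probing which of the pre-created nodes are currently active (device paths as
in the example):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* in HD mode only video0 would be active; the others report EBUSY */
	const char *nodes[] = { "/dev/video0", "/dev/video1",
				"/dev/video2", "/dev/video3" };
	int i, fd;

	for (i = 0; i < 4; i++) {
		fd = open(nodes[i], O_RDWR);
		if (fd < 0) {
			if (errno == EBUSY)
				printf("%s: inactive in current mode\n", nodes[i]);
			continue;
		}
		printf("%s: active\n", nodes[i]);
		close(fd);
	}
	return 0;
}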

And yes, I too can generate hypothetical situations where this might be needed.
But that's something we can tackle when it arrives.

> 
> > > If you do such changes at topology, udev will need to 
> > > delete the old devices and create the new ones. 
> > 
> > udev is not involved at all. Exception: open issue #2 suggests that we
> > dynamically register device nodes when they are first linked to some source
> > or sink. That would involve udev.
> > 
> > All devices are setup when the board is configured. But the links between
> > them can be changed. This is nothing more than bringing the board's block
> > diagram to life: each square of the diagram (video device node, resizer, video
> > encoder or decoder) is a v4l2-subdev with inputs and outputs. And in some cases
> > you can change links dynamically (in effect this will change a mux register).
> 
> See above. If you're grouping 4 A/D blocks into one A/D for handling HD, you're
> doing more than just changing links, since the HD device will be just one
> device: one STD, one video input mux, one audio input mux, etc.

So? You will just deactivate three SD device nodes. I don't see a problem with
that, and that concept has already been proven to work in the davinci driver.
 
> > > This will happen on separate 
> > > threads and may cause locking issues at the device, especially since you can be
> > > modifying several components at the same time (being even possible to do it on
> > > separate threads).
> > 
> > This is definitely not something that should be allowed while streaming. I
> > would like to hear from e.g. TI whether this could be a problem or not. I
> > suspect that it isn't a problem unless streaming is in progress.
> 
> Even when streaming, provided that you don't touch the IC blocks in use, it
> should be possible to reconfigure the unused parts. It is just a matter of
> having the right resource locks at the driver.

As you say, this will depend on the driver. Some may be able to do this,
others may just return -EBUSY. I would do the latter, personally, since
allowing this would just make the driver more complicated for IMHO little
gain.
 
> > > I've seen some high-end core network routers that implements topology changes
> > > on an interesting way: any changes done are not immediately applied at the
> > > node, but are stored into a file, where the configuration that can be changed
> > > anytime. However, the topology changes only happen after giving a commit
> > > command. After commit, it validates the new config and apply them atomically
> > > (e. g. or all changes are applied or none), to avoid bad effects that
> > > intermediate changes could cause.
> > > 
> > > As we are at kernelspace, we need to take care to not create a very complex
> > > interface. Yet, the idea of applying the new topology atomically seems
> > > interesting. 
> > 
> > I see no need for it. At least, not for any of the current or forthcoming
> > devices that I am aware of. Should it ever be needed, then we can introduce a
> > 'shadow topology' in the future. You can change the shadow links and when done
> > commit it. That wouldn't be too difficult and we can easily prepare for that
> > eventuality (e.g. have some 'flags' field available where you can set a SHADOW
> > flag in the future).
> 
> > > Alsa is facing a similar problem with pin setup quirks needed with HD-audio boards.
> > > They are proposing a firmware like interface:
> > > 	http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-09/msg03198.html
> > > 
> > > On their case, they are just using request_firmware() for it, at board probing
> > > time.
> > 
> > That seems to be a one-time setup. We need this while the system is up and
> > running.
> 
> It would be easy to implement something like:
> 
> 	echo 1 >/sys/class/media/mc0/config_reload
> 
> to call request_firmware() and load the new topology. This is enough to have an
> atomic operation, and we don't need to implement a shadow config.

OK, so instead we require an application to construct a file containing a new
topology, write something to a sysfs file, require code in the v4l core to load
and parse that file, then find out which links have changed (since you really
don't want to set all the links: there can be many, many links, believe me on
that), and finally call the driver to tell it to change those links.

I don't think so. Just call ioctl(mc, VIDIOC_S_LINK, &link). Should we ever
need to do this atomically, then that can be done through a simple double
buffering technique at minimal cost.
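
For reference, a sketch of what such a call could look like; the RFC only
names VIDIOC_S_LINK, so the struct layout, the ioctl number and the flag
value below are all invented:

#include <linux/ioctl.h>
#include <linux/types.h>
#include <sys/ioctl.h>

/* hypothetical link description; the RFC does not define a layout yet */
struct media_link_desc {
	__u32 src_entity;
	__u32 src_pad;
	__u32 sink_entity;
	__u32 sink_pad;
	__u32 flags;	/* e.g. ACTIVE; a future SHADOW flag could go here */
};
#define VIDIOC_S_LINK _IOWR('V', 192, struct media_link_desc) /* invented */

static int activate_link(int mc_fd, __u32 src, __u32 sink)
{
	struct media_link_desc link = {
		.src_entity = src,
		.sink_entity = sink,
		.flags = 1,	/* hypothetical ACTIVE flag */
	};

	return ioctl(mc_fd, VIDIOC_S_LINK, &link);
}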

The media controller as I see it is something that can be implemented with
very little effort. Drivers provide a bunch of mostly static const structs
that define the topology (the only non-static things are which links are active
and device node specifications). The core uses that info to provide the
enumeration services and any non-media controller ioctl calls are passed
straight to the target subdev.
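
To make that concrete, such tables might look roughly like this in a driver;
every type and field name below is invented purely for illustration:

/* all names invented; this only sketches the 'static const' idea */
struct mc_pad_desc {
	const char *name;
	unsigned int is_output;
};

struct mc_entity_desc {
	const char *name;
	const struct mc_pad_desc *pads;
	unsigned int num_pads;
};

static const struct mc_pad_desc ccdc_pads[] = {
	{ "sensor-in",   0 },
	{ "video-out",   1 },
	{ "preview-out", 1 },
};

static const struct mc_entity_desc omap3_entities[] = {
	{ "ccdc", ccdc_pads, 3 },
	/* previewer, resizer, ... described the same way */
};

/* only the link state is mutable; it would live in a small runtime table */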

No parsers, no complex locking schemes (although drivers are free to implement
that if they need it), no complex sysfs attributes.

Keep things as simple as possible. Complexity is the greatest danger to v4l2
development.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 18:52               ` Mauro Carvalho Chehab
@ 2009-09-11 19:23                 ` Hans Verkuil
  2009-09-11 19:59                   ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11 19:23 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hiremath, Vaibhav, Devin Heitmueller, linux-media

On Friday 11 September 2009 20:52:17 Mauro Carvalho Chehab wrote:
> Em Fri, 11 Sep 2009 23:04:13 +0530
> "Hiremath, Vaibhav" <hvaibhav@ti.com> escreveu:
> 
> > [Hiremath, Vaibhav] I was referring to the standard V4L2 interface; I was also referring to backward compatibility between media controller devices themselves.
> 
> Huh? There's no media controller concept implemented yet. Hans' proposal is to
> add a new API to enumerate devices, not to replace what currently exists.
> > 
> > Have you thought about custom parameter configuration? For example, the H3A(20)/Resizer(64) sub-devices will have coefficients, which are non-standard (we had some discussion about this in the past) -
> > 
> 
> I'm not saying that all new features should be implemented via sysfs. I'm just
> saying that sysfs is the way the Linux kernel describes device topology,
> and, due to that, it is the interface that applies under the "media
> controller" proposal.
> 
> In the case of the resizer, I don't see why this can't be implemented as an ioctl
> over the /dev/video device.

Well, no. Not in general. There are two problems. The first problem occurs if
you have multiple instances of a resizer (OK, not likely, but you *can* have
multiple video encoders or decoders or sensors). If all you have is the
streaming device node, then you cannot select to which resizer (or video
encoder) the ioctl should go. The media controller allows you to select the
recipient of the ioctl explicitly. Thus providing the control that these
applications need.

The second problem is that this will pollute the 'namespace' of a v4l device
node. Device drivers need to pass all those private ioctls to the right
sub-device. But they shouldn't have to care about that. If someone wants to
tweak the resizer (e.g. scaling coefficients), then pass it straight to the
resizer component.

Regards,

	Hans

> 
> > With the sysfs approach it is really difficult to pass big parameters to a sub-device, which we can easily achieve using an ioctl.
> 
> I didn't get your point here. With sysfs, you can pass everything, even a mix of
> strings and numbers, since the set operation can be parsed via sscanf and the get
> operation generated via sprintf (this doesn't mean that this is the recommended way to use it).
> 
> For example, on kernel 2.6.31, we have the complete hda audio driver pin setup by
> reading just one var:
> 
> # cat /sys/class/sound/hwC0D0/init_pin_configs
> 0x11 0x02214040
> 0x12 0x01014010
> 0x13 0x991301f0
> 0x14 0x02a19020
> 0x15 0x01813030
> 0x16 0x413301f0
> 0x17 0x41a601f0
> 0x18 0x41a601f0
> 0x1a 0x41f301f0
> 0x1b 0x414511f0
> 0x1c 0x41a190f0
> 
> If you want to alter PIN 0x15 output config, all you need to do is:
> 
> # echo "0x15 0x02214040" >/sys/class/sound/hwC0D0/user_pin_configs
> (or open /sys/class/sound/hwC0D0/user_pin_configs and write "0x15 0x02214040" to it)
> 
> And to reset to init config:
> # echo 1 >/sys/class/sound/hwC0D0/clear
> 
> One big advantage is that you can have a shell script to do the needed setup,
> automatically called by some udev rule, without needing to write a single line
> of code. So, for those advanced configuration parameters that don't change
> (for example board xtal speeds), you don't need to code them in your application.
> Yet, you can do it there, if needed.
> 
> Cheers,
> Mauro
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 19:08       ` Hans Verkuil
@ 2009-09-11 19:54         ` Mauro Carvalho Chehab
  2009-09-11 20:29           ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 19:54 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Em Fri, 11 Sep 2009 21:08:13 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> > > No, devices aren't created or deleted. Only links between devices.
> > 
> > I think that there are some cases where devices are created/deleted. For
> > example, on some hardware, you have some blocks that allow you to have either 4 SD
> > video inputs or 1 HD video input. So, if you change the type of input, you'll
> > end up creating or deleting devices.
> 
> Normally you will create four device nodes, but if you switch to HD mode,
> then only one is active and attempting to use the others will return EBUSY
> or something. That's what the davinci driver does.
> 
> Creating and deleting device nodes depending on the mode makes a driver very
> complex and the application as well. Say you are in SD mode and you have nodes
> video[0-3], now you switch to HD mode and you have only node video0. You go
> back to SD mode and you may end up with nodes video0 and video[2-4] if in the
> meantime the user connected a USB webcam which took up video1.
> 
> Just create them upfront. You know beforehand what the maximum number of video
> nodes is since that is determined by the hardware. Let's keep things simple.
> Media boards are getting very, very complex and we should keep away from adding
> unnecessary further complications.

Ok, we may start with this approach, and move to a more complex one only if
needed. This should be properly documented to avoid misunderstandings.

> > See above. If you're grouping 4 A/D blocks into one A/D for handling HD, you're
> > doing more than just changing links, since the HD device will be just one
> > device: one STD, one video input mux, one audio input mux, etc.
> 
> So? You will just deactivate three SD device nodes. I don't see a problem with
> that, and that concept has already been proven to work in the davinci driver.

If just disabling applies to all cases, I agree we should stick with this idea. The
issue with enabling/disabling devices is that some complex hardware may need to
register a large number of devices to expose all the different possibilities,
while only a very few of them can be enabled at once. Let's see as time
goes by.

To work like you said, this means that we'll need an enable attribute at
the corresponding sysfs entry.

It should be noticed that, even without deleting a device, udev can still be
invoked. For example, a userspace application (like lirc) may need to be
started/stopped if you enable/disable IR (or restarted on some topology
changes, like using a different IR protocol).

> > Even when streaming, provided that you don't touch the IC blocks in use, it
> > should be possible to reconfigure the unused parts. It is just a matter of
> > having the right resource locks at the driver.
> 
> As you say, this will depend on the driver.

Yes.

> Some may be able to do this,
> others may just return -EBUSY. I would do the latter, personally, since
> allowing this would just make the driver more complicated for IMHO little
> gain.

Ok. Both approaches are valid. So the API should be able to support both ways,
providing a thread-safe interface to userspace.

> > It would be easy to implement something like:
> > 
> > 	echo 1 >/sys/class/media/mc0/config_reload
> > 
> > to call request_firmware() and load the new topology. This is enough to have an
> > atomic operation, and we don't need to implement a shadow config.
> 
> OK, so instead we require an application to construct a file containing a new
> topology, write something to a sysfs file, require code in the v4l core to load
> and parse that file, then find out which links have changed (since you really
> don't want to set all the links: there can be many, many links, believe me on
> that), and finally call the driver to tell it to change those links.

As I said before, the design should take into account how frequent those
changes are. If they are very infrequent, this approach works, and offers one
advantage: the topology will survive application crashes and warm/cold
reboots. If the changes are frequent, an approach like the audio
user_pin_configs works better (see my previous email - note that this approach
can be used for atomic operations if needed). You add to a sysfs node just the
dynamic changes you need. We may even have both ways, as alsa seems to have
(init_pin_configs and user_pin_configs).



Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 19:23                 ` Hans Verkuil
@ 2009-09-11 19:59                   ` Mauro Carvalho Chehab
  2009-09-11 20:15                     ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 19:59 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Hiremath, Vaibhav, Devin Heitmueller, linux-media

Em Fri, 11 Sep 2009 21:23:44 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> > In the case of resizer, I don't see why this can't be implemented as an ioctl
> > over /dev/video device.
> 
> Well, no. Not in general. There are two problems. The first problem occurs if
> you have multiple instances of a resizer (OK, not likely, but you *can* have
> multiple video encoders or decoders or sensors). If all you have is the
> streaming device node, then you cannot select to which resizer (or video
> encoder) the ioctl should go. The media controller allows you to select the
> recipient of the ioctl explicitly. Thus providing the control that these
> applications need.

This case doesn't apply, since, if you have multiple encoders and/or decoders,
you'll also have multiple /dev/video instances. All you need is to call it at
the right device you need to control. Am I missing something here?

> The second problem is that this will pollute the 'namespace' of a v4l device
> node. Device drivers need to pass all those private ioctls to the right
> sub-device. But they shouldn't have to care about that. If someone wants to
> tweak the resizer (e.g. scaling coefficients), then pass it straight to the
> resizer component.

Sorry, I missed your point here



Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 19:59                   ` Mauro Carvalho Chehab
@ 2009-09-11 20:15                     ` Hans Verkuil
  2009-09-11 21:37                       ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11 20:15 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hiremath, Vaibhav, Devin Heitmueller, linux-media

On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:
> Em Fri, 11 Sep 2009 21:23:44 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > > In the case of resizer, I don't see why this can't be implemented as an ioctl
> > > over /dev/video device.
> > 
> > Well, no. Not in general. There are two problems. The first problem occurs if
> > you have multiple instances of a resizer (OK, not likely, but you *can* have
> > multiple video encoders or decoders or sensors). If all you have is the
> > streaming device node, then you cannot select to which resizer (or video
> > encoder) the ioctl should go. The media controller allows you to select the
> > recipient of the ioctl explicitly. Thus providing the control that these
> > applications need.
> 
> This case doesn't apply, since, if you have multiple encoders and/or decoders,
> you'll also have multiple /dev/video instances. All you need is to call it at
> the right device you need to control. Am I missing something here?

Typical use-case: two video decoders feed video into a composer that combines
the two (e.g. for PiP) and streams the result to one video node.

Now you want to change e.g. the contrast on one of those video decoders. That's
not going to be possible using /dev/video.

> > The second problem is that this will pollute the 'namespace' of a v4l device
> > node. Device drivers need to pass all those private ioctls to the right
> > sub-device. But they shouldn't have to care about that. If someone wants to
> > tweak the resizer (e.g. scaling coefficients), then pass it straight to the
> > resizer component.
> 
> Sorry, I missed your point here

Example: a sub-device can produce certain statistics. You want to have an
ioctl to obtain those statistics. If you call that through /dev/videoX, then
that main driver has to handle that ioctl in vidioc_default and pass it on
to the right subdev. So you have to write that vidioc_default handler,
know about the sub-devices that you have and which sub-device is linked to
the device node. You really don't want to have to do that. Especially not
when you are dealing with i2c devices that are loaded from platform code.

If a video encoder supports private ioctls, then an omap3 driver doesn't
want to know about that. Oh, and before you ask: just broadcasting that
ioctl is not a solution if you have multiple identical video encoders.
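
A rough sketch of the driver-side plumbing this forces on every bridge driver;
VIDIOC_OMAP3_G_HISTOGRAM, struct omap3_dev and its hist_sd pointer are all
assumptions, and the callback prototype is simplified from the vidioc_default
hook in v4l2-ioctl.h:

#include <media/v4l2-dev.h>	/* video_drvdata() */
#include <media/v4l2-subdev.h>	/* v4l2_subdev_call() */

static long omap3_vidioc_default(struct file *file, void *fh,
				 int cmd, void *arg)
{
	struct omap3_dev *dev = video_drvdata(file);	/* driver private */

	switch (cmd) {
	case VIDIOC_OMAP3_G_HISTOGRAM:	/* assumed private ioctl */
		/* the bridge driver has to know which subdev owns this ioctl */
		return v4l2_subdev_call(dev->hist_sd, core, ioctl, cmd, arg);
	default:
		return -EINVAL;
	}
}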

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 19:54         ` Mauro Carvalho Chehab
@ 2009-09-11 20:29           ` Hans Verkuil
  2009-09-11 21:28             ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11 20:29 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

On Friday 11 September 2009 21:54:03 Mauro Carvalho Chehab wrote:
> Em Fri, 11 Sep 2009 21:08:13 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:

<snip>

> > OK, so instead we require an application to construct a file containing a new
> > topology, write something to a sysfs file, require code in the v4l core to load
> > and parse that file, then find out which links have changed (since you really
> > don't want to set all the links: there can be many, many links, believe me on
> > that), and finally call the driver to tell it to change those links.
> 
> As I said before, the design should take into account how frequent those
> changes are. If they are very infrequent, this approach works, and offers one
> advantage: the topology will survive application crashes and warm/cold
> reboots. If the changes are frequent, an approach like the audio
> user_pin_configs works better (see my previous email - note that this approach
> can be used for atomic operations if needed). You add to a sysfs node just the
> dynamic changes you need. We may even have both ways, as alsa seems to have
> (init_pin_configs and user_pin_configs).

How frequent those changes are will depend entirely on the application.
Never underestimate the creativity of the end-users :-)

I think that a good worst case guideline would be 60 times per second.
Say for a surveillance type application that switches between video decoders
for each frame. Or some 3D type application that switches between two
sensors for each frame.

Of course, in the future you might want to get 3D done at 60 fps, meaning
that you have to switch between sensors 120 times per second.

One problem with media boards is that it is very hard to predict how they
will be used and what they will be capable of in the future.

Note that I am pretty sure that no application wants to have a media
board boot into an unpredictable initial topology. That would make life
very difficult for them.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 20:29           ` Hans Verkuil
@ 2009-09-11 21:28             ` Mauro Carvalho Chehab
  2009-09-11 22:39               ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 21:28 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Em Fri, 11 Sep 2009 22:29:41 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On Friday 11 September 2009 21:54:03 Mauro Carvalho Chehab wrote:
> > Em Fri, 11 Sep 2009 21:08:13 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> <snip>
> 
> > > OK, so instead we require an application to construct a file containing a new
> > > topology, write something to a sysfs file, require code in the v4l core to load
> > > and parse that file, then find out which links have changed (since you really
> > > don't want to set all the links: there can be many, many links, believe me on
> > > that), and finally call the driver to tell it to change those links.
> > 
> > As I said before, the design should take into account how frequent those
> > changes are. If they are very infrequent, this approach works, and offers one
> > advantage: the topology will survive application crashes and warm/cold
> > reboots. If the changes are frequent, an approach like the audio
> > user_pin_configs works better (see my previous email - note that this approach
> > can be used for atomic operations if needed). You add to a sysfs node just the
> > dynamic changes you need. We may even have both ways, as alsa seems to have
> > (init_pin_configs and user_pin_configs).
> 
> How frequent those changes are will depend entirely on the application.
> Never underestimate the creativity of the end-users :-)
> 
> I think that a good worst case guideline would be 60 times per second.
> Say for a surveillance type application that switches between video decoders
> for each frame.

The video input switch control has already been used by surveillance applications
for a long time. There's no need to add any API for it.

> Or some 3D type application that switches between two sensors for each frame.

That's also a case of video input selection.

We shouldn't design any new device for it.

I may be wrong, but from Vaibhav's and your last comments, I'm starting to think
that you want to replace V4L2 with a new "media controller"-based API.

So, let's go one step back and better understand what's expected by the media
controller.

From my previous understanding, these are the needs:

1) V4L2 API will keep being used to control the devices and to do streaming,
working under the already well defined devices;

2) One kernel object is needed to represent the entire board as a whole, to
enumerate its sub-devices and to change their topology;

3) For some very specific cases, it should be possible to "tweak" some
sub-devices to act in a non-usual way;

4) Some new ioctls are needed to control some parts of the devices that aren't
currently covered by V4L2 API.

Right?

If so:

(1) already exists;

(2) is the "topology manager" of the media controller, which should use
sysfs, due to its nature.

For (3), there are a few alternatives. IMO, the best is to also use sysfs,
since we'll have all subdevs already represented there. So, to change
something, it is just a matter of writing something to a sysfs node. Another
alternative would be to create separate subdevs at /dev, but this will end up
creating much more complex drivers than probably needed.
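
Purely as an illustration of that idea - the sysfs path, attribute name and
value syntax below are all invented - such a tweak would then be a plain
file write:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical sysfs attribute; path and syntax are invented. */
	const char *path = "/sys/class/media/mc0/resizer0/coeff";
	const char *val = "16 32 48 64\n";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, val, strlen(val)) < 0)
		perror("write");
	close(fd);
	return 0;
}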

(4) is implemented by some new ioctl additions to the V4L2 API.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 20:15                     ` Hans Verkuil
@ 2009-09-11 21:37                       ` Mauro Carvalho Chehab
  2009-09-11 22:25                         ` Hans Verkuil
  2009-09-21 17:22                         ` Sakari Ailus
  0 siblings, 2 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-11 21:37 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Hiremath, Vaibhav, Devin Heitmueller, linux-media

Em Fri, 11 Sep 2009 22:15:15 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:
> > Em Fri, 11 Sep 2009 21:23:44 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > 
> > > > In the case of resizer, I don't see why this can't be implemented as an ioctl
> > > > over /dev/video device.
> > > 
> > > Well, no. Not in general. There are two problems. The first problem occurs if
> > > you have multiple instances of a resizer (OK, not likely, but you *can* have
> > > multiple video encoders or decoders or sensors). If all you have is the
> > > streaming device node, then you cannot select to which resizer (or video
> > > encoder) the ioctl should go. The media controller allows you to select the
> > > recipient of the ioctl explicitly. Thus providing the control that these
> > > applications need.
> > 
> > This case doesn't apply, since, if you have multiple encoders and/or decoders,
> > you'll also have multiple /dev/video instances. All you need is to call it on
> > the device you need to control. Am I missing something here?
> 
> Typical use-case: two video decoders feed video into a composer that combines
> the two (e.g. for PiP) and streams the result to one video node.
> 
> Now you want to change e.g. the contrast on one of those video decoders. That's
> not going to be possible using /dev/video.

In your example above, each video decoder will need a /dev/video node, and so
will the video composer.

So, if you want to control the first decoder, you'll use /dev/video0. If you
want to control the second, /dev/video1, and the mux, /dev/video2.

The topology will be properly described by the media controller sysfs nodes.

> 
> > > The second problem is that this will pollute the 'namespace' of a v4l device
> > > node. Device drivers need to pass all those private ioctls to the right
> > > sub-device. But they shouldn't have to care about that. If someone wants to
> > > tweak the resizer (e.g. scaling coefficients), then pass it straight to the
> > > resizer component.
> > 
> > Sorry, I missed your point here
> 
> Example: a sub-device can produce certain statistics. You want to have an
> ioctl to obtain those statistics. If you call that through /dev/videoX, then
> that main driver has to handle that ioctl in vidioc_default and pass it on
> to the right subdev. So you have to write that vidioc_default handler,
> know about the sub-devices that you have and which sub-device is linked to
> the device node. You really don't want to have to do that. Especially not
> when you are dealing with i2c devices that are loaded from platform code.
> If a video encoder supports private ioctls, then an omap3 driver doesn't
> want to know about that. Oh, and before you ask: just broadcasting that
> ioctl is not a solution if you have multiple identical video encoders.

This can be as easy as reading from /sys/class/media/dsp:stat0/stats
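
Assuming such a node existed (the path above is only an example), the read
would indeed be trivial; a minimal sketch:

#include <stdio.h>

int main(void)
{
	/* Path is the hypothetical example from the text above. */
	FILE *f = fopen("/sys/class/media/dsp:stat0/stats", "r");
	char line[256];

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}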


Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 21:37                       ` Mauro Carvalho Chehab
@ 2009-09-11 22:25                         ` Hans Verkuil
  2009-09-21 17:22                         ` Sakari Ailus
  1 sibling, 0 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11 22:25 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: Hiremath, Vaibhav, Devin Heitmueller, linux-media

On Friday 11 September 2009 23:37:58 Mauro Carvalho Chehab wrote:
> Em Fri, 11 Sep 2009 22:15:15 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:
> > > Em Fri, 11 Sep 2009 21:23:44 +0200
> > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > > 
> > > > > In the case of resizer, I don't see why this can't be implemented as an ioctl
> > > > > over /dev/video device.
> > > > 
> > > > Well, no. Not in general. There are two problems. The first problem occurs if
> > > > you have multiple instances of a resizer (OK, not likely, but you *can* have
> > > > multiple video encoders or decoders or sensors). If all you have is the
> > > > streaming device node, then you cannot select to which resizer (or video
> > > > encoder) the ioctl should go. The media controller allows you to select the
> > > > recipient of the ioctl explicitly. Thus providing the control that these
> > > > applications need.
> > > 
> > > This case doesn't apply, since, if you have multiple encoders and/or decoders,
> > > you'll also have multiple /dev/video instances. All you need is to call it on
> > > the device you need to control. Am I missing something here?
> > 
> > Typical use-case: two video decoders feed video into a composer that combines
> > the two (e.g. for PiP) and streams the result to one video node.
> > 
> > Now you want to change e.g. the contrast on one of those video decoders. That's
> > not going to be possible using /dev/video.
> 
> In your example above, each video decoder will need a /dev/video node, and so
> will the video composer.

Why? The video decoders do not do any streaming. There may well be just one
DMA engine that DMAs the output from the video composer.

Regards,

	Hans



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 21:28             ` Mauro Carvalho Chehab
@ 2009-09-11 22:39               ` Hans Verkuil
  2009-09-16 18:15                 ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-11 22:39 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

On Friday 11 September 2009 23:28:47 Mauro Carvalho Chehab wrote:
> Em Fri, 11 Sep 2009 22:29:41 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > On Friday 11 September 2009 21:54:03 Mauro Carvalho Chehab wrote:
> > > Em Fri, 11 Sep 2009 21:08:13 +0200
> > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > 
> > <snip>
> > 
> > > > OK, so instead we require an application to construct a file containing a new
> > > > topology, write something to a sysfs file, require code in the v4l core to load
> > > > and parse that file, then find out which links have changed (since you really
> > > > don't want to set all the links: there can be many, many links, believe me on
> > > > that), and finally call the driver to tell it to change those links.
> > > 
> > > As I said before, the design should take into account how frequent those
> > > changes are. If they are very infrequent, this approach works, and offers one
> > > advantage: the topology will survive application crashes and warm/cold
> > > reboots. If the changes are frequent, an approach like the audio
> > > user_pin_configs works better (see my previous email - note that this approach
> > > can be used for atomic operations if needed). You add to a sysfs node just the
> > > dynamic changes you need. We may even have both ways, as alsa seems to have
> > > (init_pin_configs and user_pin_configs).
> > 
> > How frequent those changes are will depend entirely on the application.
> > Never underestimate the creativity of the end-users :-)
> > 
> > I think that a good worst case guideline would be 60 times per second.
> > Say for a surveillance type application that switches between video decoders
> > for each frame.
> 
> The video input switch control has been used by surveillance applications
> for a long time. There's no need to add any API for it.
> 
> > Or some 3D type application that switches between two sensors for each frame.
> 
> Also, another case of video input selection.

True, bad example. Given enough time I can no doubt come up with some example :-)

> We shouldn't design any new device for it.
> 
> I may be wrong, but from Vaibhav's and your last comments, I'm starting to think
> that you want to replace V4L2 with a new "media controller"-based API.
> 
> So, let's go one step back and better understand what's expected from the media
> controller.
> 
> From my previous understanding, these are the needs:
> 
> 1) The V4L2 API will keep being used to control the devices and to do streaming,
> working with the already well-defined devices;

Yes.
 
> 2) One kernel object is needed to represent the entire board as a whole, to
> enumerate its sub-devices and to change their topology;

Yes.

> 3) For some very specific cases, it should be possible to "tweak" some
> sub-devices to act in a non-usual way;

This will not be for 'some very specific cases'. This will become an essential
feature on embedded platforms. It's probably the most important part of the
media controller proposal.

> 4) Some new ioctls are needed to control some parts of the devices that aren't
> currently covered by the V4L2 API.

No, that is not part of the proposal. Of course, as drivers for the more
advanced devices are submitted there may be some functionality that is general
enough to warrant inclusion in the V4L2 API, but that's business as usual.

> 
> Right?
> 
> If so:
> 
> (1) already exists;

Obviously.
 
> (2) is the "topology manager" of the media controller, which should use
> sysfs, due to its nature.

See the separate thread I started on sysfs vs ioctl.

> For (3), there are a few alternatives. IMO, the best is to also use sysfs,
> since we'll have all subdevs already represented there. So, to change
> something, it is just a matter of writing something to a sysfs node.

See that same thread why that is a really bad idea.

> Another 
> alternative would be to create separate subdevs at /dev, but this will end up
> creating much more complex drivers than probably needed.

I agree with this.

> (4) is implemented by some new ioctl additions to the V4L2 API.

Not an issue as stated above.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-10 15:00       ` Hans Verkuil
  2009-09-10 19:19         ` Karicheri, Muralidharan
@ 2009-09-15 11:36         ` Laurent Pinchart
  1 sibling, 0 replies; 57+ messages in thread
From: Laurent Pinchart @ 2009-09-15 11:36 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Patrick Boettcher, Linux Media Mailing List

Hi Hans,

On Thursday 10 September 2009 17:00:40 Hans Verkuil wrote:
> > On Thu, 10 Sep 2009, Hans Verkuil wrote:
> > > > Could entities not be completely addressed (configuration ioctls)
> > > > through the mc-node?
> > >
> > > Not sure what you mean.
> >
> > Instead of having a device node for each entity, the ioctls for each
> > entity are done on the media controller node, addressing an entity by ID.
> 
> I definitely don't want to go there. Use device nodes (video, fb, alsa,
> dvb, etc) for streaming the actual media as we always did and use the
> media controller for controlling the board. It keeps everything nicely
> separate and clean.

I agree with this, but I think it might be what Patrick meant as well.

Besides enumeration and link setup, the media controller device will allow
direct access to entities to get/set controls and formats. As such its API
will overlap with the V4L2 control and format API. This is not a problem at
all, as both have different use cases (control/format at the V4L2 level are
all, both having different use cases (control/format at the V4L2 level are 
meant for "simple" applications in a backward-compatible fashion, and 
control/format at the media controller level are meant for power users).

V4L2 devices will be used for streaming video as that's what they do best. We 
don't want a video streaming API at the media controller level (not completely 
true, as we are toying with the idea of shared video buffers, but that's for 
later).

In the long term I can imagine the V4L2 control/format ioctls being deprecated 
and all control/format access being done through the media controller. That's 
very long term though.

-- 
Laurent Pinchart

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-10 21:59   ` Hans Verkuil
@ 2009-09-15 12:28     ` Laurent Pinchart
  0 siblings, 0 replies; 57+ messages in thread
From: Laurent Pinchart @ 2009-09-15 12:28 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Guennadi Liakhovetski, linux-media

On Thursday 10 September 2009 23:59:20 Hans Verkuil wrote:
> On Thursday 10 September 2009 23:28:40 Guennadi Liakhovetski wrote:
> > Hi Hans
> >
> > a couple of comments / questions from the first glance
> >
> > On Thu, 10 Sep 2009, Hans Verkuil wrote:
> >
[snip]

> > > This requires no API changes and is very easy to implement. One problem
> > > is that this is not thread-safe. We can either supply some sort of
> > > locking mechanism, or just tell the application programmer to do the
> > > locking in the application. I'm not sure what is the correct approach
> > > here. A reasonable compromise would be to store the target entity as
> > > part of the filehandle. So you can open the media controller multiple
> > > times and each handle can set its own target entity.
> > >
> > > This also has the advantage that you can have a filehandle 'targeted'
> > > at a resizer and a filehandle 'targeted' at the previewer, etc. If you
> > > want to use the same filehandle from multiple threads, then you have to
> > > implement locking yourself.
> >
> > You mean the driver should only care about internal consistency, and the
> > user is allowed to otherwise shoot herself in the foot? Makes sense to
> > me:-)
> 
> Basically, yes :-)
> 
> You can easily make something like a VIDIOC_MC_LOCK and VIDIOC_MC_UNLOCK
> ioctl that can be used to get exclusive access to the MC. Or we could
> reuse the G/S_PRIORITY ioctls. The first just feels like a big hack to me,
> the second has some merit, I think.

The target entity should really be stored at the file handle level, otherwise
Very Bad Stuff (TM) will happen. Then, if a multi-threaded application wants
to access the file handle from multiple threads, it will need to implement its
own serialization.
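
A minimal sketch of that per-handle model; VIDIOC_MC_SET_TARGET and the
entity IDs are hypothetical names, used only to show the idea:

#include <fcntl.h>
#include <sys/ioctl.h>

void example(int resizer_id, int previewer_id)
{
	/* Each open() gets its own target entity, so two threads can
	 * drive two entities without serializing against each other. */
	int fd1 = open("/dev/v4l/mc0", O_RDWR);
	int fd2 = open("/dev/v4l/mc0", O_RDWR);

	ioctl(fd1, VIDIOC_MC_SET_TARGET, &resizer_id);   /* hypothetical */
	ioctl(fd2, VIDIOC_MC_SET_TARGET, &previewer_id); /* hypothetical */

	/* From here on, ioctls on fd1 address the resizer and ioctls
	 * on fd2 address the previewer. */
}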

I don't think any VIDIOC_MC_LOCK/UNLOCK is required; what would be the use
cases for them?

> > > Open issues
> > > ===========

[snip]

> > > 2) There can be a lot of device nodes in complicated boards. One
> > > suggestion is to only register them when they are linked to an entity
> > > (i.e. can be active). Should we do this or not?
> >
> > Really a lot of device nodes? not sub-devices? What can this be? Isn't
> > the decision when to register them board-specific?
> 
> Sub-devices do not in general have device nodes (note that i2c sub-devices
> will have an i2c device node, of course).
> 
> When to register device nodes is in the end driver-specific, but what to do
> when enumerating input device nodes and the device node doesn't exist yet?
> 
> I can't put my finger on it, but my intuition says that doing this is
> dangerous. I can't foresee all the consequences.

Why would it be dangerous? As long as an input or output device node is not
connected to anything in the internal board "graph" it will be completely
pointless for applications to use those device nodes. What do you imagine
going wrong?

[snip]

> > > 6) For now I think we should leave enumerating input and output
> > > connectors to the bridge drivers (ENUMINPUT/ENUMOUTPUT). But as a
> > > future step it would make sense to also enumerate those in the media
> > > controller. However, it is not entirely clear what the relationship
> > > will be between that and the existing enumeration ioctls.
> >
> > Why should a bridge driver care? This isn't board-specific, is it?
> 
> I don't follow you. What input and output connectors a board has is by
> definition board specific. If you can enumerate them through the media
> controller, then you can be more precise how they are hooked up. E.g. an
> antenna input is connected to a tuner sub-device, while the composite
> video-in is connected to a video decoder and the audio inputs to an audio
> mixer sub-device. All things that cannot be represented by ENUMINPUT. But
> do we really care about that?
>
> My opinion is that we should leave this alone for now. There is enough to
> do and we can always add it later.

In the end that boils down to a (few) table(s) of static data. It won't make
drivers more complex, and I think we should support enumerating the input and
output connectors at the media controller level, if only for the sake of
completeness and coherency.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 22:39               ` Hans Verkuil
@ 2009-09-16 18:15                 ` Mauro Carvalho Chehab
  2009-09-16 19:21                   ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-16 18:15 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Em Sat, 12 Sep 2009 00:39:50 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > From my previous understanding, these are the needs:
> > 
> > 1) The V4L2 API will keep being used to control the devices and to do streaming,
> > working with the already well-defined devices;
> 
> Yes.
>  
> > 2) One kernel object is needed to represent the entire board as a whole, to
> > enumerate its sub-devices and to change their topology;
> 
> Yes.
> 
> > 3) For some very specific cases, it should be possible to "tweak" some
> > sub-devices to act in a non-usual way;
> 
> This will not be for 'some very specific cases'. This will become an essential
> feature on embedded platforms. It's probably the most important part of the
> media controller proposal.

Embedded platforms are a specific use case.

> > 4) Some new ioctls are needed to control some parts of the devices that aren't
> > currently covered by the V4L2 API.
> 
> No, that is not part of the proposal. Of course, as drivers for the more
> advanced devices are submitted there may be some functionality that is general
> enough to warrant inclusion in the V4L2 API, but that's business as usual.
> 
> > 
> > Right?
> > 
> > If so:
> > 
> > (1) already exists;
> 
> Obviously.
>  
> > (2) is the "topology manager" of the media controller, which should use
> > sysfs, due to its nature.
> 
> See the separate thread I started on sysfs vs ioctl.
> 
> > For (3), there are a few alternatives. IMO, the best is to also use sysfs,
> > since we'll have all subdevs already represented there. So, to change
> > something, it is just a matter of writing something to a sysfs node.
> 
> See that same thread why that is a really bad idea.
> 
> > Another 
> > alternative would be to create separate subdevs at /dev, but this will end up
> > creating much more complex drivers than probably needed.
> 
> I agree with this.
> 
> > (4) is implemented by some new ioctl additions to the V4L2 API.
> 
> Not an issue as stated above.

I can't avoid being distracted from my merge duties to address some points that
seem important to highlight in these new RFC discussions.

We need to take care not to create a "mess controller" instead of a "media controller".

From a few emails on the mailing list, it seems to me that some people are
thinking that the media controller is a replacement for what we have, or a
"solution for all our problems".

It won't solve all our problems, nor should it be a replacement for what we have.

Basically, there's no reason for retiring the V4L2 API.  We can extend it,
improve it, add new capabilities, etc., but, considering the lessons learned
from moving from V4L1 to V4L2, for better or for worse, we can't get rid of it.

See the history: V4L2 was proposed in 1999 and added to the kernel in 2002. Seven
years after its implementation, and ten years after its proposal, there are
still drivers that need to be ported. So, creating a "media controller" as a
replacement for it won't work.

The media controller, as proposed, has two very specific capabilities:

1) enumerate and change media device topology.

This is something that is out of the scope of the V4L2 API, so it is valid to
think about implementing an API for it.

2) "sub-device" control. I think the mess started here.

We need to go one more step back and see what this exactly means.

Let me try to identify the concepts and seek the answers.

What's a sub-device?
====================

Well, if we strip v4l2-framework.txt and driver/media from "git grep", we have:

For "subdevice", there are several occurences. All of them refers to
subvendor/subdevice PCI ID.

For "sub-device": most references also talk about PCI subdevices. On all places
(except for V4L), where a subdevice exists, a kernel device is created.

So, basically, only V4L is using sub-device with a different meaning than what's at kernel.
On all other places, a subdevice is just another device.

It seems that we have a misconception here: sub-device is just an alias for
"device". 

IMO, it is better to avoid using "sub-device", as this cause confusion with the
widely used pci subdevice designation.

How does the kernel deal with (sub-)devices?
============================================

A device has nothing to do with a single physical component. In fact, since the
beginning of Linux, physical devices like superIO chips (now called the "south
bridge") export several kernel devices associated with them, for example, for
the serial interface, printer interface, RTC, PCI controllers, etc.

Using another example, from a driver I'm working on for checking memory errors
on Core i7 machines: in order to get errors from each processor, the driver
needs to talk to 18 devices. All of those 18 kernel devices are part of just
one physical CPU chip. Worse than that, they are the memory controller part of
a single logical unit (called QPI - QuickPath Interconnect). All those 18
devices are bound to a specific PCI bus for each memory controller (on a
machine with 2 CPU sockets, there are 2 buses, 36 total PCI devices, each with
lots of registers).

So, basically, a kernel device is the kernel representation for a block element
inside a physical device that needs to be controlled by the kernel. A driver
may need to deal with, and to export, several different devices for userspace.

One of the concepts of the "media controller" is that a media controller device
will act like a proxy entity, sending commands to hidden devices, acting like a
bus to communicate with the physical components that aren't represented as
devices.

So, I want to return to one question:

Should we create a device for each "v4l sub-device"?
====================================================

While I said before that this would result in a complex representation, after
a careful study and analysis, I'm fully convinced that the answer should be:

YES, each "v4l sub-device" should be a device.

Rationale:

1) We already do this for several components: i2c devices, i2c bus, IR, audio;

2) In most cases, those components are already a device, be it an i2c device or
another kind of device connected to another bus;

3) Userspace needs to communicate with them. The kernel model is to create a
device for device <=> userspace communication, and not to use proxy entities;

4) Creating a device for "sub-devices" is the approach already taken by all
other drivers in the kernel.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 18:15                 ` Mauro Carvalho Chehab
@ 2009-09-16 19:21                   ` Hans Verkuil
  2009-09-16 20:38                     ` Guennadi Liakhovetski
  2009-09-16 20:50                     ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-16 19:21 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
> Em Sat, 12 Sep 2009 00:39:50 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > > From my previous understanding, these are the needs:
> > > 
> > > 1) The V4L2 API will keep being used to control the devices and to do streaming,
> > > working with the already well-defined devices;
> > 
> > Yes.
> >  
> > > 2) One kernel object is needed to represent the entire board as a whole, to
> > > enumerate its sub-devices and to change their topology;
> > 
> > Yes.
> > 
> > > 3) For some very specific cases, it should be possible to "tweak" some
> > > sub-devices to act in a non-usual way;
> > 
> > This will not be for 'some very specific cases'. This will become an essential
> > feature on embedded platforms. It's probably the most important part of the
> > media controller proposal.
> 
> Embedded platforms are a specific use case.

However you look at it, it is certainly a very important use case. It's a
huge and important industry and we should have proper support for it in v4l-dvb.
And embedded platforms are used quite differently. Where device drivers for
consumer market products should hide complexity (because the end-user or the
generic webcam/video application doesn't want to be bothered with that), they
should expose that complexity for embedded platforms, since there the
application writers want to take full control.

> > > 4) Some new ioctls are needed to control some parts of the devices that aren't
> > > currently covered by the V4L2 API.
> > 
> > No, that is not part of the proposal. Of course, as drivers for the more
> > advanced devices are submitted there may be some functionality that is general
> > enough to warrant inclusion in the V4L2 API, but that's business as usual.
> > 
> > > 
> > > Right?
> > > 
> > > If so:
> > > 
> > > (1) already exists;
> > 
> > Obviously.
> >  
> > > (2) is the "topology manager" of the media controller, which should use
> > > sysfs, due to its nature.
> > 
> > See the separate thread I started on sysfs vs ioctl.
> > 
> > > For (3), there are a few alternatives. IMO, the best is to also use sysfs,
> > > since we'll have all subdevs already represented there. So, to change
> > > something, it is just a matter of writing something to a sysfs node.
> > 
> > See that same thread why that is a really bad idea.
> > 
> > > Another 
> > > alternative would be to create separate subdevs at /dev, but this will end up
> > > creating much more complex drivers than probably needed.
> > 
> > I agree with this.
> > 
> > > (4) is implemented by some new ioctl additions to the V4L2 API.
> > 
> > Not an issue as stated above.
> 
> I can't avoid being distracted from my merge duties to address some points that
> seem important to highlight in these new RFC discussions.
> 
> We need to take care not to create a "mess controller" instead of a "media controller".
> 
> From a few emails on the mailing list, it seems to me that some people are
> thinking that the media controller is a replacement for what we have, or a
> "solution for all our problems".
> 
> It won't solve all our problems, nor should it be a replacement for what we have.
> 
> Basically, there's no reason for retiring the V4L2 API.  We can extend it,
> improve it, add new capabilities, etc., but, considering the lessons learned
> from moving from V4L1 to V4L2, for better or for worse, we can't get rid of it.
> 
> See the history: V4L2 was proposed in 1999 and added to the kernel in 2002. Seven
> years after its implementation, and ten years after its proposal, there are
> still drivers that need to be ported. So, creating a "media controller" as a
> replacement for it won't work.

I have absolutely no idea where you got the impression that the media
controller would replace V4L2. V4L2 has proven itself as an API and IMHO was
very well designed for the future. Sure, in hindsight there were a few things
we would do differently now, but especially in the video world it is very hard
to predict the future, so the V4L2 API has done and is doing an excellent job.

The media controller complements the V4L2 API and will in no way replace it.
 
> The media controller, as proposed, has two very specific capabilities:
> 
> 1) enumerate and change media device topology.
> 
> This is something that is out of the scope of the V4L2 API, so it is valid to
> think about implementing an API for it.
> 
> 2) "sub-device" control. I think the mess started here.
> 
> We need to go one more step back and see what this exactly means.
> 
> Let me try to identify the concepts and seek the answers.
> 
> What's a sub-device?
> ====================
> 
> Well, if we strip v4l2-framework.txt and driver/media from "git grep", we have:
> 
> For "subdevice", there are several occurrences. All of them refer to the
> subvendor/subdevice PCI ID.
> 
> For "sub-device": most references also talk about PCI subdevices. In all places
> (except for V4L) where a subdevice exists, a kernel device is created.
> 
> So, basically, only V4L is using "sub-device" with a different meaning than
> what's in the kernel. Everywhere else, a subdevice is just another device.
> 
> It seems that we have a misconception here: sub-device is just an alias for
> "device".
> 
> IMO, it is better to avoid using "sub-device", as this causes confusion with the
> widely used PCI subdevice designation.

We discussed this on the list at the time. I think my original name was
v4l2-client. If you can come up with a better name, then I'm happy to do a
search and replace.

Suggestions for a better name are welcome! Perhaps something more abstract
like v4l2-block? v4l2-part? v4l2-object? v4l2-function?

But the concept behind it will really not change with a different name.

Anyway, the definition of a sub-device within v4l is 'anything that has a
struct v4l2_subdev'. Seen in C++ terms a v4l2_subdev struct defines several
possible abstract interfaces. And objects can implement ('inherit') one or
more of these. Perhaps v4l2-object is a much better term since that removes
the association with a kernel device, which it is most definitely not.

> 
> How does the kernel deal with (sub-)devices?
> ============================================
> 
> A device has nothing to do with a single physical component. In fact, since the
> beginning of Linux, physical devices like superIO chips (now called the "south
> bridge") export several kernel devices associated with them, for example, for
> the serial interface, printer interface, RTC, PCI controllers, etc.
> 
> Using another example, from a driver I'm working on for checking memory errors
> on Core i7 machines: in order to get errors from each processor, the driver
> needs to talk to 18 devices. All of those 18 kernel devices are part of just
> one physical CPU chip. Worse than that, they are the memory controller part of
> a single logical unit (called QPI - QuickPath Interconnect). All those 18
> devices are bound to a specific PCI bus for each memory controller (on a
> machine with 2 CPU sockets, there are 2 buses, 36 total PCI devices, each with
> lots of registers).
> 
> So, basically, a kernel device is the kernel representation for a block element
> inside a physical device that needs to be controlled by the kernel. A driver
> may need to deal with, and to export, several different devices for userspace.
> 
> One of the concepts of the "media controller" is that a media controller device
> will act like a proxy entity, sending commands to hidden devices, acting like a
> bus to communicate with the physical components that aren't represented as
> devices.
> 
> So, I want to return to one question:
> 
> Should we create a device for each "v4l sub-device"?
> ====================================================
> 
> While I said before that this would result in a complex representation, after
> a careful study and analysis, I'm fully convinced that the answer should be:
> 
> YES, each "v4l sub-device" should be a device.
> 
> Rationale:
> 
> 1) We already do this for several components: i2c devices, i2c bus, IR, audio;
> 
> 2) In most cases, those components are already a device, be it an i2c device or
> another kind of device connected to another bus;
> 
> 3) Userspace needs to communicate with them. The kernel model is to create a
> device for device <=> userspace communication, and not to use proxy entities;
> 
> 4) Creating a device for "sub-devices" is the approach already taken by all
> other drivers in the kernel.

I gather that when you use the term 'device' you mean a 'device node' that
userspace can access. It is an option to have sub-devices create a device
node. Note that that would have to be a device node created by v4l; an i2c
device node for example is quite useless to us since you can only use it
for i2c ioctls.

I have considered this myself as well. The reason I decided against it was
that I think it is a lot of extra overhead and the creation of even more
device nodes when adding a single media controller would function just as
well. Especially since all this is quite uninteresting for most of the non-
embedded drivers. In fact, many of the current sub-devices have nothing or
almost nothing that needs to be controlled by userspace, so creating a device
node just for the sake of consistency does not sit well with me.

And as I explained above, a v4l2_subdev just implements an interface. It has
no relation to devices. And yes, I'm beginning to agree with you that subdevice
was a bad name because it suggested something that it simply isn't.

That said, I also see some advantages in doing this. For statistics or
histogram sub-devices you can implement a read() call to read the data
instead of using ioctl. It is more flexible in that respect.
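
For example (the sub-device node name below is hypothetical), reading a
histogram would then be:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical sub-device node; 4K matches the histogram size
	 * mentioned elsewhere in this thread. */
	unsigned char hist[4096];
	int fd = open("/dev/v4l-subdev0", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (read(fd, hist, sizeof(hist)) < 0)
		perror("read");
	close(fd);
	return 0;
}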

This is definitely an interesting topic that can be discussed both during
the LPC and here on the list.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 19:21                   ` Hans Verkuil
@ 2009-09-16 20:38                     ` Guennadi Liakhovetski
  2009-09-16 20:50                     ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 57+ messages in thread
From: Guennadi Liakhovetski @ 2009-09-16 20:38 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Mauro Carvalho Chehab, linux-media

On Wed, 16 Sep 2009, Hans Verkuil wrote:

> On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
> > 
> > What's a sub-device?
> > ====================
> > 
> > Well, if we strip v4l2-framework.txt and driver/media from "git grep", we have:
> > 
> > For "subdevice", there are several occurences. All of them refers to
> > subvendor/subdevice PCI ID.
> > 
> > For "sub-device": most references also talk about PCI subdevices. On all places
> > (except for V4L), where a subdevice exists, a kernel device is created.
> > 
> > So, basically, only V4L is using sub-device with a different meaning than what's at kernel.
> > On all other places, a subdevice is just another device.
> > 
> > It seems that we have a misconception here: sub-device is just an alias for
> > "device". 
> > 
> > IMO, it is better to avoid using "sub-device", as this cause confusion with the
> > widely used pci subdevice designation.
> 
> We discussed this on the list at the time. I think my original name was
> v4l2-client. If you can come up with a better name, then I'm happy to do a
> search and replace.

FWIW, I'm also mostly using the video -host and -client notation in 
soc-camera.

Thanks
Guennadi
---
Guennadi Liakhovetski

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 19:21                   ` Hans Verkuil
  2009-09-16 20:38                     ` Guennadi Liakhovetski
@ 2009-09-16 20:50                     ` Mauro Carvalho Chehab
  2009-09-16 21:34                       ` Hans Verkuil
  2009-09-16 22:28                       ` Karicheri, Muralidharan
  1 sibling, 2 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-16 20:50 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Em Wed, 16 Sep 2009 21:21:16 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
> > Em Sat, 12 Sep 2009 00:39:50 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > > > From my previous understanding, these are the needs:
> > > > 
> > > > 1) The V4L2 API will keep being used to control the devices and to do streaming,
> > > > working with the already well-defined devices;
> > > 
> > > Yes.
> > >  
> > > > 2) One kernel object is needed to represent the entire board as a whole, to
> > > > enumerate its sub-devices and to change their topology;
> > > 
> > > Yes.
> > > 
> > > > 3) For some very specific cases, it should be possible to "tweak" some
> > > > sub-devices to act in a non-usual way;
> > > 
> > > This will not be for 'some very specific cases'. This will become an essential
> > > feature on embedded platforms. It's probably the most important part of the
> > > media controller proposal.
> > 
> > Embedded platforms are a specific use case.
> 
> However you look at it, it is certainly a very important use case. 

Yes, and I never said we shouldn't address embedded platform needs.
> It's a
> huge and important industry and we should have proper support for it in v4l-dvb.

Agreed.

> And embedded platforms are used quite differently. Where device drivers for
> consumer market products should hide complexity (because the end-user or the
> generic webcam/video application doesn't want to be bothered with that), they
> should expose that complexity for embedded platforms, since there the
> application writers want to take full control.

I'm just guessing, but if the two use cases are so different, maybe we shouldn't
try to find a common solution for the two problems, or maybe we should use an
approach similar to debugfs, where you enable/mount it only where needed (embedded).

> > IMO, it is better to avoid using "sub-device", as this causes confusion with the
> > widely used PCI subdevice designation.
> 
> We discussed this on the list at the time. I think my original name was
> v4l2-client. If you can come up with a better name, then I'm happy to do a
> search and replace.
> 
> Suggestions for a better name are welcome! Perhaps something more abstract
> like v4l2-block? v4l2-part? v4l2-object? v4l2-function?
> 
> But the concept behind it will really not change with a different name.
> 
> Anyway, the definition of a sub-device within v4l is 'anything that has a
> struct v4l2_subdev'. Seen in C++ terms a v4l2_subdev struct defines several
> possible abstract interfaces. And objects can implement ('inherit') one or
> more of these. Perhaps v4l2-object is a much better term since that removes
> the association with a kernel device, which it is most definitely not.

v4l2-object seems good, as do the -host/-client terms that Guennadi is proposing.

> > 4) Creating a device for "sub-devices" is the approach already taken by all
> > other drivers in the kernel.
> 
> I gather that when you use the term 'device' you mean a 'device node' that
> userspace can access. It is an option to have sub-devices create a device
> node. Note that that would have to be a device node created by v4l; an i2c
> device node for example is quite useless to us since you can only use it
> for i2c ioctls.
> 
> I have considered this myself as well. The reason I decided against it was
> that I think it is a lot of extra overhead and the creation of even more
> device nodes when adding a single media controller would function just as
> well. Especially since all this is quite uninteresting for most of the non-
> embedded drivers.

This can be easily solved: just add a Kconfig option for the tweak interfaces,
eventually making it depend on CONFIG_EMBEDDED.

> In fact, many of the current sub-devices have nothing or
> almost nothing that needs to be controlled by userspace, so creating a device
> node just for the sake of consistency does not sit well with me.

If the device will never need to be seen by userspace, then we can just not
create a device for it.

> And as I explained above, a v4l2_subdev just implements an interface. It has
> no relation to devices. And yes, I'm beginning to agree with you that subdevice
> was a bad name because it suggested something that it simply isn't.
> 
> That said, I also see some advantages in doing this. For statistics or
> histogram sub-devices you can implement a read() call to read the data
> instead of using ioctl. It is more flexible in that respect.

I think this will be more flexible and less complex than creating a proxy
device. For example, as you'll be directly addressing a device, you don't need
any locking to avoid the risk that different threads accessing different
sub-devices at the same time would result in a command being sent to the wrong
device. So, both the kernel driver and the userspace app can be simpler.
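
As a sketch (the per-subdev node name is hypothetical; VIDIOC_S_CTRL is the
existing ioctl), direct addressing would look like:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

void set_decoder_contrast(int value)
{
	/* Hypothetical per-subdev node: opening it addresses exactly
	 * one video decoder, so no proxy or target selection is needed. */
	int fd = open("/dev/v4l-decoder0", O_RDWR);
	struct v4l2_control ctrl = {
		.id = V4L2_CID_CONTRAST,	/* existing control ID */
		.value = value,
	};

	ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}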

> This is definitely an interesting topic that can be discussed both during
> the LPC and here on the list.
> 
> Regards,
> 
> 	Hans
> 




Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 20:50                     ` Mauro Carvalho Chehab
@ 2009-09-16 21:34                       ` Hans Verkuil
  2009-09-16 22:15                         ` Andy Walls
  2009-09-17 12:44                         ` Mauro Carvalho Chehab
  2009-09-16 22:28                       ` Karicheri, Muralidharan
  1 sibling, 2 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-09-16 21:34 UTC (permalink / raw)
  To: Mauro Carvalho Chehab; +Cc: linux-media

On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
> Em Wed, 16 Sep 2009 21:21:16 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > On Wednesday 16 September 2009 20:15:20 Mauro Carvalho Chehab wrote:
> > > Em Sat, 12 Sep 2009 00:39:50 +0200
> > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > > > > From my previous understanding, these are the needs:
> > > > > 
> > > > > 1) The V4L2 API will keep being used to control the devices and to do streaming,
> > > > > working with the already well-defined devices;
> > > > 
> > > > Yes.
> > > >  
> > > > > 2) One kernel object is needed to represent the entire board as a whole, to
> > > > > enumerate its sub-devices and to change their topology;
> > > > 
> > > > Yes.
> > > > 
> > > > > 3) For some very specific cases, it should be possible to "tweak" some
> > > > > sub-devices to act in a non-usual way;
> > > > 
> > > > This will not be for 'some very specific cases'. This will become an essential
> > > > feature on embedded platforms. It's probably the most important part of the
> > > > media controller proposal.
> > > 
> > > Embedded platforms are a specific use case.
> > 
> > However you look at it, it is certainly a very important use case. 
> 
> Yes, and I never said we shouldn't address embedded platform needs.
> > It's a
> > huge and important industry and we should have proper support for it in v4l-dvb.
> 
> Agreed.
> 
> > And embedded platforms are used quite differently. Where device drivers for
> > consumer market products should hide complexity (because the end-user or the
> > generic webcam/video application doesn't want to be bothered with that), they
> > should expose that complexity for embedded platforms, since there the
> > application writers want to take full control.
> 
> I'm just guessing, but if the two use cases are so different, maybe we shouldn't
> try to find a common solution for the two problems, or maybe we should use an
> approach similar to debugfs, where you enable/mount it only where needed (embedded).

They are not *that* different. You still want the ability to discover the
available device nodes for consumer products (e.g. the alsa device belonging
to the video device). And there will no doubt be some borderline products
belonging to, say, the professional consumer market. It's not black-and-white.

<snip>

> v4l2-object seems good, as do the -host/-client terms that Guennadi is proposing.

Just an idea: why not rename struct v4l2_device to v4l2_mc and v4l2_subdev to
v4l2_object? And if we decide to go all the way, then we can rename video_device
to v4l2_devnode. Or perhaps we go straight to the media_ prefix instead.

The term 'client' has for me similar problems as 'device': it's used in so many
different contexts that it is easy to get confused.

> > > 4) Creating a device for "sub-devices" is the approach already taken by all
> > > other drivers in the kernel.
> > 
> > I gather that when you use the term 'device' you mean a 'device node' that
> > userspace can access. It is an option to have sub-devices create a device
> > node. Note that that would have to be a device node created by v4l; an i2c
> > device node for example is quite useless to us since you can only use it
> > for i2c ioctls.
> > 
> > I have considered this myself as well. The reason I decided against it was
> > that I think it is a lot of extra overhead and the creation of even more
> > device nodes when adding a single media controller would function just as
> > well. Especially since all this is quite uninteresting for most of the non-
> > embedded drivers.
> 
> This can be easily solved: just add a Kconfig option for the tweak interfaces,
> eventually making it depend on CONFIG_EMBEDDED.

An interesting idea. I don't think you want to make this specific to embedded
devices only. It can be done as a separate config option within V4L.

I have a problem though: what to do with sub-devices (if you don't mind, I'll
just keep using that term for now) that want to expose some advanced control.
We have seen several requests for that lately. E.g. an AGC-TOP control for
fine-tuning the AGC of tuners.

I think this example will be quite typical of several sub-devices: they may
have one or two 'advanced' controls that can be useful in very particular
cases for end-users.

There are a few possible ways of doing this:

1) With the mediacontroller concept from the RFC you can select the tuner
subdev through the mc device node and call VIDIOC_S_CTRL on that node (and
with QUERYCTRL you can also query all controls supported by that subdev,
including these advanced controls). A sketch of this follows after this list.

2) Create a device node for each subdev even if they have just a single control
to expose. Possible, but this still seems like overkill to me.

3) Use your idea of only creating a device node for subdevs if a kernel config
is set. If no device nodes should be created, then the control framework can
still export such advanced controls to sysfs, allowing end-users to change
them. This is actually quite a nice idea: embedded systems or power-users can
get full control through the device nodes, while the average end-user can
just use the control from sysfs if he needs to tweak something.

4) Same as 3) but you can still use the mc to select a sub-device and call
ioctl on it. In other words, allow both mechanisms. It's trivial to implement,
but I have to admit that I don't like it. It's not clean, somehow.
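
Here is the sketch promised in option 1); VIDIOC_MC_SET_TARGET and the
AGC-TOP control ID are hypothetical, while VIDIOC_S_CTRL is the existing
ioctl:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

void set_agc_top(int mc_fd, int tuner_id)
{
	struct v4l2_control ctrl = {
		.id = V4L2_CID_AGC_TOP,	/* hypothetical control ID */
		.value = 3,
	};

	ioctl(mc_fd, VIDIOC_MC_SET_TARGET, &tuner_id);	/* hypothetical */
	ioctl(mc_fd, VIDIOC_S_CTRL, &ctrl);		/* existing V4L2 ioctl */
}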

So IF we go with the idea to create separate device nodes to access sub-devices,
then a good scheme would be this:

A) if a sub-device needs no control from outside, then no device node is ever
created. It is still enumerated in the media controller, but it has no associated
node.

B) if a sub-device needs a lot of control from outside, then we always create a
device node. This would typically be the case for e.g. a resizer or previewer
that is a core part of an embedded platform.

C) in all other cases you only get it if a kernel config option is on. And since
any advanced controls are still exposed in sysfs you can still change those even
if the config option was off.

What do you think about that? I would certainly like to hear what people think
about this.

Regards,

	Hans

> > In fact, many of the current sub-devices have nothing or
> > almost nothing that needs to be controlled by userspace, so creating a device
> > node just for the sake of consistency sits not well with me.
> 
> If the device will never needed to be seen on userspace then we can just not create
> a device for it.
> 
> > And as I explained above, a v4l2_subdev just implements an interface. It has
> > no relation to devices. And yes, I'm beginning to agree with you that subdevice
> > was a bad name because it suggested something that it simply isn't.
> > 
> > That said, I also see some advantages in doing this. For statistics or
> > histogram sub-devices you can implement a read() call to read the data
> > instead of using ioctl. It is more flexible in that respect.
> 
> I think this will be more flexible and will be less complex than creating a proxy
> device. For example, as you'll be directly addressing a device, you don't need to
> have any locking to avoid the risk that different threads accessing different
> sub-devices at the same time would result on a command sending to the wrong device.
> So, both kernel driver and userspace app can be simpler.
> 
> > This is definitely an interesting topic that can be discussed both during
> > the LPC and here on the list.
> > 
> > Regards,
> > 
> > 	Hans
> > 
> 
> 
> 
> 
> Cheers,
> Mauro
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 21:34                       ` Hans Verkuil
@ 2009-09-16 22:15                         ` Andy Walls
  2009-09-17  6:35                           ` Hans Verkuil
  2009-09-17 12:44                         ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 57+ messages in thread
From: Andy Walls @ 2009-09-16 22:15 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Mauro Carvalho Chehab, linux-media

On Wed, 2009-09-16 at 23:34 +0200, Hans Verkuil wrote:
> On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
> > Em Wed, 16 Sep 2009 21:21:16 +0200

> C) in all other cases you only get it if a kernel config option is on. And since
> any advanced controls are still exposed in sysfs you can still change those even
> if the config option was off.

That is a user interface and support annoyance.  Either decide to have a
node for a subdevice or don't.  If a distribution wants to suppress them,
udev rules could suffice - right?  Changing udev rules is
(theoretically) easier than rebuilding the kernel for most end users.

Regards,
Andy


> What do you think about that? I would certainly like to hear what people think
> about this.
> 
> Regards,
> 
> 	Hans



^ permalink raw reply	[flat|nested] 57+ messages in thread

* RE: RFCv2: Media controller proposal
  2009-09-16 20:50                     ` Mauro Carvalho Chehab
  2009-09-16 21:34                       ` Hans Verkuil
@ 2009-09-16 22:28                       ` Karicheri, Muralidharan
  2009-09-17  6:34                         ` Hans Verkuil
  1 sibling, 1 reply; 57+ messages in thread
From: Karicheri, Muralidharan @ 2009-09-16 22:28 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Hans Verkuil; +Cc: linux-media

>
>> And as I explained above, a v4l2_subdev just implements an interface. It
>> has no relation to devices. And yes, I'm beginning to agree with you that
>> subdevice was a bad name because it suggested something that it simply isn't.
>>
>> That said, I also see some advantages in doing this. For statistics or
>> histogram sub-devices you can implement a read() call to read the data
>> instead of using ioctl. It is more flexible in that respect.
>
>I think this will be more flexible and less complex than creating a proxy
>device. For example, as you'll be directly addressing a device, you don't
>need any locking to avoid the risk that different threads accessing
>different sub-devices at the same time would result in a command being sent
>to the wrong device. So, both the kernel driver and the userspace app can
>be simpler.


Not really. Think of a user application trying to parse the output of a
histogram, which will really be about 4K in size as described by Laurent.
Imagine the application doing a lot of parsing to decode the values thrown at
it by sysfs. Moreover, on different platforms they can be in different formats.
With an ioctl, each of these platforms provides an API to access them, which is
much simpler to use. The same goes for configuring the IPIPE on DM355/DM365,
where there are hundreds of parameters, and you would have to write a lot of
sysfs parsing code for each of these variables. I can see it becoming a
nightmare for user-space library or application developers.

>
>> This is definitely an interesting topic that can be discussed both during
>> the LPC and here on the list.
>>
>> Regards,
>>
>> 	Hans
>>
>
>
>
>
>Cheers,
>Mauro
>--
>To unsubscribe from this list: send the line "unsubscribe linux-media" in
>the body of a message to majordomo@vger.kernel.org
>More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 22:28                       ` Karicheri, Muralidharan
@ 2009-09-17  6:34                         ` Hans Verkuil
  2009-09-17 12:11                           ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-17  6:34 UTC (permalink / raw)
  To: Karicheri, Muralidharan; +Cc: Mauro Carvalho Chehab, linux-media

On Thursday 17 September 2009 00:28:38 Karicheri, Muralidharan wrote:
> >
> >> And as I explained above, a v4l2_subdev just implements an interface. It
> >> has no relation to devices. And yes, I'm beginning to agree with you that
> >> subdevice was a bad name because it suggested something that it simply isn't.
> >>
> >> That said, I also see some advantages in doing this. For statistics or
> >> histogram sub-devices you can implement a read() call to read the data
> >> instead of using ioctl. It is more flexible in that respect.
> >
> >I think this will be more flexible and less complex than creating a proxy
> >device. For example, as you'll be directly addressing a device, you don't
> >need any locking to avoid the risk that different threads accessing
> >different sub-devices at the same time would result in a command being sent
> >to the wrong device. So, both the kernel driver and the userspace app can
> >be simpler.
> 
> 
> Not really. Think of a user application trying to parse the output of a
> histogram, which will really be about 4K in size as described by Laurent.
> Imagine the application doing a lot of parsing to decode the values thrown at
> it by sysfs. Moreover, on different platforms they can be in different formats.
> With an ioctl, each of these platforms provides an API to access them, which is
> much simpler to use. The same goes for configuring the IPIPE on DM355/DM365,
> where there are hundreds of parameters, and you would have to write a lot of
> sysfs parsing code for each of these variables. I can see it becoming a
> nightmare for user-space library or application developers.

I believe Mauro was talking about normal device nodes, not sysfs.

What is a bit more complex in Mauro's scheme is that to get hold of the right
device node needed to access a sub-device you will need to first get the
subdev's entity information from the media controller, then go to libudev to
translate major/minor numbers to an actual device path, and then open that.
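
Roughly, the libudev step would look like this (the libudev calls are real;
how the major/minor pair is obtained from the media controller is the
hypothetical part):

#include <libudev.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* Translate a (major, minor) pair - as it would be reported by the
 * media controller enumeration - into a device node path. */
static char *devnode_from_devnum(struct udev *udev, dev_t devnum)
{
	struct udev_device *dev = udev_device_new_from_devnum(udev, 'c', devnum);
	char *path = NULL;

	if (dev) {
		const char *node = udev_device_get_devnode(dev);

		if (node)
			path = strdup(node);
		udev_device_unref(dev);
	}
	return path;
}

The struct udev handle would come from a plain udev_new() at startup.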

On the other hand, we will have a library available to do this.

On balance I think that the kernel implementation will be more complex by
creating device nodes, although not by much, and that userspace will be
slightly simpler in the case of using the same mc filehandle in a multi-
threaded application.

Regards,

	Hans


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 22:15                         ` Andy Walls
@ 2009-09-17  6:35                           ` Hans Verkuil
  2009-09-17 11:59                             ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 57+ messages in thread
From: Hans Verkuil @ 2009-09-17  6:35 UTC (permalink / raw)
  To: Andy Walls; +Cc: Mauro Carvalho Chehab, linux-media

On Thursday 17 September 2009 00:15:23 Andy Walls wrote:
> On Wed, 2009-09-16 at 23:34 +0200, Hans Verkuil wrote:
> > On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
> > > Em Wed, 16 Sep 2009 21:21:16 +0200
> 
> > C) in all other cases you only get it if a kernel config option is on. And since
> > any advanced controls are still exposed in sysfs you can still change those even
> > if the config option is off.
> 
> That is a user interface and support annoyance.  Either decide to have a
> node for a subdevice or don't.  If a distribution wants to suppress them,
> udev rules could suffice - right?  Changing udev rules is
> (theoretically) easier than rebuilding the kernel for most end users.

Good point.

	Hans

> 
> Regards,
> Andy
> 
> 
> > What do you think about that? I would certainly like to hear what people think
> > about this.
> > 
> > Regards,
> > 
> > 	Hans
> 
> 
> 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-17  6:35                           ` Hans Verkuil
@ 2009-09-17 11:59                             ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-17 11:59 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Andy Walls, linux-media

Em Thu, 17 Sep 2009 08:35:57 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On Thursday 17 September 2009 00:15:23 Andy Walls wrote:
> > On Wed, 2009-09-16 at 23:34 +0200, Hans Verkuil wrote:
> > > On Wednesday 16 September 2009 22:50:43 Mauro Carvalho Chehab wrote:
> > > > Em Wed, 16 Sep 2009 21:21:16 +0200
> > 
> > > C) in all other cases you only get it if a kernel config option is on. And since
> > > any advanced controls are still exposed in sysfs you can still change those even
> > > if the config option is off.
> > 
> > That is a user interface and support annoyance.  Either decide to have a
> > node for a subdevice or don't.  If a distribution wants to suppress them,
> > udev rules could suffice - right?  Changing udev rules is
> > (theoretically) easier than rebuilding the kernel for most end users.
> 
> Good point.

I suspect that, in practice, the drivers will speak for themselves: e. g.
drivers that are used on embedded systems and that require extra parameters
for tweaking will add some callback methods to indicate to the V4L2 core that
they need a /dev node. Others will not implement those methods and won't have
any /dev node associated.
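
Roughly along these lines (a hypothetical sketch: neither the flag nor the
sd->flags field exists today, it only illustrates the idea):

#include <media/v4l2-subdev.h>

/* Hypothetical opt-in flag; nothing like this exists yet. */
#define V4L2_SUBDEV_FL_NEEDS_DEVNODE	(1U << 0)

static void my_embedded_subdev_setup(struct v4l2_subdev *sd)
{
	/* A driver with lots of tweak parameters asks the core for a
	 * /dev node; drivers that never set the flag get no node. */
	sd->flags |= V4L2_SUBDEV_FL_NEEDS_DEVNODE;
}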

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-17  6:34                         ` Hans Verkuil
@ 2009-09-17 12:11                           ` Mauro Carvalho Chehab
  2009-09-17 12:53                             ` Nova S2 HD scanning problems Claes Lindblom
  0 siblings, 1 reply; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-17 12:11 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Karicheri, Muralidharan, linux-media

Em Thu, 17 Sep 2009 08:34:23 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On Thursday 17 September 2009 00:28:38 Karicheri, Muralidharan wrote:
> > >
> > >> And as I explained above, a v4l2_subdev just implements an interface. It
> > >> has no relation to devices. And yes, I'm beginning to agree with you that
> > >> subdevice was a bad name because it suggested something that it simply
> > >> isn't.
> > >>
> > >> That said, I also see some advantages in doing this. For statistics or
> > >> histogram sub-devices you can implement a read() call to read the data
> > >> instead of using ioctl. It is more flexible in that respect.
> > >
> > > I think this will be more flexible and will be less complex than creating
> > > a proxy device. For example, as you'll be directly addressing a device,
> > > you don't need to have any locking to avoid the risk that different
> > > threads accessing different sub-devices at the same time would result in
> > > a command being sent to the wrong device. So, both the kernel driver and
> > > the userspace app can be simpler.
> > 
> > 
> > Not really. Consider a user application trying to parse the output of a
> > histogram, which will really be about 4K in size as described by Laurent.
> > Imagine the application doing a lot of parsing to decode the values exposed
> > through sysfs. Moreover, on a different platform they can be in a different
> > format. With an ioctl, each of these platforms provides an API to access
> > them, which is much simpler to use. The same goes for configuring the IPIPE
> > on DM355/DM365, where there are hundreds of parameters and you would have to
> > write a lot of sysfs parsing code for each of these variables. I can see it
> > becoming a nightmare for user space library and application developers.
> 
> I believe Mauro was talking about normal device nodes, not sysfs.

Yes.

> What is a bit more complex in Mauro's scheme is that to get hold of the right
> device node needed to access a sub-device you will need to first get the
> subdev's entity information from the media controller, then go to libudev to
> translate major/minor numbers to an actual device path, and then open that.

Good point. This reinforces my thesis that the media controller (or, at
least, its enumeration function) would be better done via sysfs.

As Andy pointed out, one of the biggest advantages is that udev can enrich
the user's experience by calling some tweak applications or by calling
special applications (like lirc) when certain media devices are created.
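
For example, a rule along these lines could start a helper whenever a media
device node appears (the "media" subsystem name and the helper binary are
assumptions, just to illustrate):

# Hypothetical udev rule: subsystem name and helper path are placeholders.
SUBSYSTEM=="media", ACTION=="add", RUN+="/usr/local/sbin/media-setup %k"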

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-16 21:34                       ` Hans Verkuil
  2009-09-16 22:15                         ` Andy Walls
@ 2009-09-17 12:44                         ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-09-17 12:44 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: linux-media

Em Wed, 16 Sep 2009 23:34:08 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> > I'm just guessing, but if the two use cases are so different, maybe we shouldn't
> > try to find a common solution for the two problems, or maybe we should use an
> > approach similar to debugfs, where you enable/mount it only where needed (embedded).
> 
> They are not *that* different. You still want the ability to discover the
> available device nodes for consumer products (e.g. the alsa device belonging
> to the video device). And there will no doubt be some borderline products
> belonging to, say, the professional consumer market. It's not black-and-white.

Agreed.

> > v4l2-object seems good. also the -host/-client terms that Guennadi is proposing.
> 
> Just an idea: why not rename struct v4l2_device to v4l2_mc and v4l2_subdev to
> v4l2_object? And if we decide to go all the way, then we can rename video_device
> to v4l2_devnode. Or perhaps we go straight to the media_ prefix instead.
> 
> The term 'client' has for me similar problems as 'device': it's used in so many
> different contexts that it is easy to get confused.

IMO, let's patch the docs, but, at least for a while, let's not change API names again.

Perhaps I'm just too stressed with all the extra merge work I had to do this
time due to the last function rename, which stopped me from merging patches
while the arch changes were not upstream... I generally take one or two days
to merge most patches, but I've been working hard this entire week due to that.

> > This can be easily solved: Just add a Kconfig option for the tweak interfaces
> > eventually making it depend on CONFIG_EMBEDDED.
> 
> An interesting idea. I don't think you want to make this specific for embedded
> devices only. It can be done as a separate config option within V4L.
> 
> I have a problem though: what to do with sub-devices (if you don't mind, I'll
> just keep using that term for now) that want to expose some advanced control.
> We have seen several requests for that lately. 

I think we should discuss this case by case. When I said that people were
considering the media controller as a replacement for the V4L2 API, I was
referring to the fact that lately all proposals are focused on doing things
only at the sub-devices where, in most cases, the control should be applied
via an already-existing API call.

> E.g. an AGC-TOP control for fine-tuning the AGC of tuners.

In this specific case, there's already an AFC parameter for
vidioc_[g/s]_tuner, which is also an example of an advanced control for
tuners. So, IMO, the proper place for AGC-TOP is together with AFC, e. g.,
in struct v4l2_tuner.
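
For reference, reading the existing AFC value today can be as simple as the
sketch below; the agc_top comment marks the hypothetical addition, not an
existing field:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void print_tuner_afc(int fd)
{
	struct v4l2_tuner tuner = { .index = 0 };

	if (ioctl(fd, VIDIOC_G_TUNER, &tuner) == 0)
		printf("AFC offset: %d\n", tuner.afc);
	/* A hypothetical tuner.agc_top would sit right next to tuner.afc. */
}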

> I think this example will be quite typical of several sub-devices: they may
> have one or two 'advanced' controls that can be useful in very particular
> cases for end-users.

In some cases, they can be just one extra G/S_CTRL.

We need a clear rule on what kind of controls should go via the current V4L2
standard way versus those that will go via a subdev interface, to avoid the
"mess controller" scenario.

IMO, they should only use the sub-dev interface when there is more than one
subdev associated with the same /dev/video interface and where each may need
different settings for the same control.

Let me use an arbitrary scenario:

/dev/video0 -> dsp0 -> dsp1 -> ...

let's imagine that both dsp0 and dsp1 blocks are identical, and can do
a set of image enhancement functions, including movement detection and image
filtering.

If we need to set the dsp0 block to do image filtering and the dsp1 block to
do movement detection, no current V4L2 method will fit. In this case, the
subdev interface should be used.
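
A rough sketch of how this could look with per-subdev nodes; the node paths,
control id and values are pure placeholders, and error handling is trimmed:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* All hypothetical: the control id and its values are made up. */
#define DSP_CID_FUNCTION	(V4L2_CID_PRIVATE_BASE + 0)
#define DSP_FUNC_FILTER		0
#define DSP_FUNC_MOTION		1

static int configure_dsps(void)
{
	struct v4l2_control c = { .id = DSP_CID_FUNCTION };
	int fd0 = open("/dev/v4l-subdev0", O_RDWR);	/* dsp0 */
	int fd1 = open("/dev/v4l-subdev1", O_RDWR);	/* dsp1 */

	if (fd0 < 0 || fd1 < 0)
		return -1;
	c.value = DSP_FUNC_FILTER;	/* dsp0: image filtering */
	ioctl(fd0, VIDIOC_S_CTRL, &c);
	c.value = DSP_FUNC_MOTION;	/* dsp1: movement detection */
	ioctl(fd1, VIDIOC_S_CTRL, &c);
	close(fd0);
	close(fd1);
	return 0;
}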

> There are a few possible ways of doing this:
> 
> 1) With the media controller concept from the RFC you can select the tuner
> subdev through the mc device node and call VIDIOC_S_CTRL on that node (and
> with QUERYCTRL you can also query all controls supported by that subdev,
> including these advanced controls).

In this case, what would happen if the S_CTRL were applied at /dev/video? There
would be several possible behaviors (refuse, apply to all subdevs, apply to the
first one that accepts it, etc.), each with advantages and disadvantages. IMO,
too messy.

> 2) Create a device node for each subdev even if they have just a single control
> to expose. Possible, but this still seems overkill for me.
> 
> 3) Use your idea of only creating a device node for subdevs if a kernel config
> is set. If no device nodes should be created, then the control framework can
> still export such advanced controls to sysfs, allowing end-users to change
> them. This is actually quite a nice idea: embedded systems or power-users can
> get full control through the device nodes, while the average end-user can
> just use the control from sysfs if he needs to tweak something.

IMO, both 2 and 3 are OK. Considering Andy's argument that we can always avoid
creating a device node via udev, (2) seems better.
> 
> 4) Same as 3) but you can still use the mc to select a sub-device and call
> ioctl on it. In other words, allow both mechanisms. It's trivial to implement,
> but I got to admit that I don't like it. It's not clean, somehow.

I also don't like it.

> So IF we go with the idea to create separate device nodes to access sub-devices,
> then a good scheme would be this:
> 
> A) if a sub-device needs no control from outside, then no device node is ever
> created. It is still enumerated in the media controller, but it has no associated
> node.
> 
> B) if a sub-device needs a lot of control from outside, then we always create a
> device node. This would typically be the case for e.g. a resizer or previewer
> that is a core part of an embedded platform.

(A) and (B) are OK.

> C) in all other cases you only get it if a kernel config option is on. And since
> any advanced controls are still exposed in sysfs you can still change those even
> if the config option is off.

Again, considering Andy's argument, maybe we can avoid having an extra config
option for it, letting distros disable or enable those interfaces in sysfs.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Nova S2 HD scanning problems
  2009-09-17 12:11                           ` Mauro Carvalho Chehab
@ 2009-09-17 12:53                             ` Claes Lindblom
  2009-09-18  6:42                               ` Claes Lindblom
  0 siblings, 1 reply; 57+ messages in thread
From: Claes Lindblom @ 2009-09-17 12:53 UTC (permalink / raw)
  To: linux-media; +Cc: claesl

Hi all,
I have just installed my new Nova S2 HD card on Ubuntu x86_64,
2.6.28-13-generic, and changed the firmware to version 1.23.86.1; I have also
tried version 1.22.82.0. The result is that I can tune and watch channels
without any problems, but when I try to scan for channels it will not work.

The card is loaded as adapter 1, so I use the following command to scan with
the latest scan-s2:
sudo ./scan-s2 -a 1 -s 0 -l UNIVERSAL dvb-s/Thor-1.0W

I have a DiSEqC switch, but I know it's working since I have an Azurewave AD
SP400 card that I have scanned channels with before, and the card can also
tune in on both LNBs with szap-s2.

Has anyone ideas about the scanning problems?

Dmesg does not output anything.
Output of scan-s2:
----------------------------------> Using DVB-S
 >>> tune to: 11216:vC78S0:S0.0W:24500:
DVB-S IF freq is 1466000
WARNING: >>> tuning failed!!!
 >>> tune to: 11216:vC78S0:S0.0W:24500: (tuning failed)
DVB-S IF freq is 1466000
WARNING: >>> tuning failed!!!
----------------------------------> Using DVB-S2
 >>> tune to: 11216:vC78S1:S0.0W:24500:
DVB-S IF freq is 1466000
WARNING: >>> tuning failed!!!
 >>> tune to: 11216:vC78S1:S0.0W:24500: (tuning failed)
DVB-S IF freq is 1466000
WARNING: >>> tuning failed!!!


Regards
/Claes


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: Nova S2 HD scanning problems
  2009-09-17 12:53                             ` Nova S2 HD scanning problems Claes Lindblom
@ 2009-09-18  6:42                               ` Claes Lindblom
  0 siblings, 0 replies; 57+ messages in thread
From: Claes Lindblom @ 2009-09-18  6:42 UTC (permalink / raw)
  To: linux-media; +Cc: claesl

Claes Lindblom wrote:
> Hi all,
> I have just installed my new Nova S2 HD card on Ubuntu x86_64,
> 2.6.28-13-generic, and changed the firmware to version 1.23.86.1; I have
> also tried version 1.22.82.0. The result is that I can tune and watch
> channels without any problems, but when I try to scan for channels it will
> not work.
>
> The card is loaded as adapter 1, so I use the following command to scan
> with the latest scan-s2:
> sudo ./scan-s2 -a 1 -s 0 -l UNIVERSAL dvb-s/Thor-1.0W
>
I have made a strange discovery about this. scan-s2 does not work, but when I
used scan it worked, and after that I started a new scan-s2 which suddenly
worked. A while later I tried again but it did not work, so I had to use scan
first and then scan-s2.

I think this is really strange when I have succeeded with scan-s2 on my
other card.

Another issue that appeared with this is that MythTV 0.22 cannot present any
info on the frontend when installing new cards. I know that it might not
belong on this mailing list, but it feels like it's connected somehow.
> I have a DiSEqC switch, but I know it's working since I have an Azurewave
> AD SP400 card that I have scanned channels with before, and the card can
> also tune in on both LNBs with szap-s2.
>
> Has anyone ideas about the scanning problems?
>
> Dmesg does not output anything.
> Output of scan-s2:
> ----------------------------------> Using DVB-S
> >>> tune to: 11216:vC78S0:S0.0W:24500:
> DVB-S IF freq is 1466000
> WARNING: >>> tuning failed!!!
> >>> tune to: 11216:vC78S0:S0.0W:24500: (tuning failed)
> DVB-S IF freq is 1466000
> WARNING: >>> tuning failed!!!
> ----------------------------------> Using DVB-S2
> >>> tune to: 11216:vC78S1:S0.0W:24500:
> DVB-S IF freq is 1466000
> WARNING: >>> tuning failed!!!
> >>> tune to: 11216:vC78S1:S0.0W:24500: (tuning failed)
> DVB-S IF freq is 1466000
> WARNING: >>> tuning failed!!!
>
>
> Regards
> /Claes
>


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-11 21:37                       ` Mauro Carvalho Chehab
  2009-09-11 22:25                         ` Hans Verkuil
@ 2009-09-21 17:22                         ` Sakari Ailus
  2009-10-27  8:04                           ` Guennadi Liakhovetski
  1 sibling, 1 reply; 57+ messages in thread
From: Sakari Ailus @ 2009-09-21 17:22 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Hiremath, Vaibhav, Devin Heitmueller, linux-media,
	Cohen David Abraham, Koskipaa Antti (Nokia-D/Helsinki),
	Zutshi Vimarsh

Mauro Carvalho Chehab wrote:
> Em Fri, 11 Sep 2009 22:15:15 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
>> On Friday 11 September 2009 21:59:37 Mauro Carvalho Chehab wrote:
>>> Em Fri, 11 Sep 2009 21:23:44 +0200
>>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>>> The second problem is that this will pollute the 'namespace' of a v4l device
>>>> node. Device drivers need to pass all those private ioctls to the right
>>>> sub-device. But they shouldn't have to care about that. If someone wants to
>>>> tweak the resizer (e.g. scaling coefficients), then pass it straight to the
>>>> resizer component.
>>> Sorry, I missed your point here
>> Example: a sub-device can produce certain statistics. You want to have an
>> ioctl to obtain those statistics. If you call that through /dev/videoX, then
>> that main driver has to handle that ioctl in vidioc_default and pass it on
>> to the right subdev. So you have to write that vidioc_default handler,
>> know about the sub-devices that you have and which sub-device is linked to
>> the device node. You really don't want to have to do that. Especially not
>> when you are dealing with i2c devices that are loaded from platform code.
>> If a video encoder supports private ioctls, then an omap3 driver doesn't
>> want to know about that. Oh, and before you ask: just broadcasting that
>> ioctl is not a solution if you have multiple identical video encoders.
> 
> This can be as easy as reading from /sys/class/media/dsp:stat0/stats

In general, the H3A block producing the statistics is configured first,
after which it starts producing statistics. Statistics buffers are usually
smallish; the maximum size is half a MiB or so. For such a buffer you'd have
to ask for the data a number of times, since the sysfs show() limit is one
page (usually 4 KiB).
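
For comparison, an ioctl-style interface could fetch the whole buffer in one
call. This is a sketch only; every name below is an assumption:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical private ioctl and layout; names and numbers are made up. */
struct h3a_stats {
	__u32 frame_id;	/* frame the statistics were computed from */
	__u32 size;	/* bytes filled into the user buffer */
	void *buf;	/* user-allocated buffer, up to ~0.5 MiB */
};
#define VIDIOC_G_H3A_STATS	_IOWR('V', BASE_VIDIOC_PRIVATE, struct h3a_stats)

/* One call returns the full buffer; via sysfs the same 0.5 MiB would take
 * 128 separate 4 KiB reads. */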

Statistics are also often available before the actual frame, since the whole
frame is not used to compute them. The statistics are used by e.g. the AEWB
algorithm, which then comes up with new exposure and gain values. Applying
them to the sensor in time is important, since the sensor may start exposing
a new frame even before the last one has ended.

This requires event delivery to userspace (Laurent has written about it
under subject "[RFC] Video events").

-- 
Sakari Ailus
sakari.ailus@maxwell.research.nokia.com




^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-09-21 17:22                         ` Sakari Ailus
@ 2009-10-27  8:04                           ` Guennadi Liakhovetski
  2009-10-27 13:56                             ` Devin Heitmueller
  0 siblings, 1 reply; 57+ messages in thread
From: Guennadi Liakhovetski @ 2009-10-27  8:04 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Mauro Carvalho Chehab, Hans Verkuil, Hiremath, Vaibhav,
	Devin Heitmueller, linux-media, Cohen David Abraham,
	Koskipaa Antti (Nokia-D/Helsinki),
	Zutshi Vimarsh

Hi

(repeating my preamble from a previous post)

This is a general comment to the whole "media controller" work: having 
given a talk at the ELC-E in Grenoble on soc-camera, I mentioned briefly a 
few related RFCs, including this one. I've got a couple of comments back, 
including the following ones (which is to say, opinions are not mine and 
may or may not be relevant, I'm just fulfilling my promise to pass them 
on;)):

1) what about DVB? Wouldn't they also benefit from such an API? I wasn't
able to answer the question of whether the DVB folks know about this and
have a chance to take part in the discussion and eventually use this API.

2) what I am even less sure about is, whether ALSA / ASoC have been 
mentioned as possible users of MC, or, at least, possible sources for 
ideas. ASoC has definitely been mentioned as an audio analog of 
soc-camera, so, I'll be looking at that - at least at their documentation 
- to see if I can borrow some of their ideas:-)

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-10-27  8:04                           ` Guennadi Liakhovetski
@ 2009-10-27 13:56                             ` Devin Heitmueller
  2009-11-05 14:22                               ` Hans Verkuil
  0 siblings, 1 reply; 57+ messages in thread
From: Devin Heitmueller @ 2009-10-27 13:56 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Sakari Ailus, Mauro Carvalho Chehab, Hans Verkuil, Hiremath,
	Vaibhav, linux-media, Cohen David Abraham,
	Koskipaa Antti (Nokia-D/Helsinki),
	Zutshi Vimarsh

On Tue, Oct 27, 2009 at 4:04 AM, Guennadi Liakhovetski
<g.liakhovetski@gmx.de> wrote:
> Hi
>
> (repeating my preamble from a previous post)
>
> This is a general comment to the whole "media controller" work: having
> given a talk at the ELC-E in Grenoble on soc-camera, I mentioned briefly a
> few related RFCs, including this one. I've got a couple of comments back,
> including the following ones (which is to say, opinions are not mine and
> may or may not be relevant, I'm just fulfilling my promise to pass them
> on;)):
>
> 1) what about DVB? Wouldn't they also benefit from such an API? I wasn't
> able to answer the question of whether the DVB folks know about this and
> have a chance to take part in the discussion and eventually use this API.

The extent to which DVB applies is that the DVB devices will appear in
the MC enumeration.  This will allow userland to see "hybrid devices"
where both DVB and analog are tied to the same tuner and cannot be used
at the same time.

> 2) what I am even less sure about is, whether ALSA / ASoC have been
> mentioned as possible users of MC, or, at least, possible sources for
> ideas. ASoC has definitely been mentioned as an audio analog of
> soc-camera, so, I'll be looking at that - at least at their documentation
> - to see if I can borrow some of their ideas:-)

ALSA devices will definitely be available, although at this point I
have no reason to believe this will require changes to the ALSA code
itself.  All of the changes involve enumeration within v4l to find the
correct ALSA device associated with the tuner and report the correct
card number.  The ALSA case is actually my foremost concern with
regards to the MC API, since it will solve the problem of applications
such as tvtime figuring out which ALSA device to play back audio on.
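
To make that concrete, the lookup could look roughly like this from the
application side; every struct, ioctl and type name below is a placeholder,
since this enumeration API is exactly what is being designed:

#include <stddef.h>
#include <sys/ioctl.h>

/* All hypothetical: entity descriptor, ioctl and type code are made up. */
struct mc_entity {
	unsigned int id;	/* input: entity index to query */
	unsigned int type;	/* output: e.g. MC_ENTITY_ALSA_CAPTURE */
	char name[32];		/* output: e.g. "hw:1,0" for an ALSA entity */
};
#define MC_IOC_ENUM_ENTITIES	_IOWR('M', 0, struct mc_entity)
#define MC_ENTITY_ALSA_CAPTURE	3

/* Walk the board's entities and return the ALSA capture device name. */
static const char *find_alsa_dev(int mc_fd, struct mc_entity *ent)
{
	for (ent->id = 0; ioctl(mc_fd, MC_IOC_ENUM_ENTITIES, ent) == 0; ent->id++)
		if (ent->type == MC_ENTITY_ALSA_CAPTURE)
			return ent->name;
	return NULL;	/* no ALSA entity on this board */
}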

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-10-27 13:56                             ` Devin Heitmueller
@ 2009-11-05 14:22                               ` Hans Verkuil
  2009-11-05 16:02                                 ` Devin Heitmueller
  2009-11-05 16:23                                 ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 57+ messages in thread
From: Hans Verkuil @ 2009-11-05 14:22 UTC (permalink / raw)
  To: Devin Heitmueller
  Cc: Guennadi Liakhovetski, Sakari Ailus, Mauro Carvalho Chehab,
	Hiremath, Vaibhav, linux-media, Cohen David Abraham,
	Koskipaa Antti (Nokia-D/Helsinki),
	Zutshi Vimarsh

On Tuesday 27 October 2009 14:56:24 Devin Heitmueller wrote:
> On Tue, Oct 27, 2009 at 4:04 AM, Guennadi Liakhovetski
> <g.liakhovetski@gmx.de> wrote:
> > Hi
> >
> > (repeating my preamble from a previous post)
> >
> > This is a general comment to the whole "media controller" work: having
> > given a talk at the ELC-E in Grenoble on soc-camera, I mentioned briefly a
> > few related RFCs, including this one. I've got a couple of comments back,
> > including the following ones (which is to say, opinions are not mine and
> > may or may not be relevant, I'm just fulfilling my promise to pass them
> > on;)):
> >
> > 1) what about DVB? Wouldn't they also benefit from such an API? I wasn't
> > able to answer the question of whether the DVB folks know about this and
> > have a chance to take part in the discussion and eventually use this API.
> 
> The extent to which DVB applies is that the DVB devices will appear in
> the MC enumeration.  This will allow userland to see "hybrid devices"
> where both DVB and analog are tied to the same tuner and cannot be used
> at the same time.
> 
> > 2) what I am even less sure about is, whether ALSA / ASoC have been
> > mentioned as possible users of MC, or, at least, possible sources for
> > ideas. ASoC has definitely been mentioned as an audio analog of
> > soc-camera, so, I'll be looking at that - at least at their documentation
> > - to see if I can borrow some of their ideas:-)
> 
> ALSA devices will definitely be available, although at this point I
> have no reason to believe this will require changes to the ALSA code
> itself.  All of the changes involve enumeration within v4l to find the
> correct ALSA device associated with the tuner and report the correct
> card number.  The ALSA case is actually my foremost concern with
> regards to the MC API, since it will solve the problem of applications
> such as tvtime figuring out which ALSA device to play back audio on.
> 
> Devin
> 

Does anyone know if ALSA has routing problems similar to those we have for
SoCs? Currently the MC can be used to discover and change the routing of
video streams, but it would be very easy indeed to include audio streams (or
any other type of stream, for that matter) as well.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-11-05 14:22                               ` Hans Verkuil
@ 2009-11-05 16:02                                 ` Devin Heitmueller
  2009-11-05 16:23                                 ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 57+ messages in thread
From: Devin Heitmueller @ 2009-11-05 16:02 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Guennadi Liakhovetski, Sakari Ailus, Mauro Carvalho Chehab,
	Hiremath, Vaibhav, linux-media, Cohen David Abraham,
	Koskipaa Antti (Nokia-D/Helsinki),
	Zutshi Vimarsh

On Thu, Nov 5, 2009 at 9:22 AM, Hans Verkuil <hverkuil@xs4all.nl> wrote:
> Does anyone know if ALSA has routing problems similar to those we have for
> SoCs? Currently the MC can be used to discover and change the routing of
> video streams, but it would be very easy indeed to include audio streams
> (or any other type of stream, for that matter) as well.
>
> Regards,
>
>        Hans

As far as I have seen, generally speaking the audio rerouting is done
automatically when changing video sources (and doesn't get done by
ALSA itself but rather in the code for the decoder or bridge).  In
theory people might want to be able to play with the routing through
some sort of ALSA controls, but I don't think anyone is doing that
now.

Devin


-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: RFCv2: Media controller proposal
  2009-11-05 14:22                               ` Hans Verkuil
  2009-11-05 16:02                                 ` Devin Heitmueller
@ 2009-11-05 16:23                                 ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 57+ messages in thread
From: Mauro Carvalho Chehab @ 2009-11-05 16:23 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Devin Heitmueller, Guennadi Liakhovetski, Sakari Ailus, Hiremath,
	Vaibhav, linux-media, Cohen David Abraham,
	Koskipaa Antti (Nokia-D/Helsinki),
	Zutshi Vimarsh

Em Thu, 5 Nov 2009 15:22:09 +0100
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On Tuesday 27 October 2009 14:56:24 Devin Heitmueller wrote:
> > On Tue, Oct 27, 2009 at 4:04 AM, Guennadi Liakhovetski
> > <g.liakhovetski@gmx.de> wrote:
> > > Hi
> > >
> > > (repeating my preamble from a previous post)
> > >
> > > This is a general comment to the whole "media controller" work: having
> > > given a talk at the ELC-E in Grenoble on soc-camera, I mentioned briefly a
> > > few related RFCs, including this one. I've got a couple of comments back,
> > > including the following ones (which is to say, opinions are not mine and
> > > may or may not be relevant, I'm just fulfilling my promise to pass them
> > > on;)):
> > >
> > > 1) what about DVB? Wouldn't they also benefit from such an API? I wasn't
> > > able to answer the question of whether the DVB folks know about this and
> > > have a chance to take part in the discussion and eventually use this API.
> > 
> > The extent to which DVB applies is that the DVB devices will appear in
> > the MC enumeration.  This will allow userland to see "hybrid devices"
> > where both DVB and analog are tied to the same tuner and cannot be used
> > at the same time.
> > 
> > > 2) what I am even less sure about is, whether ALSA / ASoC have been
> > > mentioned as possible users of MC, or, at least, possible sources for
> > > ideas. ASoC has definitely been mentioned as an audio analog of
> > > soc-camera, so, I'll be looking at that - at least at their documentation
> > > - to see if I can borrow some of their ideas:-)
> > 
> > ALSA devices will definitely be available, although at this point I
> > have no reason to believe this will require changes to the ALSA code
> > itself.  All of the changes involve enumeration within v4l to find the
> > correct ALSA device associated with the tuner and report the correct
> > card number.  The ALSA case is actually my foremost concern with
> > regards to the MC API, since it will solve the problem of applications
> > such as tvtime figuring out which ALSA device to play back audio on.
> > 
> > Devin
> > 
> 
> Does anyone know if ALSA has routing problems similar to those we have for
> SoCs? Currently the MC can be used to discover and change the routing of
> video streams, but it would be very easy indeed to include audio streams
> (or any other type of stream, for that matter) as well.

em28xx can have an ac97 device with lots of mixers inside, for input and for
output. Some of those ac97 chips are also present on some motherboards with
advanced audio, although I'm not sure if this makes much sense here.

On some devices (for example Hauppauge USB 2), there are separate output
mixers for the analog output via the output plug and for the digital output.

On others, you could eventually mix the audio input from the tuner with an
external audio source.

The more complex ac97 are more common on capture-only devices.

Currently, we don't support those advanced usages. For example, with the
Hauppauge USB 2, the code just assumes that we want audio on both the analog
and digital outputs, but the device allows independent control of each
volume.

I'm not sure what would be the proper way to map this: maybe two independent
mixers in -alsa? Yet, what happens if the -alsa module isn't compiled?
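
If we went the two-mixers route, the -alsa side could register two
independent controls. A sketch with placeholder callbacks:

#include <sound/control.h>

/* Sketch only: the callbacks stand in for the usual driver code;
 * private_value selects which hardware output mixer is addressed. */
static int em28xx_vol_info(struct snd_kcontrol *k, struct snd_ctl_elem_info *i);
static int em28xx_vol_get(struct snd_kcontrol *k, struct snd_ctl_elem_value *v);
static int em28xx_vol_put(struct snd_kcontrol *k, struct snd_ctl_elem_value *v);

static const struct snd_kcontrol_new em28xx_output_vols[] = {
	{
		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
		.name  = "Analog Playback Volume",
		.info  = em28xx_vol_info,
		.get   = em28xx_vol_get,
		.put   = em28xx_vol_put,
		.private_value = 0,	/* analog output mixer */
	},
	{
		.iface = SNDRV_CTL_ELEM_IFACE_MIXER,
		.name  = "Digital Playback Volume",
		.info  = em28xx_vol_info,
		.get   = em28xx_vol_get,
		.put   = em28xx_vol_put,
		.private_value = 1,	/* digital output mixer */
	},
};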

I'm also not quite sure what the media controller's role would be in such cases.

Cheers,
Mauro

^ permalink raw reply	[flat|nested] 57+ messages in thread

end of thread, other threads:[~2009-11-05 16:24 UTC | newest]

Thread overview: 57+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-09-10  7:13 RFCv2: Media controller proposal Hans Verkuil
2009-09-10 13:01 ` Patrick Boettcher
2009-09-10 13:50   ` Hans Verkuil
2009-09-10 14:24     ` Patrick Boettcher
2009-09-10 15:00       ` Hans Verkuil
2009-09-10 19:19         ` Karicheri, Muralidharan
2009-09-10 20:27           ` Hans Verkuil
2009-09-10 23:08             ` Karicheri, Muralidharan
2009-09-11  6:20               ` Hans Verkuil
2009-09-11  6:29                 ` Hiremath, Vaibhav
2009-09-11  6:26             ` Hiremath, Vaibhav
2009-09-15 11:36         ` Laurent Pinchart
2009-09-10 20:20 ` Mauro Carvalho Chehab
2009-09-10 20:27   ` Devin Heitmueller
2009-09-11 12:59     ` Mauro Carvalho Chehab
2009-09-10 21:35   ` Hans Verkuil
2009-09-11 15:13     ` Mauro Carvalho Chehab
2009-09-11 15:46       ` Devin Heitmueller
2009-09-11 15:53         ` Hiremath, Vaibhav
2009-09-11 17:03           ` Mauro Carvalho Chehab
2009-09-11 17:34             ` Hiremath, Vaibhav
2009-09-11 18:52               ` Mauro Carvalho Chehab
2009-09-11 19:23                 ` Hans Verkuil
2009-09-11 19:59                   ` Mauro Carvalho Chehab
2009-09-11 20:15                     ` Hans Verkuil
2009-09-11 21:37                       ` Mauro Carvalho Chehab
2009-09-11 22:25                         ` Hans Verkuil
2009-09-21 17:22                         ` Sakari Ailus
2009-10-27  8:04                           ` Guennadi Liakhovetski
2009-10-27 13:56                             ` Devin Heitmueller
2009-11-05 14:22                               ` Hans Verkuil
2009-11-05 16:02                                 ` Devin Heitmueller
2009-11-05 16:23                                 ` Mauro Carvalho Chehab
2009-09-11 19:08       ` Hans Verkuil
2009-09-11 19:54         ` Mauro Carvalho Chehab
2009-09-11 20:29           ` Hans Verkuil
2009-09-11 21:28             ` Mauro Carvalho Chehab
2009-09-11 22:39               ` Hans Verkuil
2009-09-16 18:15                 ` Mauro Carvalho Chehab
2009-09-16 19:21                   ` Hans Verkuil
2009-09-16 20:38                     ` Guennadi Liakhovetski
2009-09-16 20:50                     ` Mauro Carvalho Chehab
2009-09-16 21:34                       ` Hans Verkuil
2009-09-16 22:15                         ` Andy Walls
2009-09-17  6:35                           ` Hans Verkuil
2009-09-17 11:59                             ` Mauro Carvalho Chehab
2009-09-17 12:44                         ` Mauro Carvalho Chehab
2009-09-16 22:28                       ` Karicheri, Muralidharan
2009-09-17  6:34                         ` Hans Verkuil
2009-09-17 12:11                           ` Mauro Carvalho Chehab
2009-09-17 12:53                             ` Nova S2 HD scanning problems Claes Lindblom
2009-09-18  6:42                               ` Claes Lindblom
2009-09-10 21:28 ` RFCv2: Media controller proposal Guennadi Liakhovetski
2009-09-10 21:59   ` Hans Verkuil
2009-09-15 12:28     ` Laurent Pinchart
2009-09-11  6:16 ` Hiremath, Vaibhav
2009-09-11  6:35   ` Hans Verkuil
