* Questions regarding Devices/Subdevices/MediaController usage in case of a SoC
@ 2011-09-02  9:38 Alain VOLMAT
  2011-09-02 21:30 ` Sakari Ailus
  0 siblings, 1 reply; 6+ messages in thread
From: Alain VOLMAT @ 2011-09-02  9:38 UTC (permalink / raw)
  To: linux-media

Hi,

I'm writing to you for some advice on the design of a V4L2 driver for a rather complex device. The questions mainly relate to devices, subdevs and the media controller.

This driver would target SoCs which typically handle inputs (capture devices, e.g. LinuxDVB, HDMI capture), several layers of graphical or video planes, and outputs such as HDMI/analog.
Basically we have 3 levels: data from capture devices is pushed onto planes, and planes are mixed on outputs. Moreover, it is also possible to input or output data at several points of the device.

The idea is to take advantage of the new media controller in order to define internal data paths by linking capture devices to layers and layers to outputs.
Since the media controller allows linking pads of entities together, our understanding is that we need 1 subdevice per hardware resource. That is, if we have 2 planes, we will have 2 subdevices handling them; same for outputs and capture.
Is our understanding correct? (An illustration follows below.)
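
For illustration, a minimal kernel-side sketch of the "one subdevice per
hardware resource" idea, for a plane with one sink and one source pad. All
st_plane_* names are hypothetical, and the helpers are the generic
media/V4L2 core ones under their current in-tree names (they have been
renamed over the years):

#include <linux/string.h>
#include <media/media-entity.h>
#include <media/v4l2-device.h>
#include <media/v4l2-subdev.h>

/* One subdev per hardware plane: a sink pad fed by a capture device
 * and a source pad feeding a mixer/output. */
struct st_plane {
	struct v4l2_subdev sd;
	struct media_pad pads[2];	/* 0: sink, 1: source */
};

static const struct v4l2_subdev_ops st_plane_ops = {
	/* subdev ops would go here */
};

static int st_plane_init(struct st_plane *plane,
			 struct v4l2_device *v4l2_dev, const char *name)
{
	int ret;

	v4l2_subdev_init(&plane->sd, &st_plane_ops);
	strscpy(plane->sd.name, name, sizeof(plane->sd.name));

	plane->pads[0].flags = MEDIA_PAD_FL_SINK;
	plane->pads[1].flags = MEDIA_PAD_FL_SOURCE;
	ret = media_entity_pads_init(&plane->sd.entity, 2, plane->pads);
	if (ret)
		return ret;

	return v4l2_device_register_subdev(v4l2_dev, &plane->sd);
}

A capture-to-plane link would then be created with something like
media_create_pad_link(&capture_sd->entity, 1, &plane->sd.entity, 0, 0).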

A second point is about the number of device nodes. I think there are 2 ways of doing this, and I would like to get your opinion on both.
#1 Single device:
I could imagine a single device which exposes several inputs and outputs. We could enumerate them with VIDIOC_ENUM* and select them using VIDIOC_S_*. After the selection, data exchange could be done by specifying the proper buffer type. The merit of such a model is that an application would only have to access the single available /dev/video0 for everything, without having to know that video0 is for capture, video1 for output and so on. A sketch of this usage follows below.
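
To make option #1 concrete, the application side would look roughly like
this (the /dev/video0 path is of course an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_input input;
	int index = 0;
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0)
		return 1;

	/* Enumerate every input the single node exposes... */
	memset(&input, 0, sizeof(input));
	for (input.index = 0; ioctl(fd, VIDIOC_ENUMINPUT, &input) == 0;
	     input.index++)
		printf("input %u: %s\n", input.index, input.name);

	/* ...then select one. VIDIOC_ENUMOUTPUT / VIDIOC_S_OUTPUT
	 * would work the same way for the output side. */
	ioctl(fd, VIDIOC_S_INPUT, &index);

	close(fd);
	return 0;
}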

#2 Multiple devices:
In this case, each video device would only provide a single input or output (or a small number of similar ones), so several video device nodes would be available to the application.
Looking at some other drivers such as the OMAP4 ISP or Samsung S5P, this seems to be the preferred way to go; is that correct? This way also fits better with the V4L2 model of device types (video capture device, video output device), since way #1 would ultimately create a single big device which implements a mix of all those devices.

As far as the media controller is concerned, since all those resources are not shareable, it seems proper to have only a single media entry point to set up the SoC and its internal data paths (rather than abstracting media devices to match their video device counterparts).

It would be very helpful if you could advise me on the preferred design, based on your experience, existing drivers and existing applications.

Best regards,

Alain Volmat

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Questions regarding Devices/Subdevices/MediaController usage in case of a SoC
  2011-09-02  9:38 Questions regarding Devices/Subdevices/MediaController usage in case of a SoC Alain VOLMAT
@ 2011-09-02 21:30 ` Sakari Ailus
  2011-09-05 10:10   ` Laurent Pinchart
  0 siblings, 1 reply; 6+ messages in thread
From: Sakari Ailus @ 2011-09-02 21:30 UTC (permalink / raw)
  To: Alain VOLMAT; +Cc: linux-media, laurent.pinchart

On Fri, Sep 02, 2011 at 11:38:06AM +0200, Alain VOLMAT wrote:
> Hi,

Hi Alain,

> I'm writing to you for some advice on the design of a V4L2 driver for a
> rather complex device. The questions mainly relate to devices, subdevs and
> the media controller.
> 
> This driver would target SoCs which typically handle inputs (capture
> devices, e.g. LinuxDVB, HDMI capture), several layers of graphical or
> video planes, and outputs such as HDMI/analog. Basically we have 3 levels:
> data from capture devices is pushed onto planes, and planes are mixed on
> outputs. Moreover, it is also possible to input or output data at several
> points of the device.

Do you have a graphical representation of the flow of data inside the device
and memory inputs / outputs?

> The idea is to take advantage of the new media controller in order to
> define internal data paths by linking capture devices to layers and layers
> to outputs. Since the media controller allows linking pads of entities
> together, our understanding is that we need 1 subdevice per hardware
> resource. That is, if we have 2 planes, we will have 2 subdevices handling
> them; same for outputs and capture. Is our understanding correct?

I think this depends a little on the capabilities of those hardware
resources. If they contain several steps of complex image processing, this
might be the case. For example, the following diagram shows the OMAP 3
ISP media controller graph:

<URL:http://www.ideasonboard.org/media/omap3isp.ps>

The graph represents a digital camera with sensor, lens and flash. The ISP
(image signal processor) consists of several subdevs in the graph.
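
As a side note, the entities and pads of such a graph can be enumerated
from user space through the media device node. A minimal sketch (the
/dev/media0 path is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

int main(void)
{
	struct media_entity_desc desc;
	int fd = open("/dev/media0", O_RDWR);

	if (fd < 0)
		return 1;

	/* Walk the entity list: MEDIA_ENT_ID_FLAG_NEXT asks for the
	 * entity with the next id larger than the one passed in. */
	memset(&desc, 0, sizeof(desc));
	desc.id = MEDIA_ENT_ID_FLAG_NEXT;
	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &desc) == 0) {
		printf("entity %u: %s (%u pads)\n", desc.id, desc.name,
		       (unsigned int)desc.pads);
		desc.id |= MEDIA_ENT_ID_FLAG_NEXT;
	}

	close(fd);
	return 0;
}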

> A second point is about the number of device nodes. I think there are 2
> ways of doing this, and I would like to get your opinion on both.
> #1 Single device: I could imagine a single device which exposes several
> inputs and outputs. We could enumerate them with VIDIOC_ENUM* and select
> them using VIDIOC_S_*. After the selection, data exchange could be done by
> specifying the proper buffer type. The merit of such a model is that an
> application would only have to access the single available /dev/video0 for
> everything, without having to know that video0 is for capture, video1 for
> output and so on.
> 
> #2 Multiple devices: In this case, each video device would only provide a
> single input or output (or a small number of similar ones), so several
> video device nodes would be available to the application. Looking at some
> other drivers such as the OMAP4 ISP or Samsung S5P, it seems to be the

I believe you refer to OMAP 3, not 4?

> preferred way to go; is that correct? This way also fits better with the
> V4L2 model of device types (video capture device, video output device),
> since way #1 would ultimately create a single big device which implements
> a mix of all those devices.

In general, V4L2 device nodes should represent a memory input / output of
the device, i.e. a DMA engine. The devices you are referring to above offer
possibilities to write the data to memory at several points in the pipeline.
Based on what you're writing above, it sounds to me like your device should
likely expose several V4L2 device nodes.
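
In practice that means one video node per DMA engine, registered roughly as
below. This is only a sketch with hypothetical soc_* names, using today's
constant names (VFL_TYPE_VIDEO, VFL_DIR_RX/TX):

#include <linux/module.h>
#include <linux/string.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>

static const struct v4l2_file_operations soc_v4l2_fops = {
	.owner = THIS_MODULE,
	/* open/release/mmap/poll and ioctl dispatch go here */
};

static const struct v4l2_ioctl_ops soc_v4l2_ioctl_ops = {
	/* querycap, format and buffer ioctls go here */
};

/* One video node per DMA engine (memory input or output). */
static int soc_register_dma_node(struct v4l2_device *v4l2_dev,
				 struct video_device *vdev,
				 const char *name, bool is_output)
{
	strscpy(vdev->name, name, sizeof(vdev->name));
	vdev->v4l2_dev = v4l2_dev;
	vdev->fops = &soc_v4l2_fops;
	vdev->ioctl_ops = &soc_v4l2_ioctl_ops;
	vdev->release = video_device_release_empty;
	vdev->vfl_dir = is_output ? VFL_DIR_TX : VFL_DIR_RX;

	/* -1: let the core pick the next free /dev/videoN. */
	return video_register_device(vdev, VFL_TYPE_VIDEO, -1);
}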

> As far as the media controller is concerned, since all those resources
> are not shareable, it seems proper to have only a single media entry point

The interface to user space should also be such that it doesn't place
artificial limitations on which data paths can be used: the same driver
should be usable without changes in all situations. Of course one might not
implement each and every feature right away.

> to set up the SoC and its internal data paths (rather than abstracting
> media devices to match their video device counterparts)

I'm afraid I don't know enough to comment on this. It would be quite
helpful if you could provide more detailed information on the hardware.

Kind regards,

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Questions regarding Devices/Subdevices/MediaController usage in case of a SoC
  2011-09-02 21:30 ` Sakari Ailus
@ 2011-09-05 10:10   ` Laurent Pinchart
  2011-09-13  9:07     ` Alain VOLMAT
  0 siblings, 1 reply; 6+ messages in thread
From: Laurent Pinchart @ 2011-09-05 10:10 UTC (permalink / raw)
  To: Sakari Ailus; +Cc: Alain VOLMAT, linux-media

Hi,

On Friday 02 September 2011 23:30:11 Sakari Ailus wrote:
> On Fri, Sep 02, 2011 at 11:38:06AM +0200, Alain VOLMAT wrote:
> > I'm writing to you for some advice on the design of a V4L2 driver for a
> > rather complex device. The questions mainly relate to devices, subdevs
> > and the media controller.
> > 
> > This driver would target SoCs which typically handle inputs (capture
> > devices, e.g. LinuxDVB, HDMI capture), several layers of graphical or
> > video planes, and outputs such as HDMI/analog. Basically we have 3
> > levels: data from capture devices is pushed onto planes, and planes are
> > mixed on outputs. Moreover, it is also possible to input or output data
> > at several points of the device.
> 
> Do you have a graphical representation of the flow of data inside the
> device and memory inputs / outputs?

I second Sakari's request here. It would be much easier to advise you with a
block diagram of your hardware.

> > The idea is to take advantage of the new media controller in order to
> > define internal data paths by linking capture devices to layers and
> > layers to outputs. Since the media controller allows linking pads of
> > entities together, our understanding is that we need 1 subdevice per
> > hardware resource. That is, if we have 2 planes, we will have 2
> > subdevices handling them; same for outputs and capture. Is our
> > understanding correct?
> 
> I think this depends a little on the capabilities of those hardware
> resources. If they contain several steps of complex image processing, this
> might be the case. For example, the following diagram shows the OMAP 3
> ISP media controller graph:
> 
> <URL:http://www.ideasonboard.org/media/omap3isp.ps>
> 
> The graph represents a digital camera with sensor, lens and flash. The ISP
> (image signal processor) consists of several subdevs in the graph.
> 
> > A second point is about the number of device nodes. I think there are 2
> > ways of doing this, and I would like to get your opinion on both.
> > #1 Single device: I could imagine a single device which exposes several
> > inputs and outputs. We could enumerate them with VIDIOC_ENUM* and select
> > them using VIDIOC_S_*. After the selection, data exchange could be done
> > by specifying the proper buffer type. The merit of such a model is that
> > an application would only have to access the single available
> > /dev/video0 for everything, without having to know that video0 is for
> > capture, video1 for output and so on.
> > 
> > #2 Multiple devices: In this case, each video device would only provide
> > a single input or output (or a small number of similar ones), so several
> > video device nodes would be available to the application. Looking at
> > some other drivers such as the OMAP4 ISP or Samsung S5P, it seems to be
> > the
> 
> I believe you refer to OMAP 3, not 4?
> 
> > preferred way to go; is that correct? This way also fits better with
> > the V4L2 model of device types (video capture device, video output
> > device), since way #1 would ultimately create a single big device which
> > implements a mix of all those devices.
> 
> In general, V4L2 device nodes should represent a memory input / output of
> the device, i.e. a DMA engine. The devices you are referring to above
> offer possibilities to write the data to memory at several points in the
> pipeline. Based on what you're writing above, it sounds to me like your
> device should likely expose several V4L2 device nodes.

This is my opinion as well.

> > As far as the media controller is concerned, since all those resources
> > are not shareable, it seems proper to have only a single media entry
> > point
> 
> The interface to user space should also be such that it doesn't place
> artificial limitations on which data paths can be used: the same driver
> should be usable without changes in all situations. Of course one might
> not implement each and every feature right away.
> 
> > to set up the SoC and its internal data paths (rather than abstracting
> > media devices to match their video device counterparts)
> 
> I'm afraid I don't know enough to comment on this. It would be quite
> helpful if you could provide more detailed information on the hardware.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 6+ messages in thread

* RE: Questions regarding Devices/Subdevices/MediaController usage in case of a SoC
  2011-09-05 10:10   ` Laurent Pinchart
@ 2011-09-13  9:07     ` Alain VOLMAT
  2011-09-13 10:50       ` Laurent Pinchart
  2011-09-27  7:54       ` Sakari Ailus
  0 siblings, 2 replies; 6+ messages in thread
From: Alain VOLMAT @ 2011-09-13  9:07 UTC (permalink / raw)
  To: Laurent Pinchart, Sakari Ailus; +Cc: linux-media

Hi Sakari, Hi Laurent,

Thanks for your replies. Sorry for taking so long to get back to you.

I don't have perfect graphs to explain our device, but the following links help a little.
Devices such as this one are targeted:  http://www.st.com/internet/imag_video/product/251021.jsp
Corresponding circuit diagram:          http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_DIAGRAM/CIRCUIT_DIAGRAM/circuit_diagram_17848.pdf

Although the audio part will also have to be addressed at some point, I'm focusing on the video part for now, i.e. the area above the ST-Bus INTERCONNECT.
Basically we have several kinds of inputs (memory, HDMI, analog, frontends) and several kinds of outputs (memory, graphic plane, video plane, dual ...).

Currently those kinds of devices are already supported at some level via LinuxDVB/V4L2 drivers (those drivers are actually already available on the web), but they do not offer enough flexibility.
As you know, such devices can have several data paths which were not easily configurable via LinuxDVB/V4L2, and that's the reason why we are now trying to move to a subdev/media controller based implementation.
I recently discovered the presentation about the OMAP2+ Display Subsystem (DSS) (http://elinux.org/images/8/83/Elc2011_semwal.pdf). It is quite similar to what we have to do, except that in the case of the DSS, as the name says, it is about the display part only.
One difference with the DSS is that in our case we do not only feed the GFX/OVLs directly from userspace (as framebuffer or video devices): they can ALSO be fed with data decoded by the hardware, coming from data pushed via LinuxDVB. To give you an example, we can push streams to be decoded via LinuxDVB; they are decoded and receive all the necessary processing before "going out" as V4L2 capture devices (all this is done within the kernel, and in some cases the data might never even come back to user space before being displayed on the display panel).
So going back to the graph of the DSS, in our case, in front of the GFX/OVLs we'll have another set of subdevices that correspond to our decoders' "capture devices". And even before that (but not available as subdevice/media controller entities), we have the LinuxDVB inputs.

I will post a graph to explain this more easily, but I need to complete a bit more internal paperwork first.

> In general, V4L2 device nodes should represent memory input / output for the
> device, or a DMA engine. The devices you are referring to above offer
> possibilities to write the data to memory in several points in the pipeline.
> Based on what you're writing above, it sounds like to me that your device
> should likely expose several V4L2 device nodes.

Ok, yes, since we can output / input data at various parts of the device, we will have several device nodes.

Concerning the media controller, since we have 1 entity for each resource we can use, we should end up with a whole bunch of entities (subdevices) attached to several video devices and to a single media device, as sketched below.
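
Assuming one media_device per SoC and hypothetical soc_* names, the
registration could go roughly as follows (media_device_init()/
media_device_register() and v4l2_device_register() are the generic calls,
under their current names):

#include <linux/string.h>
#include <media/media-device.h>
#include <media/v4l2-device.h>

/* Hypothetical per-SoC state: one media device and one v4l2_device;
 * all subdevs and video nodes hang off these. */
struct soc_media {
	struct media_device mdev;
	struct v4l2_device v4l2_dev;
};

static int soc_media_init(struct soc_media *soc, struct device *dev)
{
	int ret;

	soc->mdev.dev = dev;
	strscpy(soc->mdev.model, "soc-display", sizeof(soc->mdev.model));
	media_device_init(&soc->mdev);

	/* Every subdev and video node registered through this
	 * v4l2_device becomes an entity of the one media device. */
	soc->v4l2_dev.mdev = &soc->mdev;
	ret = v4l2_device_register(dev, &soc->v4l2_dev);
	if (ret) {
		media_device_cleanup(&soc->mdev);
		return ret;
	}

	/* ... register subdevs (decoders, planes, mixers, outputs) and
	 * the video nodes here, then create the pad links ... */

	return media_device_register(&soc->mdev);
}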

Talking now a bit more about legacy applications (applications that use V4L2 and thus need some "default" data path, but do not know anything about the media controller), what is the intended way to handle them? Should we have a "platform configuration" application that configures the data paths via the media controller in order to make those applications happy? (A sketch of what such an application could do follows below.)
I understood that there was some idea of a plugin for libv4l to configure the media controller. Are there any useful documents about this plugin, or should I just dig into the libv4l source code to get a better understanding of it?
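
For concreteness, such an application would essentially just enable the
default links through the media node, along these lines; the entity and pad
numbers here are assumptions and would normally be looked up via
MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enable the link source(entity, pad) -> sink(entity, pad) on an
 * already opened media device fd. */
static int enable_link(int media_fd, __u32 src_ent, __u16 src_pad,
		       __u32 sink_ent, __u16 sink_pad)
{
	struct media_link_desc link;

	memset(&link, 0, sizeof(link));
	link.source.entity = src_ent;
	link.source.index = src_pad;
	link.sink.entity = sink_ent;
	link.sink.index = sink_pad;
	link.flags = MEDIA_LNK_FL_ENABLED;

	return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}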

Regards,

Alain Volmat

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Questions regarding Devices/Subdevices/MediaController usage in case of a SoC
  2011-09-13  9:07     ` Alain VOLMAT
@ 2011-09-13 10:50       ` Laurent Pinchart
  2011-09-27  7:54       ` Sakari Ailus
  1 sibling, 0 replies; 6+ messages in thread
From: Laurent Pinchart @ 2011-09-13 10:50 UTC (permalink / raw)
  To: Alain VOLMAT; +Cc: Sakari Ailus, linux-media

Hi Alain,

On Tuesday 13 September 2011 11:07:02 Alain VOLMAT wrote:
> Hi Sakari, Hi Laurent,
> 
> Thanks for your replies. Sorry for taking so long to get back to you.
> 
> I don't have perfect graphs to explain our device, but the following links
> help a little. Devices such as this one are targeted:
> http://www.st.com/internet/imag_video/product/251021.jsp Corresponding
> circuit diagram:
> http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_DIAGRAM/CIRCU
> IT_DIAGRAM/circuit_diagram_17848.pdf
> 
> Although the audio part will also have to be addressed at some point, I'm
> focusing on the video part for now, i.e. the area above the ST-Bus
> INTERCONNECT. Basically we have several kinds of inputs (memory, HDMI,
> analog, frontends) and several kinds of outputs (memory, graphic plane,
> video plane, dual ...).
> 
> Currently those kinds of devices are already supported at some level via
> LinuxDVB/V4L2 drivers (those drivers are actually already available on the
> web), but they do not offer enough flexibility. As you know, such devices
> can have several data paths which were not easily configurable via
> LinuxDVB/V4L2, and that's the reason why we are now trying to move to a
> subdev/media controller based implementation. I recently discovered the
> presentation about the OMAP2+ Display Subsystem (DSS)
> (http://elinux.org/images/8/83/Elc2011_semwal.pdf). It is quite similar to
> what we have to do, except that in the case of the DSS, as the name says,
> it is about the display part only. One difference with the DSS is that in
> our case we do not only feed the GFX/OVLs directly from userspace (as
> framebuffer or video devices): they can ALSO be fed with data decoded by
> the hardware, coming from data pushed via LinuxDVB. To give you an
> example, we can push streams to be decoded via LinuxDVB; they are decoded
> and receive all the necessary processing before "going out" as V4L2
> capture devices (all this is done within the kernel, and in some cases the
> data might never even come back to user space before being displayed on
> the display panel). So going back to the graph of the DSS, in our case, in
> front of the GFX/OVLs we'll have another set of subdevices that correspond
> to our decoders' "capture devices". And even before that (but not
> available as subdevice/media controller entities), we have the LinuxDVB
> inputs.
> 
> I will post a graph to explain this more easily, but I need to complete a
> bit more internal paperwork first.

Thank you for the information. The hardware indeed looks quite complex, and I
believe using the media controller would be a good solution.

> > In general, V4L2 device nodes should represent a memory input / output
> > of the device, i.e. a DMA engine. The devices you are referring to above
> > offer possibilities to write the data to memory at several points in the
> > pipeline. Based on what you're writing above, it sounds to me like your
> > device should likely expose several V4L2 device nodes.
> 
> Ok, yes, since we can output / input data at various parts of the device,
> we will have several device nodes.
> 
> Concerning the media controller, since we have 1 entity for each resource
> we can use, we should end up with a whole bunch of entities (subdevices)
> attached to several video devices and to a single media device.

That looks good to me.

> Talking now a bit more about legacy applications (applications that use
> V4L2 and thus need some "default" data path, but do not know anything
> about the media controller), what is the intended way to handle them?
> Should we have a "platform configuration" application that configures the
> data paths via the media controller in order to make those applications
> happy? I understood that there was some idea of a plugin for libv4l to
> configure the media controller.

If you can configure your hardware with a default pipeline at startup, that's
of course good, but as soon as a media controller-aware application modifies
the pipeline, pure V4L2 applications will be stuck.

For that reason, implementing pipeline configuration support in a libv4l
plugin for pure V4L2 applications has my preference. The idea is that
high-level V4L2 applications (such as a popular closed-source
video-conferencing application that people seem to like for a reason I can't
fathom :-)) should work with that plugin, and the high-level features they
expect should be provided.

> Are there any useful documents about this plugin, or should I just dig
> into the libv4l source code to get a better understanding of it?

Sakari, do you know if the libv4l plugin API is documented? Do you have a
link to the OMAP3 ISP libv4l plugin?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: Questions regarding Devices/Subdevices/MediaController usage in case of a SoC
  2011-09-13  9:07     ` Alain VOLMAT
  2011-09-13 10:50       ` Laurent Pinchart
@ 2011-09-27  7:54       ` Sakari Ailus
  1 sibling, 0 replies; 6+ messages in thread
From: Sakari Ailus @ 2011-09-27  7:54 UTC (permalink / raw)
  To: Alain VOLMAT; +Cc: Laurent Pinchart, linux-media

Alain VOLMAT wrote:
> Hi Sakari, Hi Laurent,

Hi Alain,

> Thanks for your replies. Sorry for taking so long to get back to you.

The same on my side. I finally had time to take a look again.

> I don't have perfect graphs to explain our device, but the following
> links help a little. Devices such as this one are targeted:
> http://www.st.com/internet/imag_video/product/251021.jsp 
> Corresponding circuit diagram:
> http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_DIAGRAM/CIRCUIT_DIAGRAM/circuit_diagram_17848.pdf

This indeed looks like a complex device, which makes it a perfect candidate
for support through the media controller interface. :-)

> Although the audio part will also have to be addressed at some point,
> I'm focusing on the video part for now, i.e. the area above the ST-Bus
> INTERCONNECT. Basically we have several kinds of inputs (memory, HDMI,
> analog, frontends) and several kinds of outputs (memory, graphic plane,
> video plane, dual ...).

I agree that's a good approach. One problem at a time.

> Currently those kinds of devices are already supported at some level
> via LinuxDVB/V4L2 drivers (those drivers are actually already
> available on the web), but they do not offer enough flexibility. As
> you know, such devices can have several data paths which were not
> easily configurable via LinuxDVB/V4L2, and that's the reason why we
> are now trying to move to a subdev/media controller based
> implementation. I recently discovered the presentation about the
> OMAP2+ Display Subsystem (DSS)
> (http://elinux.org/images/8/83/Elc2011_semwal.pdf). It is quite
> similar to what we have to do, except that in the case of the DSS, as
> the name says, it is about the display part only. One difference with
> the DSS is that in our case we do not only feed the GFX/OVLs directly
> from userspace (as framebuffer or video devices): they can ALSO be
> fed with data decoded by the hardware, coming from data pushed via
> LinuxDVB. To give you an example, we can push streams to be decoded
> via LinuxDVB; they are decoded and receive all the necessary
> processing before "going out" as V4L2 capture devices (all this is
> done within the kernel, and in some cases the data might never even
> come back to user space before being displayed on the display panel).
> So going back to the graph of the DSS, in our case, in front of the
> GFX/OVLs we'll have another set of subdevices that correspond to our
> decoders' "capture devices". And even before that (but not available
> as subdevice/media controller entities), we have the LinuxDVB inputs.

The media device should include all the hardware blocks which may transfer
image data between one another without going through memory in between. Based
on the graph, it seems that most would belong under a single media device.

It might make sense to implement a driver that just handles the pipeline
configuration and the interaction with the hardware devices. Laurent might
actually have a better idea on this.

All the actual hardware drivers providing the V4L2 interface could then
provide their V4L2 subdevs to the main driver. Beyond that, it needs to be
defined what kind of interfaces are provided to user space by media
entities that are not V4L2 subdevs. This depends on what kind of user
space APIs, what kind of entity-level configuration and what streaming
configuration must be supported.
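
As a rough sketch of that split (everything below is hypothetical), the
main pipeline driver could own the v4l2_device/media_device pair and merely
accept subdevs from the individual hardware drivers:

#include <media/v4l2-device.h>
#include <media/v4l2-subdev.h>

/* Hypothetical hook exported by the main (pipeline) driver: the
 * individual hardware drivers hand over their subdevs, which then
 * join the single media device that main_v4l2_dev->mdev points to. */
int soc_pipeline_add_subdev(struct v4l2_device *main_v4l2_dev,
			    struct v4l2_subdev *sd)
{
	return v4l2_device_register_subdev(main_v4l2_dev, sd);
}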

What else is needed besides LinuxDVB and V4L2?

> I will post a graph to explain this more easily, but I need to
> complete a bit more internal paperwork first.
> 
>> In general, V4L2 device nodes should represent a memory input /
>> output of the device, i.e. a DMA engine. The devices you are
>> referring to above offer possibilities to write the data to memory
>> at several points in the pipeline. Based on what you're writing
>> above, it sounds to me like your device should likely expose
>> several V4L2 device nodes.
> 
> Ok, yes, since we can output / input data at various parts of the
> device, we will have several device nodes.
> 
> Concerning the media controller, since we have 1 entity for each
> resource we can use, we should end up with a whole bunch of
> entities (subdevices) attached to several video devices and to a
> single media device.
> 
> Talking now a bit more about legacy applications (applications that
> use V4L2 and thus need some "default" data path, but do not know
> anything about the media controller), what is the intended way to
> handle them? Should we have a "platform configuration" application
> that configures the data paths via the media controller in order to
> make those applications happy? I understood that there was some idea
> of a plugin for libv4l to configure the media controller. Are there
> any useful documents about this plugin, or should I just dig into
> the libv4l source code to get a better understanding of it?

Such a plugin does not exist yet, but a few pieces already do: libmediactl,
libv4l2subdev and the libv4l plugin patches:

<URL:http://www.mail-archive.com/linux-media@vger.kernel.org/msg31596.html>

Essentially, the configuration parsing should be part of the libraries
rather than of the media-ctl test program. I'm working on patches to fix
that.
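
For a rough idea of the direction, setting up links through libmediactl
currently looks something like the sketch below; the device path, entity
names and pad numbers are assumptions, and the exact API is still moving:

#include <stdio.h>
#include <mediactl/mediactl.h>

int main(void)
{
	/* "/dev/media0", "decoder" and "plane0" are assumptions; the
	 * real names come from the driver. */
	struct media_device *media = media_device_new("/dev/media0");

	if (!media)
		return 1;

	/* Read entities, pads and links from the kernel, then parse a
	 * media-ctl style link description and apply it. */
	if (media_device_enumerate(media) ||
	    media_parse_setup_links(media,
				    "\"decoder\":1 -> \"plane0\":0 [1]")) {
		fprintf(stderr, "failed to set up links\n");
		media_device_unref(media);
		return 1;
	}

	media_device_unref(media);
	return 0;
}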

Regards,

-- 
Sakari Ailus
sakari.ailus@iki.fi

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread

Thread overview: 6+ messages
2011-09-02  9:38 Questions regarding Devices/Subdevices/MediaController usage in case of a SoC Alain VOLMAT
2011-09-02 21:30 ` Sakari Ailus
2011-09-05 10:10   ` Laurent Pinchart
2011-09-13  9:07     ` Alain VOLMAT
2011-09-13 10:50       ` Laurent Pinchart
2011-09-27  7:54       ` Sakari Ailus
