* CDF meeting @FOSDEM report
@ 2013-02-05 22:27 Laurent Pinchart
  2013-02-06 11:11 ` Tomi Valkeinen
  2013-02-12 22:45 ` Stéphane Marchesin
  0 siblings, 2 replies; 15+ messages in thread
From: Laurent Pinchart @ 2013-02-05 22:27 UTC (permalink / raw)
  To: dri-devel, linaro-mm-sig, linux-fbdev, linux-media
  Cc: Sebastien Guiriec, Jesse Barnes, Benjamin Gaignard, Sumit Semwal,
	Thierry Reding, Tom Gall, Kyungmin Park, Tomi Valkeinen,
	Stephen Warren, Mark Zhang, Stéphane Marchesin,
	Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni, gn.de,
	Sunil Joshi, Maxime Ripard, Vikas Sajjan

Hello,

We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary 
of the discussions.

I would like to start with a big thank you to UrLab, the ULB university 
hacker space, for providing us with a meeting room.

The meeting would of course not have been successful without the wide range of 
participants, so I also want to thank all the people who woke up on Sunday 
morning to attend the meeting :-)

(The CC list is pretty long; please let me know - by private e-mail, in order 
not to spam the list - if you would prefer not to receive future CDF-related 
e-mails directly.)

0. Abbreviations
----------------

DBI - Display Bus Interface, a parallel video control and data bus that 
transmits data using parallel data, read/write, chip select and address 
signals, similar to 8051-style microcontroller parallel busses. This is a 
mixed video control and data bus.

DPI - Display Pixel Interface, a parallel video data bus that transmits data 
using parallel data, h/v sync and clock signals. This is a video data bus 
only.

DSI - Display Serial Interface, a serial video control and data bus that 
transmits data using one or more differential serial lines. This is a mixed 
video control and data bus.

DT - Device Tree, a representation of a hardware system as a tree of physical 
devices with associated properties.

SFI - Simple Firmware Interface, a lightweight method for firmware to export 
static tables to the operating system. Those tables can contain display device 
topology information.

VBT - Video BIOS Table, a block of data residing in the video BIOS that can 
contain display device topology information.

1. Goals
--------

The meeting started with a brief discussion about the CDF goals.

Tomi Valkeinen and Tomasz Figa have sent RFC patches to show their views of 
what CDF could/should be. Many others have provided very valuable feedback. 
Given the early development stage, the propositions were sometimes 
contradictory and focused on different areas of interest. We thus started the 
meeting with a discussion about what CDF should try to achieve, and what it 
shouldn't.

CDF has two main purposes. The original goal was to support display panels in 
a platform- and subsystem-independent way. While mostly useful for embedded 
systems, the emergence of platforms such as Intel Medfield and ARM-based PCs 
that blend the embedded and PC worlds makes panel support useful for the PC 
world as well.

The second purpose is to provide a cross-subsystem interface to support video 
encoders. The idea originally came from a generalisation of the original RFC, 
which supported panels only. While encoder support is considered lower 
priority than display panel support by developers focussed on display 
controller drivers (Intel, Renesas, ST Ericsson, TI), companies that produce 
video encoders (Analog Devices, and likely others) don't share that point of 
view and would like to provide a single encoder driver that can be used in 
both KMS and V4L2 drivers.

Both display panels and encoders are thus the target of a lot of attention, 
depending on the audience. As long as neither is forgotten in CDF, the 
overall agreement was that focussing on panels first is acceptable. Care 
shall be taken in that case to avoid any architecture that would make encoder 
support difficult or impossible.

2. Subsystems
-------------

Display panels are used in conjunction with FBDEV and KMS drivers. There was, 
to the audience's knowledge, no V4L2 driver that needs to explicitly handle 
display panels. Even though at least one V4L2 output driver (omap_vout) can 
output video to a display panel, it does so in conjunction with the KMS and/or 
FBDEV APIs that handle panel configuration. Panels are thus not exposed to 
V4L2 drivers.

Encoders, on the other hand, are widely used in the V4L2 subsystem. Many V4L2 
devices output video in either analog (Composite, S-Video, VGA) or digital 
(DVI, HDMI) formats.

Display panel drivers don't need to be shared with the V4L2 subsystem. 
Furthermore, as the general opinion during the meeting was that the FBDEV 
subsystem should be considered legacy and deprecated in the future, 
restricting panel support to KMS wasn't considered an issue by anyone. KMS 
will thus be the main target of display panel support in CDF, and FBDEV will 
be supported if that doesn't bring any drawback from an architecture point of 
view.

Encoder drivers need to be shared with the V4L2 subsystem. As with panel 
drivers, excluding FBDEV support from CDF isn't considered an issue.

3. KMS Extensions
-----------------

The usefulness of V4L2 for output devices was questioned, and the possibility 
of using KMS for complex video devices usually associated with V4L2 was 
raised. The TI DaVinci 8xxx family is an example of chips that could benefit 
from KMS support.

The KMS API is lacking support for deep-pipelining ("framebuffers" that are 
sourced from a data stream instead of a memory buffer) today. Extending the 
KMS API with deep-pipelining support was considered as a sensible goal that 
would mostly require the creation of a new KMS source object. Exposing the 
topology of the whole device would then be handled by the Media Controller 
API.

Given that there is no evidence that this KMS extension will be ready in a 
reasonable time frame, sharing encoder drivers with the V4L2 subsystem hasn't 
been seriously questioned.

4. Discovery and Initialization
-------------------------------

As CDF will split support for complete display devices across different 
drivers, the question of physical device discovery and initialization caused 
concern among the audience.

Topology and connectivity information can come from a wide variety of sources. 
Embedded platforms typically provide that information in platform data 
supplied by board code or through the device tree. PC platforms usually store 
the information in the firmware exposed through ACPI, SFI, VBT or other 
interfaces. Pluggable devices (PCI being the most common case) can also store 
the information on an on-board non-volatile memory or hardcode it in drivers.

When using the device tree, display entity information is bundled with the 
display entity's DT node. The associated driver shall thus extract the 
information from the DT node itself. In all other cases the display entity 
driver shall not parse data from the information source directly, but shall 
instead receive a platform data structure filled with data parsed by the 
display controller driver. In the most complex cases a machine driver, 
similar to ASoC machine drivers, might be needed, in which case the platform 
data could be provided by that machine driver.

Display entity drivers are encouraged to internally fill a platform data 
structure from their DT node to reuse the same code path for both platform 
data- and DT-based initialization.

5. Bus Model
------------

Display panels are connected to a video bus that transmits video data and 
optionally to a control bus. Those two busses can be separate physical 
interfaces or combined into a single physical interface.

The Linux device model represents the system as a tree of devices (not to be 
confused with the device tree, abbreviated as DT). The tree is organized 
around control busses, with every device being a child of its control bus 
master. For instance an I2C device will be a child of its I2C controller 
device, which can itself be a child of its parent PCI device.

Display panels will be represented as Linux devices. They will have a single 
parent from the Linux device model point of view, but will be potentially 
connected to multiple physical busses. CDF thus needs to define what bus to 
select as the Linux parent bus.

In theory any physical bus that the device is attached to can be selected as 
the parent bus. However, selecting a video data bus would depart from the 
traditional Linux device model, which uses control busses only. This caused 
concern among several people who argued that not presenting the device to the 
kernel as attached to its control bus would bring issues in embedded systems. 
Unlike PC systems, where the control bus master is usually the same physical 
device as the data bus master, embedded systems are made of a potentially 
complex assembly of completely unrelated devices. Not representing an I2C-
controlled panel as a child of its I2C master in DT was thus frowned upon, 
even though no clear agreement was reached on the subject.

Panels can be divided in three categories based on their bus model.

- No control bus

Many panels don't offer any control interface. They are usually referred to as 
'dumb panels' as they directly display the data received on their video bus 
without any configurable option. Panels in this category often use DPI as 
their video bus, but other options such as DSI (using the DSI video mode only) 
are possible.

Panels with no control bus can be represented in the device model as platform 
devices, or as being attached to their video bus. In the latter case we would 
need Linux busses for pure video data interfaces such as DPI or VGA. Nobody 
was particularly enthusiastic about this idea. Dumb panels will thus likely 
be represented as platform devices.
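To make the dumb-panel case concrete, here is a minimal user-space sketch (the descriptor, registry, and all names are hypothetical, not an existing kernel or CDF API): such a panel carries nothing but fixed timings and a name, which is why a platform device with no control bus operations is sufficient.

```c
#include <string.h>

/* Hypothetical descriptor for a dumb DPI panel: fixed timings,
 * a name, and no control bus operations at all. */
struct dumb_panel {
	const char *name;
	int hactive, vactive;
	int pixclock_khz;
};

/* Toy registry standing in for platform device registration; in
 * the kernel this would be platform_driver_register() plus the
 * CDF display entity registration. */
#define MAX_PANELS 8
static struct dumb_panel *registry[MAX_PANELS];
static int nr_panels;

static int dumb_panel_register(struct dumb_panel *p)
{
	if (p == NULL || nr_panels >= MAX_PANELS)
		return -1;
	registry[nr_panels++] = p;
	return 0;
}

/* Look a registered panel up by name, as a display controller
 * driver binding to its panel might. */
static struct dumb_panel *dumb_panel_find(const char *name)
{
	for (int i = 0; i < nr_panels; i++)
		if (strcmp(registry[i]->name, name) == 0)
			return registry[i];
	return NULL;
}
```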

- Separate video and control busses

The typical case is a panel connected to an I2C or SPI bus that receives data 
through a DPI video interface or DSI video mode interface.

Using a mixed control and video bus (such as DSI and DBI) for control only 
with a different bus for video data is possible in theory but very unlikely in 
practice (although the creativity of hardware developers should never be 
underestimated).

Display panels that use a control bus supported by the Linux kernel should 
likely be represented as children of their control bus master. Other options 
are possible as mentioned above, but were received without enthusiasm by most 
embedded kernel developers.

When the control bus isn't supported by the kernel, a new bus type can be 
developed, or the panel can be represented as a platform device. The right 
option will likely vary depending on the control bus.

- Combined video and control busses

When the two busses are combined in a single physical bus the panel device 
will obviously be represented as a child of that single physical bus. 

In such cases the control bus could expose video bus control methods. This 
would remove the need for a video source as proposed by Tomi Valkeinen in his 
CDF model. However, if the bus can be used for video data transfer in 
combination with a different control bus, a video source corresponding to the 
data bus will be needed.

No decision has been taken on whether to use a video source in addition to the 
control bus in the combined busses case. Experimentation will be needed, and 
the right solution might depend on the bus type.

- Multiple control busses

One panel was mentioned as being connected to a DSI bus and an I2C bus. The 
DSI bus is used for both control and video, and the I2C bus for control only. 
Configuring the panel requires sending commands through both DSI and I2C. The 
opinion on such panels was a large *sigh* followed by a "this should be 
handled by the device core, let's ask Greg KH".

6. Miscellaneous
----------------

- If the OMAP3 DSS driver is used as a model for the DSI support 
implementation, Daniel Vetter requested the DSI bus lock semaphore to be 
killed as it prevents lockdep from working correctly (reference needed ;-)).

- Do we need to support chaining several encoders? We can come up with 
several theoretical use cases, some of which probably exist in real hardware, 
but the details are still a bit fuzzy.

7. Actions
----------

I will post a third CDF RFC that will target DPI panels only, with support for 
a concept similar to the video source proposed by Tomi Valkeinen. Marcus 
Lorentzon will then try to implement support for DSI panels using a single bus 
without a video source. Rob Clark will test DPI support with the OMAP DSS 
driver on an OMAP3 platform.

-- 
Regards,

Laurent Pinchart


* Re: CDF meeting @FOSDEM report
  2013-02-05 22:27 CDF meeting @FOSDEM report Laurent Pinchart
@ 2013-02-06 11:11 ` Tomi Valkeinen
  2013-02-06 12:11   ` Jani Nikula
  2013-02-06 14:44     ` Alex Deucher
  2013-02-12 22:45 ` Stéphane Marchesin
  1 sibling, 2 replies; 15+ messages in thread
From: Tomi Valkeinen @ 2013-02-06 11:11 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-fbdev, Sebastien Guiriec, dri-devel, Jesse Barnes,
	Benjamin Gaignard, Sumit Semwal, Tom Gall, Kyungmin Park,
	linux-media, Stephen Warren, Thierry Reding, Mark Zhang,
	linaro-mm-sig, Stéphane Marchesin, Alexandre Courbot,
	Ragesh Radhakrishnan, Thomas Petazzoni, Sunil Joshi,
	Maxime Ripard, Vikas Sajjan



Hi,

On 2013-02-06 00:27, Laurent Pinchart wrote:
> Hello,
> 
> We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary 
> of the discussions.

Thanks for the summary. I've been on a longish leave, and just got back,
so I haven't read the recent CDF discussions on the lists yet. I thought
I'd start by replying to this summary first =).

> 0. Abbreviations
> ----------------
> 
> DBI - Display Bus Interface, a parallel video control and data bus that 
> transmits data using parallel data, read/write, chip select and address 
> signals, similarly to 8051-style microcontroller parallel busses. This is a 
> mixed video control and data bus.
> 
> DPI - Display Pixel Interface, a parallel video data bus that transmits data 
> using parallel data, h/v sync and clock signals. This is a video data bus 
> only.
> 
> DSI - Display Serial Interface, a serial video control and data bus that 
> transmits data using one or more differential serial lines. This is a mixed 
> video control and data bus.

In case you'll re-use these abbrevs in later posts, I think it would be
good to mention that DPI is a one-way bus, whereas DBI and DSI are
two-way (perhaps that's implicit with control bus, though).

> 1. Goals
> --------
> 
> The meeting started with a brief discussion about the CDF goals.
> 
> Tomi Valkeinin and Tomasz Figa have sent RFC patches to show their views of 
> what CDF could/should be. Many others have provided very valuable feedback. 
> Given the early development stage propositions were sometimes contradictory, 
> and focused on different areas of interest. We have thus started the meeting 
> with a discussion about what CDF should try to achieve, and what it shouldn't.
> 
> CDF has two main purposes. The original goal was to support display panels in 
> a platform- and subsystem-independent way. While mostly useful for embedded 
> systems, the emergence of platforms such as Intel Medfield and ARM-based PCs 
> that blends the embedded and PC worlds makes panel support useful for the PC 
> world as well. 
> 
> The second purpose is to provide a cross-subsystem interface to support video 
> encoders. The idea originally came from a generalisation of the original RFC 
> that supported panels only. While encoder support is considered as lower 
> priority than display panel support by developers focussed on display 
> controller driver (Intel, Renesas, ST Ericsson, TI), companies that produce 
> video encoders (Analog Devices, and likely others) don't share that point of 
> view and would like to provide a single encoder driver that can be used in 
> both KMS and V4L2 drivers.

What is an encoder? Something that takes a video signal in, and lets the
CPU store the received data to memory? Isn't that a decoder?

Or do you mean something that takes a video signal in, and outputs a
video signal in another format? (transcoder?)

If the latter, I don't see them as lower priority. If we use CDF also
for SoC internal components (which I think would be great), then every
OMAP board has transcoders.

I'm not sure about the vocabulary in this area, but a normal OMAP
scenario could have the following video pipeline:

1. encoder (OMAP's DISPC, reads pixels from memory and outputs parallel RGB)
2. transcoder (OMAP's DSI, gets parallel RGB and outputs DSI)
3. transcoder (external DSI-to-LVDS chip)
4. panel (LVDS panel)

Even in the case where a panel would be connected directly to the OMAP,
there would be the internal transcoder.

> 2. Subsystems
> -------------
> 
> Display panels are used in conjunction with FBDEV and KMS drivers. There was 
> to the audience knowledge no V4L2 driver that needs to explicitly handle 
> display panels. Even though at least one V4L2 output drivers (omap_vout) can 
> output video to a display panel, it does so in conjunction with the KMS and/or 
> FBDEV APIs that handle panel configuration. Panels are thus not exposed to 
> V4L2 drivers.

Hmm, I'm no expert on omap_vout, but it doesn't use KMS nor omapfb. It
uses omapdss directly, and thus accesses the panels.

That said, I'm fine with leaving omap_vout out from the equation.

> 3. KMS Extensions
> -----------------
> 
> The usefulness of V4L2 for output devices was questioned, and the possibility 
> of using KMS for complex video devices usually associated with V4L2 was 
> raised. The TI DaVinci 8xxx family is an example of chips that could benefit 
> from KMS support.
> 
> The KMS API is lacking support for deep-pipelining ("framebuffers" that are 
> sourced from a data stream instead of a memory buffer) today. Extending the 
> KMS API with deep-pipelining support was considered as a sensible goal that 
> would mostly require the creation of a new KMS source object. Exposing the 
> topology of the whole device would then be handled by the Media Controller 
> API.

Isn't there also the problem that KMS doesn't support arbitrarily long
chains of display devices? That actually sounds more like
"deep-pipelining" than what you described, getting the source data from
a data stream.

> 5. Bus Model
> ------------
> 
> Display panels are connected to a video bus that transmits video data and 
> optionally to a control bus. Those two busses can be separate physical 
> interfaces or combined into a single physical interface.
> 
> The Linux device model represents the system as a tree of devices (not to be 
> confused by the device tree, abreviated as DT). The tree is organized around 
> control busses, with every device being a child of its control bus master. For 
> instance an I2C device will be a child of its I2C controller device, which can 
> itself be a child of its parent PCI device.
> 
> Display panels will be represented as Linux devices. They will have a single 
> parent from the Linux device model point of view, but will be potentially 
> connected to multiple physical busses. CDF thus needs to define what bus to 
> select as the Linux parent bus.
> 
> In theory any physical bus that the device is attached to can be selected as 
> the parent bus. However, selecting a video data bus would depart from the 
> traditional Linux device model that uses control busses only. This caused 
> concern among several people who argued that not presenting the device to the 
> kernel as attached to its control bus would bring issues in embedded system. 
> Unlike on PC systems where the control bus master is usually the same physical 
> device as the data bus master, embedded systems are made of a potentially 
> complex assembly of completely unrelated devices. Not representing an I2C-
> controlled panel as a child of its I2C master in DT was thus frown upon, even 
> though no clear agreement was reached on the subject.

I've been thinking that a good rule of thumb would be that the device
must be somewhat usable once the parent bus is ready. So for, say, a
DPI+SPI panel, when the SPI bus is set up the driver can send messages
to the panel, perhaps read an ID or such, even if the actual video
cannot be shown yet (presuming the DPI bus is still missing).

Of course there are the funny cases, as always. Say, a DSI panel,
controlled via i2c, and the panel gets its functional clock from the DSI
bus's clock. In that case both busses need to be up and running before
the panel can do anything.

> - Combined video and control busses
> 
> When the two busses are combined in a single physical bus the panel device 
> will obviously be represented as a child of that single physical bus. 
> 
> In such cases the control bus could expose video bus control methods. This 
> would remove the need for a video source as proposed by Tomi Valkeinen in his 
> CDF model. However, if the bus can be used for video data transfer in 
> combination with a different control bus, a video source corresponding to the 
> data bus will be needed.

I think this is always the case. If a bus can be used for control and
video data, you can always use it only for video data.

> No decision has been taken on whether to use a video source in addition to the 
> control bus in the combined busses case. Experimentation will be needed, and 
> the right solution might depend on the bus type.
> 
> - Multiple control busses
> 
> One panel was mentioned as being connected to a DSI bus and an I2C bus. The 
> DSI bus is used for both control and video, and the I2C bus for control only. 
> configuring the panel requires sending commands through both DSI and I2C. The 

I have luckily not encountered such a device. However, many DSI
devices do have I2C control as an option. From the device's point of
view, both can be used at the same time, but I think it's usually saner
to just pick one and use it.

The driver for the device should support both control busses, though.
Probably 99% of the driver code is common for both cases.

> 6. Miscellaneous
> ----------------
> 
> - If the OMAP3 DSS driver is used as a model for the DSI support 
> implementation, Daniel Vetter requested the DSI bus lock semaphore to be 
> killed as it prevents lockdep from working correctly (reference needed ;-)).

I don't think OMAP DSS should be used as a model. It has too much legacy
crap that should be rewritten. However, it can be used as a reference to
see what kind of features are needed, as it supports both video mode and
command mode DSI, and has been used with many different kinds of DSI
panels and DSI transcoders.

As for the semaphore, sure, it can be removed, although I'm not aware of
this lockdep problem. If there's a problem it should be fixed in any case.

> - Do we need to support chaining several encoders ? We can come up with 
> several theoretical use cases, some of them probably exist in real hardware, 
> but the details are still a bit fuzzy.

If encoder means the same as the "transcoder" term I used earlier, then
yes, I think so.

As I wrote, I'd like to model the OMAP DSS internal components with CDF.
The internal IP blocks are in no way different than external IP blocks,
they just happen to be integrated into OMAP. The same display IPs are
used with multiple different TI SoCs.

Also, the IPs vary between TI SoCs (for ex, omap2 doesn't have DSI,
omap3 has one DSI, omap4 has two DSIs), so we'll anyway need to have
some kind of dynamic system inside omapdss driver. If I can't use CDF
for that, I'll need to implement a custom one, which I believe would
resemble CDF in many ways.

I'm guessing that having multiple external transcoders is quite rare on
production hardware, but is a very useful feature with development
boards. It's not just once or twice that we've used a transcoder or two
between a SoC and a panel, because we haven't had the final panel yet.

Also, sometimes there are small simple chips in the video pipeline, that
do things like level shifting or ESD protection. In some cases these
chips just work automatically, but in some cases one needs to setup
regulators and gpios to get them up and running (for example,
http://www.ti.com/product/tpd12s015). And if that's the case, then I
believe having a CDF "transcoder" driver for the chip is the easiest
solution.

 Tomi




_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: CDF meeting @FOSDEM report
  2013-02-06 11:11 ` Tomi Valkeinen
@ 2013-02-06 12:11   ` Jani Nikula
  2013-02-06 12:54     ` Tomi Valkeinen
  2013-02-06 14:44     ` Alex Deucher
  1 sibling, 1 reply; 15+ messages in thread
From: Jani Nikula @ 2013-02-06 12:11 UTC (permalink / raw)
  To: Tomi Valkeinen, Laurent Pinchart
  Cc: linux-fbdev, Sebastien Guiriec, dri-devel, Jesse Barnes,
	Sumit Semwal, Tom Gall, Kyungmin Park, linux-media,
	Stephen Warren, Mark Zhang, Thierry Reding, linaro-mm-sig,
	Stéphane Marchesin, Alexandre Courbot, Ragesh Radhakrishnan,
	Thomas Petazzoni, Sunil Joshi, Maxime Ripard, Vikas Sajjan

On Wed, 06 Feb 2013, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>> 6. Miscellaneous
>> ----------------
>> 
>> - If the OMAP3 DSS driver is used as a model for the DSI support 
>> implementation, Daniel Vetter requested the DSI bus lock semaphore to be 
>> killed as it prevents lockdep from working correctly (reference needed ;-)).

[...]

> As for the semaphore, sure, it can be removed, although I'm not aware of
> this lockdep problem. If there's a problem it should be fixed in any case.

The problem is that lockdep does not support semaphores out of the
box. I'm not sure how hard it would be to manually lockdep annotate the
bus lock, and whether it would really work. In any case, as I think we
learned in the past, getting locking right in a DSI command mode panel
driver with an asynchronous update callback, DSI bus lock, and a driver
data specific mutex can be a PITA. Lockdep would be extremely useful
there.

AFAICS simply replacing the semaphore with a mutex would work for all
other cases except DSI command mode display update, unless you're
prepared to wait in the call until the next tearing effect interrupt
plus framedone. Which would suck. I think you and I have talked about
this part in the past...


BR,
Jani.


* Re: CDF meeting @FOSDEM report
  2013-02-06 12:11   ` Jani Nikula
@ 2013-02-06 12:54     ` Tomi Valkeinen
  0 siblings, 0 replies; 15+ messages in thread
From: Tomi Valkeinen @ 2013-02-06 12:54 UTC (permalink / raw)
  To: Jani Nikula
  Cc: linux-fbdev, Sebastien Guiriec, dri-devel, Jesse Barnes,
	Laurent Pinchart, Sumit Semwal, Tom Gall, Kyungmin Park,
	linux-media, Stephen Warren, Mark Zhang, Thierry Reding,
	linaro-mm-sig, Stéphane Marchesin, Alexandre Courbot,
	Ragesh Radhakrishnan, Thomas Petazzoni, Sunil Joshi,
	Maxime Ripard, Vikas Sajjan



On 2013-02-06 14:11, Jani Nikula wrote:
> On Wed, 06 Feb 2013, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>>> 6. Miscellaneous
>>> ----------------
>>>
>>> - If the OMAP3 DSS driver is used as a model for the DSI support 
>>> implementation, Daniel Vetter requested the DSI bus lock semaphore to be 
>>> killed as it prevents lockdep from working correctly (reference needed ;-)).
> 
> [...]
> 
>> As for the semaphore, sure, it can be removed, although I'm not aware of
>> this lockdep problem. If there's a problem it should be fixed in any case.
> 
> The problem is that lockdep does not support semaphores out of the
> box. I'm not sure how hard it would be to manually lockdep annotate the
> bus lock, and whether it would really work. In any case, as I think we
> learned in the past, getting locking right in a DSI command mode panel
> driver with an asynchronous update callback, DSI bus lock, and a driver
> data specific mutex can be a PITA. Lockdep would be extremely useful
> there.
> 
> AFAICS simply replacing the semaphore with a mutex would work for all
> other cases except DSI command mode display update, unless you're
> prepared to wait in the call until the next tearing effect interrupt
> plus framedone. Which would suck. I think you and I have talked about
> this part in the past...

A mutex requires locking and unlocking to happen from the same thread.
But I guess that's what you meant: the problem would be with display
update, where the framedone callback is used to release the bus lock.

The semaphore could probably be changed to use wait queues, but isn't
that more or less what a semaphore already does?

And I want to point out to those not familiar with omapdss, that the DSI
bus lock in question does not protect any data in memory, but is an
indication that the DSI bus is currently in use. The bus lock can be
used to wait until the bus is free again.

I guess one option would be to disallow any waiting for the bus lock. If
the panel driver tried to acquire the bus lock and the lock was already
taken, the call would fail. This would move the handling of exclusivity
to the user of the panel (the drm driver, I guess), which already has to
handle the framedone event.
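That non-blocking idea could be sketched roughly as follows (a hypothetical user-space illustration using C11 atomics, not omapdss code; the names are made up). Acquiring the lock fails immediately when the bus is busy instead of sleeping, and the release happens from the framedone path, so lock and unlock need not come from the same thread:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* The "lock" is just a busy flag: it protects no in-memory data,
 * it only records that the DSI bus is currently in use. */
static atomic_bool dsi_bus_busy;

/* Non-blocking acquire: returns true if the bus was free and is
 * now owned by the caller, false if it was already in use. */
static bool dsi_bus_trylock(void)
{
	bool expected = false;

	return atomic_compare_exchange_strong(&dsi_bus_busy,
					      &expected, true);
}

/* Release, e.g. from the framedone callback once the transfer
 * completes; may run in a different thread than the acquire. */
static void dsi_bus_unlock(void)
{
	atomic_store(&dsi_bus_busy, false);
}
```

The caller (the drm driver in this scheme) would have to retry or queue the work itself when dsi_bus_trylock() fails, which is exactly the exclusivity management being moved up the stack.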

The above would require that everything the panel does should be managed
by the drm driver. Currently this is not the case for OMAP, as the panel
driver can get calls via sysfs, or via backlight driver, or via (gpio)
interrupts.

I don't really know what would be the best option here. On one hand
requiring all panel calls to be managed by drm would be nice and simple.
But it is a bit limiting when thinking about complex display chips. Will
that work for all cases? I'm not sure.

 Tomi






* Re: CDF meeting @FOSDEM report
  2013-02-06 11:11 ` Tomi Valkeinen
  2013-02-06 12:11   ` Jani Nikula
@ 2013-02-06 14:44     ` Alex Deucher
  1 sibling, 0 replies; 15+ messages in thread
From: Alex Deucher @ 2013-02-06 14:44 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: Laurent Pinchart, linux-fbdev, Sebastien Guiriec, dri-devel,
	Jesse Barnes, Benjamin Gaignard, Sumit Semwal, Tom Gall,
	Kyungmin Park, linux-media, Stephen Warren, Thierry Reding,
	Mark Zhang, linaro-mm-sig, Stéphane Marchesin,
	Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
	Sunil Joshi, Maxime Ripard, Vikas Sajjan

On Wed, Feb 6, 2013 at 6:11 AM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> Hi,
>
> On 2013-02-06 00:27, Laurent Pinchart wrote:
>> Hello,
>>
>> We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary
>> of the discussions.
>
> Thanks for the summary. I've been on a longish leave, and just got back,
> so I haven't read the recent CDF discussions on lists yet. I thought
> I'll start by replying to this summary first =).
>
>> 0. Abbreviations
>> ----------------
>>
>> DBI - Display Bus Interface, a parallel video control and data bus that
>> transmits data using parallel data, read/write, chip select and address
>> signals, similarly to 8051-style microcontroller parallel busses. This is a
>> mixed video control and data bus.
>>
>> DPI - Display Pixel Interface, a parallel video data bus that transmits data
>> using parallel data, h/v sync and clock signals. This is a video data bus
>> only.
>>
>> DSI - Display Serial Interface, a serial video control and data bus that
>> transmits data using one or more differential serial lines. This is a mixed
>> video control and data bus.
>
> In case you'll re-use these abbrevs in later posts, I think it would be
> good to mention that DPI is a one-way bus, whereas DBI and DSI are
> two-way (perhaps that's implicit with control bus, though).
>
>> 1. Goals
>> --------
>>
>> The meeting started with a brief discussion about the CDF goals.
>>
>> Tomi Valkeinen and Tomasz Figa have sent RFC patches to show their views of
>> what CDF could/should be. Many others have provided very valuable feedback.
>> Given the early development stage, propositions were sometimes contradictory,
>> and focused on different areas of interest. We have thus started the meeting
>> with a discussion about what CDF should try to achieve, and what it shouldn't.
>>
>> CDF has two main purposes. The original goal was to support display panels in
>> a platform- and subsystem-independent way. While mostly useful for embedded
>> systems, the emergence of platforms such as Intel Medfield and ARM-based PCs
>> that blend the embedded and PC worlds makes panel support useful for the PC
>> world as well.
>>
>> The second purpose is to provide a cross-subsystem interface to support video
>> encoders. The idea originally came from a generalisation of the original RFC
>> that supported panels only. While encoder support is considered lower
>> priority than display panel support by developers focussed on display
>> controller drivers (Intel, Renesas, ST Ericsson, TI), companies that produce
>> video encoders (Analog Devices, and likely others) don't share that point of
>> view and would like to provide a single encoder driver that can be used in
>> both KMS and V4L2 drivers.
>
> What is an encoder? Something that takes a video signal in, and lets the
> CPU store the received data to memory? Isn't that a decoder?
>
> Or do you mean something that takes a video signal in, and outputs a
> video signal in another format? (transcoder?)

In KMS parlance, we have two objects, a crtc and an encoder.  A crtc
reads data from memory and produces a data stream with display timing.
The encoder then takes that data stream and timing from the crtc and
converts it to some sort of physical signal (LVDS, TMDS, DP, etc.).  It's
not always a perfect match to the hardware.  For example, a lot of GPUs
have a DVO encoder which feeds a secondary encoder like a sil164
DVO-to-TMDS encoder.

Alex

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: CDF meeting @FOSDEM report
  2013-02-06 14:44     ` Alex Deucher
  (?)
@ 2013-02-06 15:00       ` Tomi Valkeinen
  -1 siblings, 0 replies; 15+ messages in thread
From: Tomi Valkeinen @ 2013-02-06 15:00 UTC (permalink / raw)
  To: Alex Deucher
  Cc: Laurent Pinchart, linux-fbdev, Sebastien Guiriec, dri-devel,
	Jesse Barnes, Benjamin Gaignard, Sumit Semwal, Tom Gall,
	Kyungmin Park, linux-media, Stephen Warren, Thierry Reding,
	Mark Zhang, linaro-mm-sig, Stéphane Marchesin,
	Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
	Sunil Joshi, Maxime Ripard, Vikas Sajjan

On 2013-02-06 16:44, Alex Deucher wrote:
> On Wed, Feb 6, 2013 at 6:11 AM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

>> What is an encoder? Something that takes a video signal in, and lets the
>> CPU store the received data to memory? Isn't that a decoder?
>>
>> Or do you mean something that takes a video signal in, and outputs a
>> video signal in another format? (transcoder?)
> 
> In KMS parlance, we have two objects a crtc and an encoder.  A crtc
> reads data from memory and produces a data stream with display timing.
>  The encoder then takes that datastream and timing from the crtc and
> converts it some sort of physical signal (LVDS, TMDS, DP, etc.).  It's

Isn't the video stream between the CRTC and the encoder just as
physical? It just happens to be inside the GPU.

This is the case for OMAP, at least, where DISPC could be considered
the CRTC, and DSI/HDMI/etc could be considered the encoder. The stream
between DISPC and DSI/HDMI is a plain parallel RGB signal. The video
stream could just as well be outside the OMAP.

> not always a perfect match to the hardware.  For example a lot of GPUs
> have a DVO encoder which feeds a secondary encoder like an sil164 DVO
> to TMDS encoder.

Right. I think mapping the DRM entities to CDF ones is one of the bigger
question marks we have with CDF. While I'm no expert on DRM, I think we
have the following options:

1. Force DRM's model to CDF, meaning one encoder.

2. Extend DRM to support multiple encoders in a chain.

3. Support multiple encoders in a chain in CDF, but somehow map them to
a single encoder on the DRM side.

I really dislike the first option, as it would severely limit where CDF
can be used, or would force you to write some kind of combined drivers,
so that you can have one encoder driver running multiple encoder devices.

 Tomi



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Linaro-mm-sig] CDF meeting @FOSDEM report
  2013-02-06 15:00       ` Tomi Valkeinen
@ 2013-02-06 16:14         ` Daniel Vetter
  -1 siblings, 0 replies; 15+ messages in thread
From: Daniel Vetter @ 2013-02-06 16:14 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: Alex Deucher, Thomas Petazzoni, linux-fbdev, Stephen Warren,
	Thierry Reding, Mark Zhang, dri-devel, Sunil Joshi,
	linaro-mm-sig, Stéphane Marchesin, Kyungmin Park,
	Jesse Barnes, Laurent Pinchart, Sebastien Guiriec,
	Alexandre Courbot, Maxime Ripard, Vikas Sajjan,
	Ragesh Radhakrishnan, linux-media

On Wed, Feb 6, 2013 at 4:00 PM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>> not always a perfect match to the hardware.  For example a lot of GPUs
>> have a DVO encoder which feeds a secondary encoder like an sil164 DVO
>> to TMDS encoder.
>
> Right. I think mapping the DRM entities to CDF ones is one of the bigger
> question marks we have with CDF. While I'm no expert on DRM, I think we
> have the following options:
>
> 1. Force DRM's model to CDF, meaning one encoder.
>
> 2. Extend DRM to support multiple encoders in a chain.
>
> 3. Support multiple encoders in a chain in CDF, but somehow map them to
> a single encoder in DRM side.

4. Ignore drm kms encoders.

They are only exposed to userspace as a means for userspace to
discover very simple constraints, e.g. 1 encoder connected to 2
outputs means you can only use one of the outputs at the same time.
They are completely irrelevant for the actual modeset interface
exposed to drivers, so you could create a fake kms encoder for each
connector you expose through kms.

The crtc helpers use the encoders as a real entity, and if you opt to
use the crtc helpers to implement the modeset sequence in your driver
it makes sense to map them to some real piece of hw. But you can
essentially pick any transcoder in your crtc -> final output chain for
this. Generic userspace needs to be able to cope with a failed modeset
due to arbitrary reasons anyway, so it can't presume that simply because
the currently exposed constraints are fulfilled it'll work.

> I really dislike the first option, as it would severely limit where CDF
> can be used, or would force you to write some kind of combined drivers,
> so that you can have one encoder driver running multiple encoder devices.

Imo CDF and drm encoders don't really have that much to do with each
other; it should just be a driver implementation detail. Of course,
if common patterns emerge we could extract them somehow. E.g. if many
drivers end up exposing the CDF transcoder chain as a drm encoder
using the crtc helpers, we could add some library functions to make
that simpler.

Another conclusion (at least from my pov) from the fosdem discussion
is that we should separate the panel interface from the actual
control/pixel data buses. That should give us more flexibility for
insane hw and also directly exposing properties and knobs to the
userspace interface from e.g. dsi transcoders. So I don't think we'll
end up with _the_ canonical CDF sink interface anyway.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: CDF meeting @FOSDEM report
  2013-02-05 22:27 CDF meeting @FOSDEM report Laurent Pinchart
  2013-02-06 11:11 ` Tomi Valkeinen
@ 2013-02-12 22:45 ` Stéphane Marchesin
  2013-02-13  9:25   ` Marcus Lorentzon
  2013-02-14  9:35   ` Tomi Valkeinen
  1 sibling, 2 replies; 15+ messages in thread
From: Stéphane Marchesin @ 2013-02-12 22:45 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-fbdev, Sebastien Guiriec, dri-devel, Jesse Barnes,
	Benjamin Gaignard, Sumit Semwal, Tom Gall, Kyungmin Park,
	Tomi Valkeinen, linux-media, Stephen Warren, Mark Zhang,
	linaro-mm-sig, Alexandre Courbot, Ragesh Radhakrishnan,
	Thomas Petazzoni, Sunil Joshi, Maxime Ripard, Vikas Sajjan

On Tue, Feb 5, 2013 at 2:27 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hello,
>
> We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary
> of the discussions.
>
> I would like to start with a big thank to UrLab, the ULB university hacker
> space, for providing us with a meeting room.
>
> The meeting would of course not have been successful without the wide range of
> participants, so I also want to thank all the people who woke up on Sunday
> morning to attend the meeting :-)
>
> (The CC list is pretty long, please let me know - by private e-mail in order
> not to spam the list - if you would like not to receive future CDF-related e-
> mails directly)
>
> 0. Abbreviations
> ----------------
>
> DBI - Display Bus Interface, a parallel video control and data bus that
> transmits data using parallel data, read/write, chip select and address
> signals, similarly to 8051-style microcontroller parallel busses. This is a
> mixed video control and data bus.
>
> DPI - Display Pixel Interface, a parallel video data bus that transmits data
> using parallel data, h/v sync and clock signals. This is a video data bus
> only.
>
> DSI - Display Serial Interface, a serial video control and data bus that
> transmits data using one or more differential serial lines. This is a mixed
> video control and data bus.
>
> DT - Device Tree, a representation of a hardware system as a tree of physical
> devices with associated properties.
>
> SFI - Simple Firmware Interface, a lightweight method for firmware to export
> static tables to the operating system. Those tables can contain display device
> topology information.
>
> VBT - Video BIOS Table, a block of data residing in the video BIOS that can
> contain display device topology information.
>
> 1. Goals
> --------
>
> The meeting started with a brief discussion about the CDF goals.
>
> Tomi Valkeinen and Tomasz Figa have sent RFC patches to show their views of
> what CDF could/should be. Many others have provided very valuable feedback.
> Given the early development stage, propositions were sometimes contradictory,
> and focused on different areas of interest. We have thus started the meeting
> with a discussion about what CDF should try to achieve, and what it shouldn't.
>
> CDF has two main purposes. The original goal was to support display panels in
> a platform- and subsystem-independent way. While mostly useful for embedded
> systems, the emergence of platforms such as Intel Medfield and ARM-based PCs
> that blend the embedded and PC worlds makes panel support useful for the PC
> world as well.
>
> The second purpose is to provide a cross-subsystem interface to support video
> encoders. The idea originally came from a generalisation of the original RFC
> that supported panels only. While encoder support is considered lower
> priority than display panel support by developers focussed on display
> controller drivers (Intel, Renesas, ST Ericsson, TI), companies that produce
> video encoders (Analog Devices, and likely others) don't share that point of
> view and would like to provide a single encoder driver that can be used in
> both KMS and V4L2 drivers.
>
> Both display panels and encoders are thus the target of a lot of attention,
> depending on the audience. As long as none of them is forgotten in CDF, the
> overall agreement was that focussing on panels first is acceptable. Care shall
> be taken in that case to avoid any architecture that would make encoders
> support difficult or impossible.
>
> 2. Subsystems
> -------------
>
> Display panels are used in conjunction with FBDEV and KMS drivers. There was,
> to the audience's knowledge, no V4L2 driver that needs to explicitly handle
> display panels. Even though at least one V4L2 output driver (omap_vout) can
> output video to a display panel, it does so in conjunction with the KMS and/or
> FBDEV APIs that handle panel configuration. Panels are thus not exposed to
> V4L2 drivers.
>
> Encoders, on the other hand, are widely used in the V4L2 subsystem. Many V4L2
> devices output video in either analog (Composite, S-Video, VGA) or digital
> (DVI, HDMI) way.
>
> Display panel drivers don't need to be shared with the V4L2 subsystem.
> Furthermore, as the general opinion during the meeting was that the FBDEV
> subsystem should be considered legacy and deprecated in the future,
> restricting panel support to KMS hasn't been considered by anyone as an issue.
> KMS will thus be the main target of display panel support in CDF, and FBDEV
> will be supported if that doesn't bring any drawback from an architecture
> point of view.
>
> Encoder drivers need to be shared with the V4L2 subsystem. Similarly to panel
> drivers, excluding FBDEV support from CDF isn't considered an issue.
>
> 3. KMS Extensions
> -----------------
>
> The usefulness of V4L2 for output devices was questioned, and the possibility
> of using KMS for complex video devices usually associated with V4L2 was
> raised. The TI DaVinci 8xxx family is an example of chips that could benefit
> from KMS support.
>
> The KMS API is lacking support for deep-pipelining ("framebuffers" that are
> sourced from a data stream instead of a memory buffer) today. Extending the
> KMS API with deep-pipelining support was considered as a sensible goal that
> would mostly require the creation of a new KMS source object. Exposing the
> topology of the whole device would then be handled by the Media Controller
> API.
>
> Given that no evidence of this KMS extension being ready in a reasonable time
> frame exists, sharing encoder drivers with the V4L2 subsystem hasn't been
> seriously questioned.
>
> 4. Discovery and Initialization
> -------------------------------
>
> As CDF will split support for complete display devices across different
> drivers, the question of physical devices discovery and initialization caused
> concern among the audience.
>
> Topology and connectivity information can come from a wide variety of sources.
> Embedded platforms typically provide that information in platform data
> supplied by board code or through the device tree. PC platforms usually store
> the information in the firmware exposed through ACPI, SFI, VBT or other
> interfaces. Pluggable devices (PCI being the most common case) can also store
> the information on an on-board non-volatile memory or hardcode it in drivers.
>
> When using the device tree, display entity information is bundled with the
> display entity's DT node. The associated driver shall thus extract the
> information from the DT node itself. In all other cases the display entity
> driver shall not parse data from the information source directly, but shall
> instead receive a platform data structure filled with data parsed by the
> display controller driver. In the most complex cases a machine driver,
> similar to ASoC machine drivers, might be needed, in which case platform
> data could be provided by that machine driver.
>
> Display entity drivers are encouraged to internally fill a platform data
> structure from their DT node to reuse the same code path for both platform
> data- and DT-based initialization.
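
The recommended pattern could look roughly like the following sketch. This is purely illustrative: the structure, property names and driver name are invented for the example, not taken from CDF.

```c
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Hypothetical platform data for an imaginary panel. */
struct foo_panel_pdata {
	u32 width_mm;
	u32 height_mm;
};

static int foo_panel_probe(struct platform_device *pdev)
{
	struct foo_panel_pdata *pdata = dev_get_platdata(&pdev->dev);
	struct foo_panel_pdata dt_pdata = { 0 };

	if (!pdata && pdev->dev.of_node) {
		/* DT case: fill the platform data structure internally. */
		of_property_read_u32(pdev->dev.of_node, "width-mm",
				     &dt_pdata.width_mm);
		of_property_read_u32(pdev->dev.of_node, "height-mm",
				     &dt_pdata.height_mm);
		pdata = &dt_pdata;
	}

	if (!pdata)
		return -EINVAL;

	/* From here on a single code path handles both cases. */
	return 0;
}
```
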
>
> 5. Bus Model
> ------------
>
> Display panels are connected to a video bus that transmits video data and
> optionally to a control bus. Those two busses can be separate physical
> interfaces or combined into a single physical interface.
>
> The Linux device model represents the system as a tree of devices (not to be
> confused with the device tree, abbreviated as DT). The tree is organized around
> control busses, with every device being a child of its control bus master. For
> instance an I2C device will be a child of its I2C controller device, which can
> itself be a child of its parent PCI device.
>
> Display panels will be represented as Linux devices. They will have a single
> parent from the Linux device model point of view, but will be potentially
> connected to multiple physical busses. CDF thus needs to define what bus to
> select as the Linux parent bus.
>
> In theory any physical bus that the device is attached to can be selected as
> the parent bus. However, selecting a video data bus would depart from the
> traditional Linux device model that uses control busses only. This caused
> concern among several people who argued that not presenting the device to the
> kernel as attached to its control bus would bring issues in embedded systems.
> Unlike on PC systems where the control bus master is usually the same physical
> device as the data bus master, embedded systems are made of a potentially
> complex assembly of completely unrelated devices. Not representing an I2C-
> controlled panel as a child of its I2C master in DT was thus frowned upon,
> even though no clear agreement was reached on the subject.
>
> Panels can be divided in three categories based on their bus model.
>
> - No control bus
>
> Many panels don't offer any control interface. They are usually referred to as
> 'dumb panels' as they directly display the data received on their video bus
> without any configurable option. Panels in this category often use DPI as
> their video bus, but other options such as DSI (using the DSI video mode only)
> are possible.
>
> Panels with no control bus can be represented in the device model as platform
> devices, or as being attached to their video bus. In the latter case we would
> need Linux busses for pure video data interfaces such as DPI or VGA. Nobody
> was particularly enthusiastic about this idea. Dumb panels will thus likely
> be represented as platform devices.
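
A dumb panel bound as a platform device would then follow the usual pattern, along these lines (all identifiers here are illustrative only, not an agreed-upon CDF interface):

```c
#include <linux/module.h>
#include <linux/platform_device.h>

static int dumb_panel_probe(struct platform_device *pdev)
{
	/* Register the panel entity with the display framework here. */
	return 0;
}

static int dumb_panel_remove(struct platform_device *pdev)
{
	/* Unregister the panel entity here. */
	return 0;
}

static struct platform_driver dumb_panel_driver = {
	.probe  = dumb_panel_probe,
	.remove = dumb_panel_remove,
	.driver = {
		.name  = "panel-dumb-dpi",
		.owner = THIS_MODULE,
	},
};
module_platform_driver(dumb_panel_driver);
```
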
>
> - Separate video and control busses
>
> The typical case is a panel connected to an I2C or SPI bus that receives data
> through a DPI video interface or DSI video mode interface.
>
> Using a mixed control and video bus (such as DSI and DBI) for control only
> with a different bus for video data is possible in theory but very unlikely in
> practice (although the creativity of hardware developers should never be
> underestimated).
>
> Display panels that use a control bus supported by the Linux kernel should
> likely be represented as children of their control bus master. Other options
> are possible as mentioned above but were received without enthusiasm by most
> embedded kernel developers.
>
> When the control bus isn't supported by the kernel, a new bus type can be
> developed, or the panel can be represented as a platform device. The right
> option will likely vary depending on the control bus.
>
> - Combined video and control busses
>
> When the two busses are combined in a single physical bus the panel device
> will obviously be represented as a child of that single physical bus.
>
> In such cases the control bus could expose video bus control methods. This
> would remove the need for a video source as proposed by Tomi Valkeinen in his
> CDF model. However, if the bus can be used for video data transfer in
> combination with a different control bus, a video source corresponding to the
> data bus will be needed.
>
> No decision has been taken on whether to use a video source in addition to the
> control bus in the combined busses case. Experimentation will be needed, and
> the right solution might depend on the bus type.
>
> - Multiple control busses
>
> One panel was mentioned as being connected to a DSI bus and an I2C bus. The
> DSI bus is used for both control and video, and the I2C bus for control only.
> Configuring the panel requires sending commands through both DSI and I2C. The
> opinion on such panels was a large *sigh* followed by a "this should be
> handled by the device core, let's ask Greg KH".
>
> 6. Miscellaneous
> ----------------
>
> - If the OMAP3 DSS driver is used as a model for the DSI support
> implementation, Daniel Vetter requested the DSI bus lock semaphore to be
> killed as it prevents lockdep from working correctly (reference needed ;-)).
>
> - Do we need to support chaining several encoders? We can come up with
> several theoretical use cases, some of them probably exist in real hardware,
> but the details are still a bit fuzzy.

So, a part which is completely omitted in this thread is how to handle
suspend/resume ordering. If you have multiple encoders which need to
be turned on/off in a given order at suspend/resume, how do you handle
that given the current scheme where they are just separate platform
drivers in drivers/video?

This problem occurs with drm/exynos in current 3.8 kernels for
example. On that platform, the DP driver and the FIMD driver will
suspend/resume in random order, and therefore fail resuming half the
time. Is there something which could be done in CDF to address that?

Stéphane

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: CDF meeting @FOSDEM report
  2013-02-12 22:45 ` Stéphane Marchesin
@ 2013-02-13  9:25   ` Marcus Lorentzon
  2013-02-14  9:35   ` Tomi Valkeinen
  1 sibling, 0 replies; 15+ messages in thread
From: Marcus Lorentzon @ 2013-02-13  9:25 UTC (permalink / raw)
  To: Stéphane Marchesin
  Cc: linux-fbdev, Sebastien Guiriec, dri-devel, Jesse Barnes,
	Laurent Pinchart, Benjamin Gaignard, Sumit Semwal, Tom Gall,
	Kyungmin Park, Tomi Valkeinen, linux-media, Stephen Warren,
	Mark Zhang, linaro-mm-sig, Alexandre Courbot,
	Ragesh Radhakrishnan, Thomas Petazzoni, Sunil Joshi,
	Maxime Ripard

On 02/12/2013 11:45 PM, Stéphane Marchesin wrote:
>> - Do we need to support chaining several encoders? We can come up with
>> several theoretical use cases, some of them probably exist in real hardware,
>> but the details are still a bit fuzzy.
> So, a part which is completely omitted in this thread is how to handle
> suspend/resume ordering. If you have multiple encoders which need to
> be turned on/off in a given order at suspend/resume, how do you handle
> that given the current scheme where they are just separate platform
> drivers in drivers/video?
>
> This problem occurs with drm/exynos in current 3.8 kernels for
> example. On that platform, the DP driver and the FIMD driver will
> suspend/resume in random order, and therefore fail resuming half the
> time. Is there something which could be done in CDF to address that?
My idea here comes in two parts. First, hide the chaining within the CDF
driver, so that the first CDF driver is always responsible for the rest of
the chain. Second, I'm looking at using the dev->parent and bus
relationship to describe this dependency. Then power usually works out
fine, since children can be forced to be suspended before their parent
("bus" host).
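
As a rough illustration of the parent/child idea (device names invented): the display controller driver could instantiate the panel device itself and set dev.parent, so the PM core suspends the panel (child) before the controller (parent) and resumes it afterwards:

```c
#include <linux/platform_device.h>

/* dispc_pdev is the display controller's platform device. */
static int register_panel_child(struct platform_device *dispc_pdev)
{
	struct platform_device *panel_pdev;

	panel_pdev = platform_device_alloc("foo-panel", 0);
	if (!panel_pdev)
		return -ENOMEM;

	/* The parent link encodes the suspend/resume dependency. */
	panel_pdev->dev.parent = &dispc_pdev->dev;

	return platform_device_add(panel_pdev);
}
```
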

/BR
/Marcus

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: CDF meeting @FOSDEM report
  2013-02-12 22:45 ` Stéphane Marchesin
  2013-02-13  9:25   ` Marcus Lorentzon
@ 2013-02-14  9:35   ` Tomi Valkeinen
  1 sibling, 0 replies; 15+ messages in thread
From: Tomi Valkeinen @ 2013-02-14  9:35 UTC (permalink / raw)
  To: Stéphane Marchesin
  Cc: linux-fbdev, Sebastien Guiriec, dri-devel, Jesse Barnes,
	Laurent Pinchart, Benjamin Gaignard, Sumit Semwal, Tom Gall,
	Kyungmin Park, linux-media, Stephen Warren, Mark Zhang,
	linaro-mm-sig, Alexandre Courbot, Ragesh Radhakrishnan,
	Thomas Petazzoni, Sunil Joshi, Maxime Ripard, Vikas Sajjan


[-- Attachment #1.1: Type: text/plain, Size: 1082 bytes --]

On 2013-02-13 00:45, Stéphane Marchesin wrote:

> So, a part which is completely omitted in this thread is how to handle
> suspend/resume ordering. If you have multiple encoders which need to
> be turned on/off in a given order at suspend/resume, how do you handle
> that given the current scheme where they are just separate platform
> drivers in drivers/video?
> 
> This problem occurs with drm/exynos in current 3.8 kernels for
> example. On that platform, the DP driver and the FIMD driver will
> suspend/resume in random order, and therefore fail resuming half the
> time. Is there something which could be done in CDF to address that?

I don't think we have a perfect solution for this, but I think we can
handle this by using PM notifiers, PM_SUSPEND_PREPARE and PM_POST_SUSPEND.

The code that manages the whole chain should register to those
notifiers, and disable or enable the display devices accordingly. This
way the devices are enabled and disabled in the right order, and also
(hopefully) so that the control busses are operational.
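
A minimal sketch of that notifier registration (chain_disable_all() and chain_enable_all() are hypothetical helpers standing in for whatever code walks the display chain in the right order):

```c
#include <linux/notifier.h>
#include <linux/suspend.h>

static int chain_pm_notify(struct notifier_block *nb,
			   unsigned long event, void *unused)
{
	switch (event) {
	case PM_SUSPEND_PREPARE:
		/* Disable entities along the chain while all control
		 * busses are still operational. */
		chain_disable_all();
		break;
	case PM_POST_SUSPEND:
		/* Re-enable in the correct order after resume. */
		chain_enable_all();
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block chain_pm_nb = {
	.notifier_call = chain_pm_notify,
};

/* At init time: */
/* register_pm_notifier(&chain_pm_nb); */
```
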

 Tomi



[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 899 bytes --]

[-- Attachment #2: Type: text/plain, Size: 159 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2013-02-14  9:36 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-02-05 22:27 CDF meeting @FOSDEM report Laurent Pinchart
2013-02-06 11:11 ` Tomi Valkeinen
2013-02-06 12:11   ` Jani Nikula
2013-02-06 12:54     ` Tomi Valkeinen
2013-02-06 14:44   ` Alex Deucher
2013-02-06 14:44     ` Alex Deucher
2013-02-06 14:44     ` Alex Deucher
2013-02-06 15:00     ` Tomi Valkeinen
2013-02-06 15:00       ` Tomi Valkeinen
2013-02-06 15:00       ` Tomi Valkeinen
2013-02-06 16:14       ` [Linaro-mm-sig] " Daniel Vetter
2013-02-06 16:14         ` Daniel Vetter
2013-02-12 22:45 ` Stéphane Marchesin
2013-02-13  9:25   ` Marcus Lorentzon
2013-02-14  9:35   ` Tomi Valkeinen
