From: Tomi Valkeinen
Subject: Re: [PATCH/RFC v3 00/19] Common Display Framework
Date: Mon, 2 Sep 2013 14:06:04 +0300
Message-ID: <5224711C.6050601@ti.com>
In-Reply-To: <1376068510-30363-1-git-send-email-laurent.pinchart+renesas@ideasonboard.com>
To: Laurent Pinchart
Cc: linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Jesse Barnes, Benjamin Gaignard, Tom Gall, Kyungmin Park,
 linux-media@vger.kernel.org, Stephen Warren, Mark Zhang,
 Alexandre Courbot, Ragesh Radhakrishnan, Thomas Petazzoni,
 Sunil Joshi, Maxime Ripard, Vikas Sajjan, Marcus Lorentzon
List-Id: dri-devel@lists.freedesktop.org

On 09/08/13 20:14, Laurent Pinchart wrote:
> Hi everybody,
>
> Here's the third RFC of the Common Display Framework.
>
> I won't repeat all the background information from versions one and two
> here, you can read it at http://lwn.net/Articles/512363/ and
> http://lwn.net/Articles/526965/.
>
> This RFC isn't final.
> Given the high interest in CDF and the urgent tasks that kept delaying
> the next version of the patch set, I've decided to release v3 before
> completing all parts of the implementation. Known missing items are
>
> - documentation: kerneldoc and this cover letter should provide basic
>   information, more extensive documentation will likely make it to v4.
>
> - pipeline configuration and control: generic code to configure and
>   control display pipelines (in a nutshell, translating high-level mode
>   setting and DPMS calls to low-level entity operations) is missing.
>   Video and stream control operations have been carried over from v2,
>   but will need to be revised for v4.
>
> - DSI support: I still have no DSI hardware I can easily test the code on.
>
> Special thanks go to
>
> - Renesas for inviting me to LinuxCon Japan 2013, where I had the
>   opportunity to validate the CDF v3 concepts with Alexandre Courbot
>   (NVidia) and Tomasz Figa (Samsung).
>
> - Tomi Valkeinen (TI) for taking the time to deeply brainstorm v3 with me.
>
> - Linaro for inviting me to Linaro Connect Europe 2013, the discussions
>   we had there greatly helped moving CDF forward.
>
> - And of course all the developers who showed interest in CDF and spent
>   time sharing ideas, reviewing patches and testing code.
>
> I have to confess I was a bit lost and discouraged after all the
> CDF-related meetings during which we have discussed how to move from v2
> to v3. With every meeting I was hoping to run the implementation through
> the use cases of various interested parties and narrow down the scope of
> the huge fuzzy beast that CDF was. With every meeting the scope actually
> broadened, with no clear path in sight anywhere.
>
> Earlier this year I was about to drop one of the requirements on which I
> had based CDF v2: sharing drivers between DRM/KMS and V4L2.
> With only two HDMI transmitters as use cases for that feature (with only
> out-of-tree drivers so far), I just thought the complexity involved
> wasn't worth it and that I should implement CDF v3 as a DRM/KMS-only
> helper framework. However, a seemingly unrelated discussion with Xilinx
> developers showed me that hybrid SoC-FPGA platforms such as the Xilinx
> Zynq 7000 have a large library of IP cores that can be used in camera
> capture pipelines and in display pipelines. The two use cases suddenly
> became tens or even hundreds of use cases that I couldn't ignore anymore.

Should this be Common Video Framework then? ;)

> CDF v3 is thus userspace API agnostic. It isn't tied to DRM/KMS or V4L2
> and can be used by any kernel subsystem, potentially including FBDEV
> (although I won't personally write FBDEV support code, as I've already
> advocated for FBDEV to be deprecated).
>
> The code you are about to read is based on the concept of display
> entities introduced in v2. Diagrams related to the explanations below
> are available at http://ideasonboard.org/media/cdf/20130709-lce-cdf.pdf.
>
>
> Display Entities
> ----------------
>
> A display entity abstracts any hardware block that sources, processes or
> sinks display-related video streams. It offers an abstract API,
> implemented by display entity drivers, that is used by master drivers
> (such as the main display driver) to query, configure and control
> display pipelines.
>
> Display entities are connected to at least one video data bus, and
> optionally to a control bus. The video data busses carry display-related
> video data out of sources (such as a CRTC in a display controller) to
> sinks (such as a panel or a monitor), optionally going through
> transmitters, encoders, decoders, bridges or other similar devices.
> A CRTC or a panel will usually be connected to a single data bus, while
> an encoder or a transmitter will be connected to two data busses.
>
> The simple linear display pipelines we find in most embedded platforms
> at the moment are expected to grow more complex with time. CDF needs to
> accommodate those needs from the start to be, if not future-proof, at
> least present-proof at the time it gets merged into mainline. For this
> reason display entities have data ports through which video streams flow
> in or out, with link objects representing the connections between those
> ports. A typical entity in a linear display pipeline will have one port
> (for video source and video sink entities such as CRTCs or panels) or
> two ports (for video processing entities such as encoders), but more
> ports are allowed, and entities can be linked in complex non-linear
> pipelines.
>
> Readers might think that this model is extremely similar to the media
> controller graph model. They would be right, and given my background
> this is most probably not a coincidence. The CDF v3 implementation uses
> the in-kernel media controller framework to model the graph of display
> entities, with the display entity data structure inheriting from the
> media entity structure. The display pipeline graph topology will be
> automatically exposed to userspace through the media controller API as
> an added bonus. However, usage of the media controller userspace API in
> applications is *not* mandatory, and the current CDF implementation
> doesn't use the media controller link setup userspace API to configure
> the display pipelines.

I have yet to look at the code. I'm just wondering, do you see any
downsides to using the media controller here, instead of a CDF-specific
entity?

> While some display entities don't require any configuration (DPI panels
> are a good example), many of them are connected to a control bus
> accessible to the CPU.
> Control requests can be sent on a dedicated control bus (such as I2C or
> SPI) or multiplexed on a mixed control and data bus (such as DBI or
> DSI). To support both options the CDF display entity model separates the
> control and data busses in different APIs.
>
> Display entities are abstract objects that must be implemented by a real
> device. The device sits on its control bus and is registered with the
> Linux device core and matched with its driver using the control bus
> specific API. The CDF doesn't create a display entity class or bus;
> display entity drivers are thus standard Linux kernel drivers using
> existing busses. A DBI bus is added

I have no idea what the above means =). I guess the point is that CDF
doesn't create the display entities or devices or busses. It's the
standard Linux drivers that will create the CDF display entities?

> as part of this patch set, but strictly speaking this isn't part of CDF.
>
> When a display entity driver probes a device it must create an instance
> of the display_entity structure, initialize it and add it to the CDF
> core entities pool. The display entity exposes abstract operations
> through function pointers, and the entity driver must implement those
> operations. Those operations can act on either the whole entity or on a
> given port, depending on the operation. They are divided in two groups,
> control operations and video operations.
>
>
> Control Operations
> ------------------
>
> Control operations are called by upper-level drivers, usually in
> response to a request originating from userspace. They query or control
> the display entity state and operation.
> Currently defined control operations are
>
> - get_size(), to retrieve the entity physical size (applicable to panels
>   only)
> - get_modes(), to retrieve the video modes supported at an entity port
> - get_params(), to retrieve the data bus parameters at an entity port
>
> - set_state(), to control the state of the entity (off, standby or on)
> - update(), to trigger a display update (for entities that implement
>   manual update, such as manual-update panels that store frames in their
>   internal frame buffer)
>
> The last two operations have been carried over from v2 and will be
> reworked.
>
>
> Pipeline Control
> ----------------
>
> The figure on page 4 shows the control model for a linear pipeline. This
> differs significantly from CDF v2, where calls were forwarded from
> entity to entity using a Russian dolls model. v3 removes the need for
> neighbour awareness from entity drivers, simplifying the entity drivers.
> The complexity of pipeline configuration is moved to a central location
> called a pipeline controller instead of being spread out to all drivers.
>
> Pipeline controllers provide library functions that display drivers can
> use to control a pipeline. Several controllers can be implemented to
> accommodate the needs of various pipeline topologies and complexities,
> and display drivers can even implement their own pipeline control
> algorithm if needed. I'm working on a linear pipeline controller for the
> next version of the patch set.
>
> If pipeline controllers are responsible for propagating a pipeline
> configuration on all entity ports in the pipeline, entity drivers are
> responsible for propagating the configuration inside entities, from sink
> (input) to source (output) ports as illustrated on page 5. The rationale
> behind this is that knowledge of the entity internals is located in the
> entity driver, while knowledge of the pipeline belongs to the pipeline
> controller.
> The controller will thus configure the pipeline by performing the
> following steps:
>
> - apply a configuration on the sink ports of an entity
> - read the configuration that has been propagated by the entity driver
>   on its source ports
> - optionally, modify the source port configuration (to configure custom
>   timings, scaling or other parameters, if supported by the entity)
> - propagate the source port configuration to the sink ports of the next
>   entities in the pipeline and start over

First, I find "sink" and "source" somewhat confusing here. Maybe it's
just me... If I understand correctly, "sink port of an entity" means a
port on an entity which is receiving data, so, say, the port on a panel.
And "source port of an entity" is a port from which an entity outputs
data. Wouldn't "input port" and "output port" be clearer?

Another thing. We have discussed this a few times, and also discussed it
the last time you were in Helsinki. But it's still a bit unclear to me
whether the configuration should go "downstream", as you describe, or
"upstream", as the omapdss does.

If we look at a single entity in the pipeline, I think we can describe
the two different approaches like this:

Downstream model: "Hey entity, here's the video format you will be
getting. What kind of output video format do you give for that input?"

Upstream model: "Hey entity, we need this video output from you. What
kind of input do you need to produce that output?"

I think both models have complexities/issues, but if I forget the
issues, I think the upstream model is more powerful, and maybe even more
"correct": at the end of the pipeline, we have a monitor or a panel. We
want the monitor/panel to show a picture with some particular video
mode. So the job of the pipeline controller is to find out settings for
each display entity to produce that video mode at the end.
With the downstream model you'll start from the SoC side with some video
mode, and hope that the end result will be what's needed by the
monitor/panel.

As an example, DSI video mode on OMAP (although I think the same applies
to any other SoC with DSI). The pipeline we have is like this, with ->
showing the video data flow:

DISPC -> DSI -> Panel

The DISPC-DSI link is raw parallel RGB. With the DSI transfer we can
have either burst or non-burst mode. In non-burst mode, the DSI transfer
looks pretty much like normal parallel RGB, except that it's in serial
format. With burst mode, however, the DSI clock and horizontal timings
can be changed. The idea with burst mode is to increase the time spent
in the horizontal blank period, thus reducing the time the DISPC and DSI
blocks need to be active. So we could, say, double the DSI clock, and
increase the horizontal blank accordingly.

What this means in the context of pipeline configuration is that the
DISPC needs to be configured properly to produce pixels for the DSI. If
the DSI clock and horizontal blank are increased, the DISPC pixel clock
and blank need to be increased also.

Even in non-burst mode the video mode programmed into DISPC is not
always quite the same as the resulting video mode received by the panel,
because the DISPC uses a pixel clock and its transfer unit is a pixel,
while DSI uses the DSI bus clock and its transfer unit is a byte, and
these are not always in a 1-to-1 match. So if one wants exactly certain
DSI video mode timings, this discrepancy needs to be taken into
consideration when programming the DISPC timings.

Then again, as I said, the upstream model is not without its issues
either. Say, we want a particular output from DSI, with burst mode. The
DSI should somehow know how high a pixel clock and what horizontal
timings the DISPC can produce before it can calculate the video mode. I
guess the only way to avoid the issues is to add all this logic into the
pipeline controller?
And if so, then it doesn't really matter whether the configuration is
done with the downstream or the upstream model.

I fear a bit that adding this kind of logic into the controller means we
will add display entity specific things into the controllers. So if I
create a generic OMAP pipeline controller, which works for all the
current boards, and then somebody creates a new OMAP board with some
funny encoder, he'll have to create a new controller, almost like the
generic OMAP one, but with support for the funny encoder.

> Beside modifying the active configuration, the entities API will allow
> trying configurations without applying them to the hardware. As the
> configuration of a port may depend on the configurations of the other
> ports, trying a configuration must be done at the entity level instead
> of the port level. The implementation will be based on the concept of
> configuration store objects that will store the configuration of all
> ports for a given entity. Each entity will have a single active
> configuration store, and test configuration stores will be created
> dynamically to try a configuration on an entity. The get and set
> operations implemented by the entity will receive a configuration store
> pointer, and the active and test code paths in entity drivers will be
> identical, except for applying the configuration to the hardware in the
> active code path.
>
>
> Video Operations
> ----------------
>
> Video operations control the video stream state on entity ports. The
> only currently defined video operation is
>
> - set_stream(), to start (in continuous or single-shot mode) the video
>   stream on an entity port
>
> The call model for video operations differs from the control operations
> model described above. The set_stream() operation is called directly by
> downstream entities on upstream entities (from a video data bus point of
> view).
> Terminating entities in a pipeline (such as panels) will usually call
> the set_stream() operation in their set_state() handler, and
> intermediate entities will forward the set_stream() call upstream.
>
>
> Integration
> -----------
>
> The figure on page 8 describes how a panel driver, implemented using CDF
> as a display entity, interacts with the other components in the system.
> The use case is a simple pipeline made of a display controller and a
> panel.
>
> The display controller driver receives control requests from userspace
> through DRM (or FBDEV) API calls. It processes the request and calls the
> panel driver through the CDF control operations API. The panel driver
> will then issue requests on its control bus (several possible control
> busses are shown on the figure, panel drivers typically use only one of
> them) and call the video operations of the display controller on its
> left side to control the video stream.
>
>
> Registration and Notification
> -----------------------------
>
> Due to possibly complex dependencies between entities we can't guarantee
> that all entities that are part of the display pipeline will have been
> successfully probed when the master display controller driver is probed.
> For instance a panel can be a child of the DBI or DSI bus controlled by
> the display device, or use a clock provided by that device. We can't
> defer the display device probe until the panel is probed and also defer
> the panel device probe until the display device is probed. For this
> reason we need a notification system that allows entities to register
> themselves with the CDF core, and display controller drivers to get
> notified when the entities they need are available.

I don't understand this one. Do you have an example setup that shows the
problem? I think we can just use EPROBE_DEFER here.

A display entity requires some resources, like GPIOs and regulators, and
with CDF, video sources.
If those resources are not yet available, the driver can just return
EPROBE_DEFER.

Or is there some kind of two-way dependency in your model when using
DBI? We don't have such in omapdss, so I may not quite understand the
issue or the need for the two-way dependency.

The only case where I can see a dependency problem is when two display
entities each produce a resource used by the other. So, say, we have an
encoder and a panel, and the panel produces a clock used by the encoder,
and the encoder produces a video signal used by the panel. I haven't
seen such setups in real hardware, though.

> The notification system has been completely redesigned in v3. This
> version is based on the V4L2 asynchronous probing notification code,
> with large parts of the code shamelessly copied. This is an interim
> solution to let me play with the notification code as needed by CDF. I'm
> not a fan of code duplication, and will work on merging the CDF and V4L2
> implementations at a later stage when CDF reaches a mature enough state.
>
> CDF manages a pool of entities and a list of notifiers. Notifiers are
> registered by master display drivers with an array of entity match
> descriptors. When an entity is added to the CDF entities pool, all
> notifiers are searched for a match. If a match is found, the
> corresponding notifier is called to notify the master display driver.
>
> The two currently supported match methods are platform match, which uses
> device names, and DT match, which uses DT node pointers. More match
> methods might be added later if needed. Two helper functions exist to
> build a notifier from a list of platform device names (in the non-DT
> case) or a DT representation of the display pipeline topology.
>
> Once all required entities have been successfully found, the master
> display driver is responsible for creating media controller links
> between all entities in the pipeline.
> Two helper functions are also available to automate that process, one
> for the non-DT case and one for the DT case. Once again some DT-related
> code has been copied from the V4L2 DT code; I will work on merging the
> two in a future version.
>
> Note that notification brings a different issue after registration, as
> the display controller and display entity drivers would take a reference
> to each other. Those circular references would make driver unloading
> impossible. One possible solution to this problem would be to simulate
> an unplug event for the display entity, to force the display driver to
> release the display entities it uses. We would need a userspace API for
> that though. Better solutions would of course be welcome.
>
>
> Device Tree Bindings
> --------------------
>
> CDF entities' device tree bindings are not documented yet. They describe
> both the graph topology and entity-specific information. The graph
> description uses the V4L2 DT bindings (which are actually not
> V4L2-specific) specified at
> Documentation/devicetree/bindings/media/video-interfaces.txt.
> Entity-specific information will be described in individual DT bindings
> documentation. The DPI panel driver uses the display timing bindings
> documented in
> Documentation/devicetree/bindings/video/display-timing.txt.
>
>
> Please note that most of the display entities on the devices I own are
> just dumb panels with no control bus, and are thus not the best
> candidates to design a framework that needs to take complex panels'
> needs into account. This is why I hope to see you using CDF with your
> display device and telling me what needs to be
> modified/improved/redesigned.
>
> This patch set is split as follows:
>
> - The first patch fixes a Kconfig namespace issue with the OMAP DSS
>   panels. It could be applied already, independently of this series.
> - Patches 02/19 to 07/19 add the CDF core, including the notification
>   system and the graph and OF helpers.
> - Patch 08/19 adds a MIPI DBI bus. This isn't part of CDF strictly
>   speaking, but is needed for the DBI panel drivers.
> - Patches 09/19 to 13/19 add panel drivers, a VGA DAC driver and a VGA
>   connector driver.
> - Patches 14/19 to 18/19 add CDF-compliant reference board code and DT
>   for the Renesas Marzen and Lager boards.
> - Patch 19/19 ports the Renesas R-Car Display Unit driver to CDF.
>
> The patches are available in my git tree at
>
>     git://linuxtv.org/pinchartl/fbdev.git cdf/v3
>     http://git.linuxtv.org/pinchartl/fbdev.git/shortlog/refs/heads/cdf/v3
>
> For convenience I've included the modifications to the Renesas R-Car
> Display Unit driver to use CDF. You can read the code to see how the
> driver uses CDF to interface with panels. Please note that the
> rcar-du-drm implementation is still work in progress; its set_stream
> operation implementation doesn't enable and disable the video stream yet
> as it should.
>
> As already mentioned in v2, I will appreciate all reviews, comments,
> criticisms, ideas, remarks, ... If you can find a clever way to solve
> the cyclic references issue described above I'll buy you a beer at the
> next conference we both attend. If you think the proposed solution is
> too complex, or too simple, I'm all ears, but I'll have more arguments
> this time than I had with v2 :-)

I'll tweak this to work with omapdss, like I did for v2. Although I'll
probably remove at least the DBI bus and the notification system as the
first thing I do, unless you can convince me otherwise =).
 Tomi