On Wed, 2 Jun 2021 19:42:19 -0400
Harry Wentland wrote:

> On 2021-06-02 4:22 p.m., Shankar, Uma wrote:
> >
> >
> >> -----Original Message-----
> >> From: Pekka Paalanen
> >> Sent: Wednesday, June 2, 2021 2:59 PM
> >> To: Shankar, Uma
> >> Cc: intel-gfx@lists.freedesktop.org; dri-devel@lists.freedesktop.org;
> >> Modem, Bhanuprakash; Harry Wentland
> >> Subject: Re: [PATCH 00/21] Add Support for Plane Color Lut and CSC features
> >>
> >> On Tue, 1 Jun 2021 16:21:57 +0530
> >> Uma Shankar wrote:
> >>
> >>> This is how a typical display color hardware pipeline looks like:
...
> >>> This patch series adds properties for plane color features. It adds
> >>> properties for degamma, used to linearize data, and CSC, used for gamut
> >>> conversion. It also includes gamma support, used to non-linearize the
> >>> data again as per the panel's supported color space. These can be
> >>> utilized by user space to convert planes from one format to another,
> >>> one color space to another, etc.
> >>
> >> This is very much welcome!
> >>
> >> There is also the thread:
> >> https://lists.freedesktop.org/archives/dri-devel/2021-May/306726.html
> >>
> >> Everything mentioned will interact with each other by changing what the
> >> abstract KMS pixel pipeline does. I think you and Harry should probably
> >> look at each other's suggestions and see how to fit them all into a
> >> single abstract KMS pipeline.
> >>
> >> People are adding new pieces into KMS left and right, and I fear we lose
> >> sight of how everything will actually work together when all KMS
> >> properties are supposed to be generic and potentially present
> >> simultaneously. This is why I would very much like to have that *whole*
> >> abstract KMS pipeline documented with *everything*. Otherwise it is
> >> becoming really hard, really fast, to figure out how generic userspace
> >> should use all these KMS properties together.
> >>
> >> Or if there cannot be a single abstract KMS pipeline, then sure, have
> >> multiple, as long as they are documented, along with how userspace will
> >> know which pipeline it is dealing with and what things are mutually
> >> exclusive, so we can avoid writing userspace code for combinations that
> >> will never exist.
> >
> > This is a good suggestion: have the whole pipeline and properties
> > documented along with the exact usages. We may end up with two properties
> > doing almost the same work but both needed due to the underlying hardware,
> > but we can get that properly documented and defined.
> >
> > I will discuss with Harry and Ville as well to define this.
> >
>
> Just wanted to let you know that I've seen and read through both of
> Shankar's patchsets and had some thoughts but haven't found the time to
> respond. I will respond soon.

Hi Harry,

awesome!

> I very much agree with Pekka. We need to make sure this all plays well
> together and is well documented. Maybe a library to deal with DRM KMS
> color management/HDR would even be helpful. Not sure yet how I feel
> about that.

That is an excellent question. While I am working on Weston CM&HDR, I
already have issues with how to represent the color related
transformations. These new hardware features exposed here are nothing I
have prepared for, and would probably need changes to accommodate.
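Just to make this concrete for myself, below is a rough sketch of the
kind of thing a compositor's DRM-backend would have to emit per plane to
use properties like these. The property name "PLANE_DEGAMMA_LUT" is only
my guess at what this series exposes (substitute whatever the final uAPI
calls it), and the lookup helper is spelled out only to keep the sketch
self-contained:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Resolve a plane property name to its id. Returns 0 if missing. */
static uint32_t
plane_prop_id(int fd, uint32_t plane_id, const char *name)
{
        drmModeObjectProperties *props;
        uint32_t id = 0;
        uint32_t i;

        props = drmModeObjectGetProperties(fd, plane_id,
                                           DRM_MODE_OBJECT_PLANE);
        if (!props)
                return 0;

        for (i = 0; i < props->count_props; i++) {
                drmModePropertyRes *prop =
                        drmModeGetProperty(fd, props->props[i]);

                if (prop && strcmp(prop->name, name) == 0)
                        id = prop->prop_id;
                drmModeFreeProperty(prop);
        }

        drmModeFreeObjectProperties(props);
        return id;
}

/* Upload a per-plane degamma LUT as a blob and attach it in an atomic
 * TEST_ONLY commit; a real compositor would fold this into its normal
 * atomic state instead of committing separately.
 */
static int
test_plane_degamma(int fd, uint32_t plane_id,
                   const struct drm_color_lut *lut, size_t lut_len)
{
        drmModeAtomicReq *req;
        uint32_t prop_id, blob_id = 0;
        int ret;

        prop_id = plane_prop_id(fd, plane_id, "PLANE_DEGAMMA_LUT");
        if (prop_id == 0)
                return -ENOENT;

        ret = drmModeCreatePropertyBlob(fd, lut, lut_len * sizeof(*lut),
                                        &blob_id);
        if (ret)
                return ret;

        req = drmModeAtomicAlloc();
        if (!req) {
                drmModeDestroyPropertyBlob(fd, blob_id);
                return -ENOMEM;
        }

        drmModeAtomicAddProperty(req, plane_id, prop_id, blob_id);
        ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);

        drmModeAtomicFree(req);
        drmModeDestroyPropertyBlob(fd, blob_id);
        return ret;
}

The mechanics are not the problem; the open question for Weston is where
the LUT contents come from, which is what the rest of this mail is about.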
The main Weston roadmap is drafted in
https://gitlab.freedesktop.org/wayland/weston/-/issues/467

The MR that introduces the concept of a color transformation, and also
the whole beginnings of color management, is
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/582

In that MR, there is a patch introducing struct weston_color_transform:
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/582/diffs?commit_id=cffbf7c6b2faf7391b73ff9202774f660343bd34#ba0b86259533d5000d81c9c88109c9010eb0f641_0_77

The design idea there is that libweston shall have what I call a "color
manager" module. That module handles all the policy decisions about
color. It uses a CMM (Little CMS 2 in this case) for all the color
profile computations, and based on all the information it has available
from display EDID, ICC profile files, Wayland clients via the CM&HDR
protocol extension and more, it will ultimately produce
weston_color_transform objects.

weston_color_transform is a complete description of how to map a pixel
in one color model/space/encoding into another, maybe with user
preferred tuning/tone-mapping: e.g. from client content to the output's
blending space (output space but light-linear), or from the output's
blending space to the output's framebuffer space, or maybe even the
monitor wire space.

The mapping described by weston_color_transform shall be implemented by
libweston's GL-renderer or by the DRM-backend using KMS properties,
whatever works for each case. So the description cannot be opaque; it
has to map to GLSL shaders (easy) and to KMS properties (???).

Now the problem is, what should weston_color_transform look like?

The current design has two steps in a color transform (sketched in the
PS below):

- Transfer function: identity, the traditional set of three 1D LUTs, or
  something else.
- Color mapping: identity, a 3D LUT, or something else.

"Something else" is a placeholder for whatever we want to have, but the
problem in adding new types of transfer function or color mapping
representations (e.g. the fancy new GAMMA_MODEs) is: how will the color
manager create the parameters for those?

If we have ICC profiles as the original data, then we are probably
limited to what LCMS2 can produce. The issue with ICC profiles is that
they may contain 3D LUTs themselves, so they are not what I would call a
parametric model. OTOH, if we have, say, enumerated operations defined
by various HDR standards, we have to code those ourselves, and then
producing whatever fancy representation is less of a problem.

Maybe that is how it has to be: if the color transformations are defined
by ICC profiles, we might be stuck with old-school KMS color properties,
but HDR stuff that doesn't rely on ICC can use the fancier KMS
properties. I'm sure interesting questions will arise when e.g. you have
the monitor in HDR mode, described with standard HDR terms, and then you
have application content described with an ICC profile (maybe SDR, maybe
not).

We can always get a 3D LUT out of LCMS2, so theoretically it would be
possible to get a huge LUT and then fit whatever parameterised model you
have to that data set. But I worry that might be too costly to do
in-flight, at least in a way that blocks the compositor. Maybe do what I
hear shader compilers do: produce an unoptimised model fast, then
compute an optimised model asynchronously and replace it when ready. And
a disk cache(?).

A library probably makes sense in the long run, but for now, I would
have no idea at all what it should look like.


Thanks,
pq
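PS. To illustrate the two-step design mentioned above, here is a rough
sketch of the shape I have in mind, in made-up names of my own; it is
not the actual struct weston_color_transform from MR 582:

#include <stddef.h>

/* Step 1: per-channel transfer function. */
enum sketch_tf_type {
        SKETCH_TF_IDENTITY,
        SKETCH_TF_LUT_3x1D,     /* the traditional three 1D LUTs */
        /* "something else": parametric curves, enumerated HDR TFs, ... */
};

/* Step 2: color mapping. */
enum sketch_mapping_type {
        SKETCH_MAPPING_IDENTITY,
        SKETCH_MAPPING_3DLUT,
        /* "something else": 3x3 matrix, the fancy new GAMMA_MODEs, ... */
};

struct sketch_color_transform {
        enum sketch_tf_type tf;
        const float *tf_lut;            /* 3 * tf_lut_len samples when
                                         * tf == SKETCH_TF_LUT_3x1D */
        size_t tf_lut_len;

        enum sketch_mapping_type mapping;
        const float *lut3d;             /* lut3d_len^3 RGB triplets when
                                         * mapping == SKETCH_MAPPING_3DLUT */
        size_t lut3d_len;               /* points per dimension */
};

The GL-renderer can lower either step to a shader easily enough; the
hard part is deciding which KMS properties each step can be lowered to,
and what to fall back to when it cannot.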