On Mon, 17 May 2021 15:39:03 -0400
Vitaly Prosyak wrote:

> On 2021-05-17 12:48 p.m., Sebastian Wick wrote:
> > On 2021-05-17 10:57, Pekka Paalanen wrote:
> >> On Fri, 14 May 2021 17:05:11 -0400
> >> Harry Wentland wrote:
> >>
> >>> On 2021-04-27 10:50 a.m., Pekka Paalanen wrote:
> >>> > On Mon, 26 Apr 2021 13:38:49 -0400
> >>> > Harry Wentland wrote:
> >>
> >> ...
> >>
> >>> >> ## Mastering Luminances
> >>> >>
> >>> >> Now we are able to use the PQ 2084 EOTF to define the luminance of
> >>> >> pixels in absolute terms. Unfortunately we're again presented with
> >>> >> physical limitations of the display technologies on the market today.
> >>> >> Here are a few examples of luminance ranges of displays.
> >>> >>
> >>> >> | Display                  | Luminance range in nits |
> >>> >> | ------------------------ | ----------------------- |
> >>> >> | Typical PC display       | 0.3 - 200               |
> >>> >> | Excellent LCD HDTV       | 0.3 - 400               |
> >>> >> | HDR LCD w/ local dimming | 0.05 - 1,500            |
> >>> >>
> >>> >> Since no display can currently show the full 0.0005 to 10,000 nits
> >>> >> luminance range, the display will need to tone-map the HDR content,
> >>> >> i.e. fit the content within the display's capabilities. To assist
> >>> >> with tone-mapping, HDR content is usually accompanied by metadata
> >>> >> that describes (among other things) the minimum and maximum mastering
> >>> >> luminance, i.e. the minimum and maximum luminance of the display that
> >>> >> was used to master the HDR content.
> >>> >>
> >>> >> The HDR metadata is currently defined on the drm_connector via the
> >>> >> hdr_output_metadata blob property.
> >>> >>
> >>> >> It might be useful to define per-plane HDR metadata, as different
> >>> >> planes might have been mastered differently.
> >>> >
> >>> > I don't think this would directly help with the dynamic range
> >>> > blending problem.
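For reference, the PQ (SMPTE ST 2084) EOTF mentioned above can be
sketched as below. This is just a restatement of the published
constants from the spec, not code from any driver or compositor:

```python
# SMPTE ST 2084 (PQ) EOTF: map a normalized code value in [0, 1]
# to absolute luminance in nits (cd/m^2). Constants from the spec.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(code_value: float) -> float:
    """Decode a PQ-encoded code value to display luminance in nits."""
    v = code_value ** (1 / M2)
    y = (max(v - C1, 0.0) / (C2 - C3 * v)) ** (1 / M1)
    return 10000.0 * y   # PQ is defined up to a 10,000 nit peak

# Code value 1.0 decodes to the full 10,000 nit peak:
# pq_eotf(1.0) -> 10000.0
```

This is what makes PQ "absolute": a code value corresponds to a fixed
number of nits, regardless of the display it lands on, which is why the
mastering/display luminance mismatch below needs handling at all.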
> >>> > You still need to establish the mapping between the optical
> >>> > values from two different EOTFs and dynamic ranges. Or can you
> >>> > know which optical values match the mastering display maximum and
> >>> > minimum luminances for not-PQ?
> >>> >
> >>>
> >>> My understanding of this is probably best illustrated by this example:
> >>>
> >>> Assume HDR was mastered on a display with a maximum white level of
> >>> 500 nits and played back on a display that supports a max white
> >>> level of 400 nits. If you know the mastering white level of 500, you
> >>> know that this is the maximum value you need to compress down to 400
> >>> nits, allowing you to use the full extent of the 400-nit panel.
> >>
> >> Right, but in the kernel, where do you get these nits values from?
> >>
> >> hdr_output_metadata blob is infoframe data to the monitor. I think
> >> this should be independent of the metadata used for color
> >> transformations in the display pipeline before the monitor.
> >>
> >> EDID may tell us the monitor HDR metadata, but again, what is used in
> >> the color transformations should be independent, because EDIDs lie,
> >> lighting environments change, and users have different preferences.
> >>
> >> What about black levels?
> >>
> >> Do you want to do black level adjustment?
> >>
> >> How exactly should the compression work?
> >>
> >> Where do you map the mid-tones?
> >>
> >> What if the end user wants something different?
> >
> > I suspect that this is not about tone mapping at all. The use cases
> > listed always have the display in PQ mode and just assume that no
> > content exceeds the PQ limitations. Then you can simply bring all
> > content to the color space with a matrix multiplication and then map
> > the linear light content somewhere into the PQ range. Tone mapping is
> > performed in the display only.

The use cases do use the word "desktop", though.
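To make Harry's 500-nit to 400-nit example concrete, here is one naive
way such compression could look. This is a hand-wavy sketch for
discussion only; the knee value is an arbitrary assumption, and real
tone-mapping curves are exactly what the questions above are probing:

```python
def tonemap(nits: float,
            mastering_max: float = 500.0,
            display_max: float = 400.0,
            knee: float = 300.0) -> float:
    """Naive highlight compression: pass luminance below the knee
    through unchanged, and linearly squeeze [knee, mastering_max]
    into [knee, display_max]."""
    if nits <= knee:
        return nits
    slope = (display_max - knee) / (mastering_max - knee)
    return knee + (nits - knee) * slope

# The 500 nit mastering peak lands exactly on the 400 nit panel peak:
# tonemap(500.0) -> 400.0
```

Note that this sidesteps the hard questions: the knee placement decides
where mid-tones go, nothing here touches black levels, and a user might
legitimately want a different curve entirely.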
Harry, could you expand on this: are you seeking a design that is good
for generic desktop compositors too, or one that is more tailored to
"embedded" video player systems taking the most advantage of
(potentially fixed-function) hardware?

What matrix would one choose? Which render intent would it correspond
to?

If you need to adapt different dynamic ranges into the blending dynamic
range, would a simple linear transformation really be enough?

> > From a generic wayland compositor point of view this is uninteresting.
> >
> It is a compositor's decision whether or not to provide the metadata
> property to the kernel. The metadata can be available from one or
> multiple clients, or most likely not available at all.
>
> Compositors may put a display in HDR10 mode (where the PQ 2084 inverse
> EOTF and tone mapping occur in the display), or use native mode,
> attach no metadata to the connector, and do tone mapping in the
> compositor.
>
> It is all about user preference or compositor design, or a combination
> of both options.

Indeed. The thing here is that you cannot just add KMS UAPI; you also
need to have the FOSS userspace to go with it. So you need to have your
audience defined, and userspace patches written, reviewed, and agreed
to be a good idea.

I'm afraid this particular UAPI design would be difficult to justify
with Weston. Maybe Kodi is a better audience?

But then again, one also needs to consider whether it is enough for a
new UAPI to satisfy only part of the possible audience and then need
yet another new UAPI to satisfy the rest. Adding new UAPI requires
defining the interactions with all existing UAPI as well. Maybe we do
need several different UAPIs for the "same" things if the hardware
designs are too different to cater to with just one.

Thanks,
pq