On Mon, 7 Jun 2021 18:07:23 +0000 "Shankar, Uma" wrote:

> > -----Original Message-----
> > From: dri-devel On Behalf Of Pekka Paalanen
> > Sent: Monday, June 7, 2021 1:00 PM
> > To: Harry Wentland
> > Cc: intel-gfx@lists.freedesktop.org; Shankar, Uma ; Sebastian Wick ;
> > dri-devel@lists.freedesktop.org; Modem, Bhanuprakash
> > Subject: Re: [PATCH 0/9] Enhance pipe color support for multi segmented luts
> >
> > On Fri, 4 Jun 2021 14:51:25 -0400
> > Harry Wentland wrote:
> >
> > > On 2021-06-01 6:41 a.m., Uma Shankar wrote:
> > > > Modern hardwares have multi segmented lut approach to prioritize the
> > > > darker regions of the spectrum. This series introduces a new UAPI to
> > > > define the lut ranges supported by the respective hardware.
> > > >
> > > > This also enables Pipe Color Management Support for Intel's XE_LPD hw.
> > > > Enable Support for Pipe Degamma with the increased lut samples
> > > > supported by hardware. This also adds support for newly introduced
> > > > Logarithmic Gamma for XE_LPD. Also added the gamma readout support.
> > > >
> > > > The Logarithmic gamma implementation on XE_LPD is non linear and
> > > > adds 25 segments with non linear lut samples in each segment. The
> > > > expectation is userspace will create the luts as per this
> > > > distribution and pass the final samples to driver to be programmed in hardware.
> > > >
> > >
> > > Is this design targetting Intel XE_LPD HW in particular or is it
> > > intended to be generic?
> > >
> > > If this is intended to be generic I think it would benefit from a lot
> > > more documentation. At this point it's difficult for me to see how to
> > > adapt this to AMD HW. It would take me a while to be comfortable to
> > > make a call on whether we can use it or not. And what about other vendors?
> > >
> > > I think we need to be cautious in directly exposing HW functionality
> > > through UAPI. The CM parts of AMD HW seem to be changing in some way
> > > each generation and it looks like the same is true for Intel. The
> > > trouble we have with adapting the old gamma/degamma properties to
> > > modern HW is some indication to me that this approach is somewhat problematic.
> > >
> > > It would be useful to understand and document the specific use-cases
> > > we want to provide to userspace implementers with this functionality.
> > > Do we want to support modern transfer functions such as PQ or HLG? If
> > > so, it might be beneficial to have an API to explicitly specify that,
> > > and then use LUT tables in drivers that are optimized for the implementing HW.
> >
> > Hi Harry,
> >
> > from my very limited understanding so far, enum might be fine for PQ, but HLG is not
> > just one transfer function, although it may often be confused as one. PQ and HLG
> > are fundamentally different designs to HDR broadcasting I believe. It would be
> > unfortunate to make a mistake here, engraving it into UAPI.
>
> Yes Pekka, putting this in UAPI may limit us.
>
> > > Or is the use case tone mapping? If so, would a parametric definition
> > > of tone mapping be easier to manage?
> >
> > A very good question at least I have no idea about.
>
> Responded on earlier mail in thread. For non linear lut (gamma
> block), usecase is primarily tone mapping but there are
> implementations where non linear blending is seeked (AFAIR Android
> does that), so it leaves room for those usecases as well.

Yes, non-linear blending is a thing, unfortunately.
Developers do not usually understand what could be wrong with simply blending
"RGBA values", so most software just does that. It produces *a* result, and if
all you use it for is shades of black (shadows) or rounded window corners, you
never even notice anything wrong with it. The world has become so accustomed to
seeing "incorrect blending" that people think doing anything else is wrong, and
they complain if you move to physically correct blending because it changes the
strength of shadows. Hence any software migrating to a more correct blending
formula may be met with bug reports.

What's worse, pre-multiplied alpha is used as an optimization, as implemented
everywhere including Wayland, in a way that is actually a step *away* from
correct blending. If one wants to do correct blending, you first need to divide
out the pre-multiplied alpha, then linearize, then blend.

Luckily(?), non-linear blending of HDR content will probably look a lot worse
than the same mistake on SDR content.

Thanks,
pq
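
PS: in case a rough sketch helps, this is the order of operations I mean, for
one colour channel blended over an opaque destination. It assumes sRGB-style
encoding and float channels in [0, 1] purely for illustration; the function
names are made up and this is not code from the patch series:

#include <math.h>

/* Illustration only: sRGB EOTF and its inverse. */
static float srgb_to_linear(float c)
{
        return c <= 0.04045f ? c / 12.92f
                             : powf((c + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float c)
{
        return c <= 0.0031308f ? c * 12.92f
                               : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

/*
 * Blend one pre-multiplied, non-linearly encoded source channel over an
 * opaque destination channel: un-premultiply, linearize both sides,
 * blend in linear light, then re-encode for the output.
 */
static float blend_channel(float src_premult, float src_alpha, float dst)
{
        float src = src_alpha > 0.0f ? src_premult / src_alpha : 0.0f;
        float src_lin = srgb_to_linear(src);
        float dst_lin = srgb_to_linear(dst);
        float out_lin = src_lin * src_alpha + dst_lin * (1.0f - src_alpha);

        return linear_to_srgb(out_lin);
}

Skipping the un-premultiply and linearize steps and mixing the encoded values
directly is the "incorrect blending" I am talking about above.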