From: "Navare, Manasi" <manasi.d.navare@intel.com>
To: "Kulkarni, Vandita" <vandita.kulkarni@intel.com>
Cc: "Nikula, Jani" <jani.nikula@intel.com>,
	"intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH] drm/i915/dsc: Fix the usage of uncompressed bpp
Date: Tue, 2 Nov 2021 08:09:51 -0700	[thread overview]
Message-ID: <20211102150946.GA17970@labuser-Z97X-UD5H> (raw)
In-Reply-To: <e7c3f25dc1374a7383bf554ee7a31cac@intel.com>

On Mon, Nov 01, 2021 at 09:35:32PM -0700, Kulkarni, Vandita wrote:
> > -----Original Message-----
> > From: Navare, Manasi D <manasi.d.navare@intel.com>
> > Sent: Tuesday, November 2, 2021 10:11 AM
> > To: Kulkarni, Vandita <vandita.kulkarni@intel.com>
> > Cc: intel-gfx@lists.freedesktop.org; Nikula, Jani <jani.nikula@intel.com>
> > Subject: Re: [PATCH] drm/i915/dsc: Fix the usage of uncompressed bpp
> > 
> > On Mon, Nov 01, 2021 at 09:45:23AM -0700, Kulkarni, Vandita wrote:
> > > > -----Original Message-----
> > > > From: Navare, Manasi D <manasi.d.navare@intel.com>
> > > > Sent: Saturday, October 30, 2021 4:43 AM
> > > > To: Kulkarni, Vandita <vandita.kulkarni@intel.com>
> > > > Cc: intel-gfx@lists.freedesktop.org; Nikula, Jani
> > > > <jani.nikula@intel.com>
> > > > Subject: Re: [PATCH] drm/i915/dsc: Fix the usage of uncompressed bpp
> > > >
> > > > On Wed, Oct 27, 2021 at 09:37:10PM -0700, Kulkarni, Vandita wrote:
> > > > > > -----Original Message-----
> > > > > > From: Navare, Manasi D <manasi.d.navare@intel.com>
> > > > > > Sent: Thursday, October 28, 2021 12:57 AM
> > > > > > To: Kulkarni, Vandita <vandita.kulkarni@intel.com>
> > > > > > Cc: intel-gfx@lists.freedesktop.org; Nikula, Jani
> > > > > > <jani.nikula@intel.com>
> > > > > > Subject: Re: [PATCH] drm/i915/dsc: Fix the usage of uncompressed
> > > > > > bpp
> > > > > >
> > > > > > On Wed, Oct 27, 2021 at 03:23:16PM +0530, Vandita Kulkarni wrote:
> > > > > > > The DP 1.4 spec limits the max compression bpp to
> > > > > > > uncompressed bpp - 1, which is supported from XELPD onwards.
> > > > > > > Instead of the uncompressed bpp, the max dsc input bpp was
> > > > > > > being used to limit the max compression bpp.
> > > > > >
> > > > > > So the input pipe BPP, which is the uncompressed bpp, is decided
> > > > > > by the input bpc. When this was initially written, we designed it
> > > > > > to respect the max_requested_bpc set by the user, so that is what
> > > > > > we use to decide the input bpc and hence the pipe_bpp. This input
> > > > > > pipe_bpp determines the compressed bpp that we calculate from all
> > > > > > the supported output bpps, which go all the way up to
> > > > > > uncompressed_output_bpp - 1.
> > > > > >
> > > > > > So I don't see the need to change the logic here. Moreover, I
> > > > > > don't see any change in the dsc_compute_bpp function, so I don't
> > > > > > understand the purpose of introducing the new max_dsc_pipe_bpp
> > > > > > variable here.
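The bpc-to-bpp relationship described above can be written down as a standalone sketch (C, with made-up helper names; these are not the actual i915 functions):

```c
#include <assert.h>

/* Uncompressed (input) bpp is three colour components per pixel. */
static int dsc_input_bpp(int input_bpc)
{
	return input_bpc * 3;
}

/* DP 1.4 limits the compressed bpp to strictly less than the
 * uncompressed bpp, i.e. at most uncompressed_bpp - 1. */
static int dsc_max_output_bpp(int input_bpp)
{
	return input_bpp - 1;
}
```

So for 8 bpc input the supported output bpps run up to 23, and for 12 bpc up to 35.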
> > > > >
> > > > > Thanks for the comments. I had a few more open questions around
> > > > > this, along with this patch.
> > > > >
> > > > > AFAIU, max_requested_bpc is there to limit the max bpc
> > > > > ("drm: Add connector property to limit max bpc"), and the driver
> > > > > is supposed to program the default bpc as per the connector
> > > > > limitation, which is 12 in the current driver implementation.
> > > > >
> > > > > I have a few queries around this design:
> > > > > It means that max_dsc_input_bpp would be set to 36 if that is
> > > > > supported by the sink and the platform, and we then make this our
> > > > > pipe_bpp.
> > > > > 1. Does this mean that we are assuming 12 bpc as input always?
> > > > > 2. What happens to the 8 bpc and 10 bpc formats?
> > > > >
> > > >
> > > > Yes, this driver policy was decided based on feedback from the
> > > > community as well as internal feedback from Ville: the decision of
> > > > input bpc should be based on the max_bpc requested by the user
> > > > through the connector property and the max bpc supported by the
> > > > platform. We take the min of the two so that we don't violate
> > > > either of the max constraints.
> > > > This was primarily suggested by Ville, since we should always
> > > > respect what the user has set as the upper limit on bpc; that
> > > > could, for example, be driven by an OEM panel that has limitations
> > > > or issues with higher bpcs, or by a display requirement for a
> > > > certain use case.
> > > > Hence, while we want to support as high a bpc as the platform and
> > > > sink allow for better display quality, it should not exceed the
> > > > max limit set by the user through the property.
> > > >
> > > > 8 and 10 bpc will be supported, but we want to start with the max
> > > > we can support, going down from there if needed. So for
> > > > compression, we choose the maximum input bpc and determine the
> > > > compressed bpp for that.
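That min-of-the-two-caps policy amounts to the following sketch (hypothetical helper, not the driver code):

```c
#include <assert.h>

/* Respect both the user's connector-property cap (max_requested_bpc)
 * and the platform cap by taking the smaller of the two. */
static int dsc_pick_input_bpc(int max_requested_bpc, int platform_max_bpc)
{
	return max_requested_bpc < platform_max_bpc ?
	       max_requested_bpc : platform_max_bpc;
}
```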
> > > >
> > > >
> > > > > We do not consider the actual pipe_bpp while computing the
> > > > > compression bpp; we reverse-calculate it from the link rate and
> > > > > the small joiner bpp limits.
> > > > > When forcing DSC, we might have a situation where the link can
> > > > > actually support the current bpp, or even more, but we force DSC
> > > > > enable anyway. In such cases we might end up with a compression
> > > > > bpp that is higher than the actual input bpp.
> > > >
> > > > Well, the only time we force dsc_enable is for the IGT DSC tests,
> > > > and there it is okay for the compression bpp to be higher, since
> > > > we are just validating the functionality and it's not an actual
> > > > user/use case.
> > >
> > > These test cases are part of CI. We hit a FIFO underrun in such
> > > cases, and that's treated as a fail.
> > 
> > >
> > > >
> > > > In the actual use case we will only enable DSC when the available
> > > > max link rate/lane count does not support the minimum bpp of 18
> > > > (min bpc of 6 * 3). In that case we keep the pipe_bpp, i.e. the
> > > > input bpp, at max_supported_input_bpc * 3.
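As a sketch of that fallback condition (hypothetical helper, not the driver code):

```c
#include <assert.h>

/* DSC is a fallback: it is entered only when even the minimum
 * uncompressed bpp (6 bpc * 3 = 18) does not fit on the link. */
static int needs_dsc(int max_link_bpp)
{
	return max_link_bpp < 6 * 3;
}
```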
> > > > >
> > > > > Now, even if we take the min of the higher compression bpp and
> > > > > max_requested_bpc * 3 - 1, we can still end up with a
> > > > > compression bpp greater than the actual pipe_bpp.
> > > > >
> > > > > As per the spec, when they say uncompressed bpp they actually
> > > > > mean 3 * bpc. If we go ahead with this approach of using
> > > > > max_requested_bpc, which is always 12, we cannot adhere to the
> > > > > spec.
> > > >
> > > > So the Bspec says that if the input bpp is (8 * 3) = 24, then
> > > > based on the engine's compression ratio capability it can support
> > > > an output compressed bpp of 8 - 23 in increments of 1 bpp. The
> > > > maximum that we can pick in the 8 - 23 range depends on
> > > > min(max bits_per_pixel that the max link rate/lane count can
> > > > handle, max bits per pixel from the small joiner RAM calculation).
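Putting those limits together in a small C sketch (hypothetical helper name; the clamp at input_bpp - 1 is the DP 1.4 rule under discussion). With a link-derived max of 28 and an input bpp of 24, the clamp yields 23 rather than 28:

```c
#include <assert.h>

static int dsc_output_bpp_ceiling(int link_max_bpp, int joiner_max_bpp,
				  int input_bpp)
{
	/* Start from whichever of the link-rate and small-joiner
	 * limits is smaller. */
	int bpp = link_max_bpp < joiner_max_bpp ? link_max_bpp
						: joiner_max_bpp;

	/* DP 1.4: the compressed bpp must stay strictly below the
	 * uncompressed (input) bpp. */
	if (bpp > input_bpp - 1)
		bpp = input_bpp - 1;

	return bpp;
}
```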
> > >
> > > In case of forced DSC as part of the IGT tests, the link can
> > > actually support more than 24 bpp; since we are forcing, I have
> > > seen a case where it could support 28 bpp.
> > >
> > 
> > From IGT we are forcing DSC for all supported input bpcs, right? So
> > for bpc = 8, input bpp = 24. IMO, in the kernel code, when force_dsc
> > is set we should calculate pipe_bpp using the bpc requested from IGT
> > as the input bpc and not care about max_connector_bpc. Then input
> > bpp = 24 and the limits of the compressed bpp would be 8 - 23.
> Right, I had thought of this: adding a check on force_dsc_en before
> doing
> pipe_config->pipe_bpp = max_dsc_pipe_bpp;
>
> But since you said the design is to respect the max_requested_bpc from
> the user, should we instead use the max_requested_bpc property in the
> IGT, setting the max to 8 bpc, since this test simulates testing an
> 8 bpc panel? Similarly for 10 bpc?
> 
> Thanks
> Vandita

Yes, that would be a better way than adding a special force_dsc_en case
in the kernel.
We can set max_requested_bpc to 8, 10, or 12; then pipe_bpp will get
set to 24/30/36 respectively as per the current logic, and the
compressed bpp will get capped at pipe_bpp - 1.
Since we would still use the max link rate/max lane count, we will have
much more available link BW than needed, but we will just be validating
the functionality of, e.g., 24 bpp to 23 bpp compression.
This sounds like a good approach.
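For reference, the mapping such tests would exercise, as a small C sketch (illustrative names only, not kernel or IGT code):

```c
#include <assert.h>

/* For each max_requested_bpc the test sets, the resulting pipe_bpp
 * and compressed-bpp cap under the current logic. */
struct dsc_limits {
	int pipe_bpp;		/* uncompressed, bpc * 3 */
	int max_compressed_bpp;	/* capped at pipe_bpp - 1 */
};

static struct dsc_limits dsc_limits_for_bpc(int bpc)
{
	struct dsc_limits l = { bpc * 3, bpc * 3 - 1 };
	return l;
}
```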

Manasi

> 
> > 
> > Ideally, the link rate and lane count should now be lowered to the
> > least required to support that compressed bpp; then we won't see any
> > underruns.
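A very rough C sketch of that bandwidth comparison (assumes 8b/10b coding efficiency and ignores FEC overhead; purely illustrative, not the driver's calculation):

```c
#include <assert.h>

/* Payload bandwidth of a DP link: per-lane symbol rate times lane
 * count, scaled by 8b/10b coding efficiency (8/10). */
static long long dp_payload_kbps(long long link_rate_khz, int lane_count)
{
	return link_rate_khz * lane_count * 8 / 10;
}

/* Bandwidth the (compressed) stream needs. */
static long long stream_kbps(long long pixel_clock_khz, int bpp)
{
	return pixel_clock_khz * bpp;
}

/* Would this link rate/lane count combination carry the stream? */
static int link_config_sufficient(long long link_rate_khz, int lane_count,
				  long long pixel_clock_khz, int bpp)
{
	return dp_payload_kbps(link_rate_khz, lane_count) >=
	       stream_kbps(pixel_clock_khz, bpp);
}
```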
> > 
> > Manasi
> > 
> > > So min(28, (36 - 1)) would be 28, which is not as per the spec.
> > >
> > > Hence, in either place, kernel or IGT, we need to honour the limits
> > > of bpc * 3 as per the DP 1.4 spec.
> > >
> > > If you look at the IGT, we have cases of 8 bpc and 10 bpc. We need
> > > to address this either in the driver, by using the actual
> > > uncompressed bpp limits, or in user space, by changing the
> > > max_requested_bpc for the 8 bpc and 10 bpc cases.
> > >
> > > This patch is tested and resolves the FIFO underruns in the
> > > force-DSC cases. The problem is only when we are forcing DSC and
> > > the link rate can actually support a higher bpp. And we have
> > > multiple IGTs trying to use 8 bpc and 10 bpc.
> > >
> > > Thanks
> > > Vandita
> > >
> > > >
> > > > So we are good here in terms of how we calculate our compressed
> > > > bpp. However, one thing you should double-check for XeLPD
> > > > (display 13+) platforms: you just pick
> > > > min(bits_per_pixel, pipe_bpp - 1), but I remember seeing that the
> > > > spec still says the target bpp has to be 8 to 27 even when the
> > > > supported input bpcs are 8/10/12 (please check whether it
> > > > actually goes all the way from 8 to 35).
> > > >
> > > > Manasi
> > > >
> > > > >
> > > > > Thanks,
> > > > > Vandita
> > > > >
> > > > > > Manasi
> > > > > >
> > > > > > >
> > > > > > > Fixes: 831d5aa96c97 ("drm/i915/xelpd: Support DP1.4 compression BPPs")
> > > > > > > Signed-off-by: Vandita Kulkarni <vandita.kulkarni@intel.com>
> > > > > > > ---
> > > > > > >  drivers/gpu/drm/i915/display/intel_dp.c | 7 ++++---
> > > > > > >  1 file changed, 4 insertions(+), 3 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > > > > > index 9d8132dd4cc5..1f7e666ae490 100644
> > > > > > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > > > > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > > > > > @@ -1322,7 +1322,7 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> > > > > > >  	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
> > > > > > >  	const struct drm_display_mode *adjusted_mode =
> > > > > > >  		&pipe_config->hw.adjusted_mode;
> > > > > > > -	int pipe_bpp;
> > > > > > > +	int pipe_bpp, max_dsc_pipe_bpp;
> > > > > > >  	int ret;
> > > > > > >
> > > > > > >  	pipe_config->fec_enable = !intel_dp_is_edp(intel_dp) &&
> > > > > > > @@ -1331,7 +1331,8 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> > > > > > >  	if (!intel_dp_supports_dsc(intel_dp, pipe_config))
> > > > > > >  		return -EINVAL;
> > > > > > >
> > > > > > > -	pipe_bpp = intel_dp_dsc_compute_bpp(intel_dp, conn_state->max_requested_bpc);
> > > > > > > +	pipe_bpp = pipe_config->pipe_bpp;
> > > > > > > +	max_dsc_pipe_bpp = intel_dp_dsc_compute_bpp(intel_dp, conn_state->max_requested_bpc);
> > > > > > >
> > > > > > >  	/* Min Input BPC for ICL+ is 8 */
> > > > > > >  	if (pipe_bpp < 8 * 3) {
> > > > > > > @@ -1345,7 +1346,7 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> > > > > > >  	 * Optimize this later for the minimum possible link rate/lane count
> > > > > > >  	 * with DSC enabled for the requested mode.
> > > > > > >  	 */
> > > > > > > -	pipe_config->pipe_bpp = pipe_bpp;
> > > > > > > +	pipe_config->pipe_bpp = max_dsc_pipe_bpp;
> > > > > > >  	pipe_config->port_clock = limits->max_rate;
> > > > > > >  	pipe_config->lane_count = limits->max_lane_count;
> > > > > > >
> > > > > > > --
> > > > > > > 2.32.0
> > > > > > >


Thread overview: 10+ messages
2021-10-27  9:53 [Intel-gfx] [PATCH] drm/i915/dsc: Fix the usage of uncompressed bpp Vandita Kulkarni
2021-10-27 14:37 ` [Intel-gfx] ✓ Fi.CI.BAT: success for " Patchwork
2021-10-27 18:02 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-10-27 19:26 ` [Intel-gfx] [PATCH] " Navare, Manasi
2021-10-28  4:37   ` Kulkarni, Vandita
2021-10-29 23:12     ` Navare, Manasi
2021-11-01 16:45       ` Kulkarni, Vandita
2021-11-02  4:40         ` Navare, Manasi
2021-11-02  4:35           ` Kulkarni, Vandita
2021-11-02 15:09             ` Navare, Manasi [this message]
