From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Jani Nikula <jani.nikula@intel.com>,
	"Atwood, Matthew S" <matthew.s.atwood@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [RFC] drm/i915/dp: optimize eDP 1.4+ link config fast and narrow
Date: Wed, 9 May 2018 05:21:25 -0700
Message-ID: <20180509122125.GB2552@intel.com>
In-Reply-To: <20180509071321.28563-1-jani.nikula@intel.com>

On Wed, May 09, 2018 at 10:13:21AM +0300, Jani Nikula wrote:
> We've opted to use the maximum link rate and lane count for eDP panels,
> because typically the maximum supported configuration reported by the
> panel has matched the native resolution requirements of the panel, and
> optimizing the link has led to problems.
> 
> With the eDP 1.4 rate select method and DSC features, this is decreasingly
> the case. There's a need to optimize the link parameters. Moreover, eDP
> 1.3 already states that a fast link with fewer lanes is preferred over
> wide and slow. (Wide and slow should still be more reliable for longer
> cable lengths.)
> 
> Additionally, there have been reports of panels failing on arbitrary
> link configurations, although arguably all configurations they claim to
> support should work.
> 
> Optimize eDP 1.4+ link config fast and narrow.
> 
> Side note: The implementation has a near duplicate of the link config
> function, with just the two inner for loops turned inside out. Perhaps
> there'd be a way to make this, say, more table driven to reduce the
> duplication, but seems like that would lead to duplication in the table
> generation. We'll also have to see how the link config optimization for
> DSC turns out.
> 
> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
> Cc: Manasi Navare <manasi.d.navare@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>

Cc: Matt Atwood <matthew.s.atwood@intel.com>

I believe Matt is interested in this and knows who could test this for us.

> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105267
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>

This matches my understanding of the eDP 1.4 spec, and I believe this is
the way to go, so

Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

but probably better to get a proper review and wait for someone
to test...
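
To make the fast vs. wide trade-off concrete, here's a small userspace
sketch of the bandwidth math (illustration only, not from the patch; the
panel numbers are hypothetical and the two helpers are simplified
stand-ins for intel_dp_link_required() and intel_dp_max_data_rate()).
Clocks are in kHz and data rates in kB/s; with 8b/10b coding each 10-bit
symbol carries one payload byte per lane, so the symbol clock doubles as
the per-lane payload rate.

#include <stdio.h>

/* Data rate the mode needs: pixel clock (kHz) * bpp, bits to bytes. */
static int link_required(int pixel_clock, int bpp)
{
	return (pixel_clock * bpp + 7) / 8;
}

/* Payload a link can carry: one byte per symbol clock per lane. */
static int max_data_rate(int link_clock, int lanes)
{
	return link_clock * lanes;
}

static const int rates[] = { 162000, 270000, 540000 }; /* RBR, HBR, HBR2 */
#define NRATES 3

static void pick(int mode_rate, int fast)
{
	int lanes, i;

	if (fast) {
		/* Fast and narrow: fewest lanes first, then lowest rate. */
		for (lanes = 1; lanes <= 4; lanes <<= 1)
			for (i = 0; i < NRATES; i++)
				if (mode_rate <= max_data_rate(rates[i], lanes)) {
					printf("fast+narrow: %d lane(s) at %d kHz\n",
					       lanes, rates[i]);
					return;
				}
	} else {
		/* Slow and wide: lowest rate first, then fewest lanes. */
		for (i = 0; i < NRATES; i++)
			for (lanes = 1; lanes <= 4; lanes <<= 1)
				if (mode_rate <= max_data_rate(rates[i], lanes)) {
					printf("slow+wide: %d lane(s) at %d kHz\n",
					       lanes, rates[i]);
					return;
				}
	}
	printf("mode does not fit any link config\n");
}

int main(void)
{
	int mode_rate = link_required(148500, 24); /* 1920x1080@60, 24 bpp */

	pick(mode_rate, 1);
	pick(mode_rate, 0);
	return 0;
}

With these made-up numbers the mode needs 445500 kB/s: the fast policy
picks 1 lane at 540000 kHz (540000 kB/s available), while the wide policy
picks 4 lanes at 162000 kHz (648000 kB/s), which is exactly the behavior
change this patch makes for eDP 1.4+ panels.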

> 
> ---
> 
> Untested. It's possible this helps the referenced bug. The downside is
> that this patch has a bunch of dependencies that are too much to
> backport to stable kernels. If the patch works, we may need to consider
> hacking together an uglier backport.
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 73 ++++++++++++++++++++++++++++++++++-------
>  1 file changed, 62 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index dde92e4af5d3..1ec62965ece3 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -1768,6 +1768,42 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  	return false;
>  }
>  
> +/* Optimize link config in order: max bpp, min lanes, min clock */
> +static bool
> +intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
> +				  struct intel_crtc_state *pipe_config,
> +				  const struct link_config_limits *limits)
> +{
> +	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> +	int bpp, clock, lane_count;
> +	int mode_rate, link_clock, link_avail;
> +
> +	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
> +		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> +						   bpp);
> +
> +		for (lane_count = limits->min_lane_count;
> +		     lane_count <= limits->max_lane_count;
> +		     lane_count <<= 1) {
> +			for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
> +				link_clock = intel_dp->common_rates[clock];
> +				link_avail = intel_dp_max_data_rate(link_clock,
> +								    lane_count);
> +
> +				if (mode_rate <= link_avail) {
> +					pipe_config->lane_count = lane_count;
> +					pipe_config->pipe_bpp = bpp;
> +					pipe_config->port_clock = link_clock;
> +
> +					return true;
> +				}
> +			}
> +		}
> +	}
> +
> +	return false;
> +}
> +
>  static bool
>  intel_dp_compute_link_config(struct intel_encoder *encoder,
>  			     struct intel_crtc_state *pipe_config)
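
On the side note about the near duplicate: for what it's worth, one
"table driven" shape could be to precompute the candidate order per
policy and share a single matching loop. Untested sketch with made-up
names, not from the patch:

#include <stdbool.h>
#include <stdio.h>

struct candidate { int lane_count; int clock; };

/* Fill c[] (sized by the caller) with (lane, clock) pairs in policy order. */
static int build_order(struct candidate *c, const int *rates, int nrates,
		       int max_lanes, bool fast)
{
	int n = 0, lanes, i;

	if (fast) {
		for (lanes = 1; lanes <= max_lanes; lanes <<= 1)
			for (i = 0; i < nrates; i++)
				c[n++] = (struct candidate){ lanes, rates[i] };
	} else {
		for (i = 0; i < nrates; i++)
			for (lanes = 1; lanes <= max_lanes; lanes <<= 1)
				c[n++] = (struct candidate){ lanes, rates[i] };
	}
	return n;
}

int main(void)
{
	const int rates[] = { 162000, 270000, 540000 };
	struct candidate order[9]; /* 3 lane counts x 3 rates */
	int i, n = build_order(order, rates, 3, 4, true);

	for (i = 0; i < n; i++)
		printf("%d lane(s) at %d kHz\n",
		       order[i].lane_count, order[i].clock);
	return 0;
}

It does confirm the commit message's point, though: the two nested loops
just move into build_order(), so the duplication shifts rather than
disappears.
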
> @@ -1792,13 +1828,15 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>  	limits.min_bpp = 6 * 3;
>  	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
>  
> -	if (intel_dp_is_edp(intel_dp)) {
> +	if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
>  		/*
>  		 * Use the maximum clock and number of lanes the eDP panel
> -		 * advertizes being capable of. The panels are generally
> -		 * designed to support only a single clock and lane
> -		 * configuration, and typically these values correspond to the
> -		 * native resolution of the panel.
> +		 * advertizes being capable of. The eDP 1.3 and earlier panels
> +		 * are generally designed to support only a single clock and
> +		 * lane configuration, and typically these values correspond to
> +		 * the native resolution of the panel. With eDP 1.4 rate select
> +		 * and DSC, this is decreasingly the case, and we need to be
> +		 * able to select less than maximum link config.
>  		 */
>  		limits.min_lane_count = limits.max_lane_count;
>  		limits.min_clock = limits.max_clock;
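
For reference, my reading of the new gate (from include/drm/drm_dp_helper.h,
worth double-checking): intel_dp->edp_dpcd[0] caches the eDP DPCD revision
read from DPCD address 0x700, so panels reporting eDP 1.3 or earlier keep
the old max-everything behavior:

#define DP_EDP_DPCD_REV		0x700
# define DP_EDP_11		0x00
# define DP_EDP_12		0x01
# define DP_EDP_13		0x02
# define DP_EDP_14		0x03	/* only rev >= 0x03 takes the new path */
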
> @@ -1812,12 +1850,25 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>  		      intel_dp->common_rates[limits.max_clock],
>  		      limits.max_bpp, adjusted_mode->crtc_clock);
>  
> -	/*
> -	 * Optimize for slow and wide. This is the place to add alternative
> -	 * optimization policy.
> -	 */
> -	if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits))
> -		return false;
> +	if (intel_dp_is_edp(intel_dp)) {
> +		/*
> +		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
> +		 * section A.1: "It is recommended that the minimum number of
> +		 * lanes be used, using the minimum link rate allowed for that
> +		 * lane configuration."
> +		 *
> +		 * Note that we use the max clock and lane count for eDP 1.3 and
> +		 * earlier, and fast vs. wide is irrelevant.
> +		 */
> +		if (!intel_dp_compute_link_config_fast(intel_dp, pipe_config,
> +						       &limits))
> +			return false;
> +	} else {
> +		/* Optimize for slow and wide. */
> +		if (!intel_dp_compute_link_config_wide(intel_dp, pipe_config,
> +						       &limits))
> +			return false;
> +	}
>  
>  	DRM_DEBUG_KMS("DP lane count %d clock %d bpp %d\n",
>  		      pipe_config->lane_count, pipe_config->port_clock,
> -- 
> 2.11.0
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 7+ messages
2018-05-09  7:13 [RFC] drm/i915/dp: optimize eDP 1.4+ link config fast and narrow Jani Nikula
2018-05-09  8:35 ` ✓ Fi.CI.BAT: success for " Patchwork
2018-05-09 10:08 ` ✓ Fi.CI.IGT: " Patchwork
2018-05-09 12:21 ` Rodrigo Vivi [this message]
2018-05-09 15:09   ` [RFC] " Atwood, Matthew S
2018-05-09 15:30     ` Atwood, Matthew S
2018-05-09 18:58 ` Manasi Navare
