From: Imre Deak <imre.deak@intel.com>
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Uma Shankar <uma.shankar@intel.com>
Subject: [PATCH v2 05/21] drm/i915/dp: Use drm_dp_max_dprx_data_rate()
Date: Tue, 20 Feb 2024 23:18:25 +0200
Message-ID: <20240220211841.448846-6-imre.deak@intel.com>
In-Reply-To: <20240220211841.448846-1-imre.deak@intel.com>

Instead of intel_dp_max_data_rate(), use the equivalent
drm_dp_max_dprx_data_rate(), which was copied from the former in a
previous patch.
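
For reference, both helpers compute the same maximum DPRX payload rate:
raw link bit rate times lane count, scaled by the channel coding
efficiency (80% for 8b/10b, 96.71% for 128b/132b, per the comment being
removed below). A minimal standalone sketch of that calculation follows;
the function name and the ppm constants are illustrative assumptions for
this sketch, not the kernel helper itself:

static int example_max_dprx_data_rate(int max_link_rate, int max_lanes)
{
	/*
	 * max_link_rate is in 10 kbps units (e.g. 810000 for 8.1 Gbps HBR3);
	 * rates of 1000000 and above are UHBR and use 128b/132b coding.
	 * Efficiencies are expressed in ppm to stay in integer math.
	 */
	long long efficiency_ppm = max_link_rate >= 1000000 ? 967100 : 800000;
	long long link_rate_kbps = (long long)max_link_rate * 10;

	/* payload kBps = raw kbps * lanes * efficiency / 8 bits per byte */
	return (int)(link_rate_kbps * max_lanes * efficiency_ppm /
		     (1000000LL * 8));
}

For example, an 8.1 Gbps x4 link gives 8100000 kbps * 4 * 0.8 / 8 =
3240000 kBps, the same value the removed intel_dp_max_data_rate()
returned for that configuration.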

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Uma Shankar <uma.shankar@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c |  2 +-
 drivers/gpu/drm/i915/display/intel_dp.c      | 62 +++-----------------
 drivers/gpu/drm/i915/display/intel_dp.h      |  1 -
 drivers/gpu/drm/i915/display/intel_dp_mst.c  |  2 +-
 4 files changed, 10 insertions(+), 57 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 2ee26d19c200b..e1a4200f67a7e 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -2478,7 +2478,7 @@ intel_link_compute_m_n(u16 bits_per_pixel_x16, int nlanes,
 	u32 link_symbol_clock = intel_dp_link_symbol_clock(link_clock);
 	u32 data_m = intel_dp_effective_data_rate(pixel_clock, bits_per_pixel_x16,
 						  bw_overhead);
-	u32 data_n = intel_dp_max_data_rate(link_clock, nlanes);
+	u32 data_n = drm_dp_max_dprx_data_rate(link_clock, nlanes);
 
 	/*
 	 * Windows/BIOS uses fixed M/N values always. Follow suit.
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 88606e336a920..0a2ee43392142 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -383,52 +383,6 @@ int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				1000000 * 16 * 8);
 }
 
-/*
- * Given a link rate and lanes, get the data bandwidth.
- *
- * Data bandwidth is the actual payload rate, which depends on the data
- * bandwidth efficiency and the link rate.
- *
- * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
- * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
- * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
- * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
- * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
- * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
- *
- * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
- * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
- * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
- * does not match the symbol clock, the port clock (not even if you think in
- * terms of a byte clock), nor the data bandwidth. It only matches the link bit
- * rate in units of 10000 bps.
- */
-int
-intel_dp_max_data_rate(int max_link_rate, int max_lanes)
-{
-	int ch_coding_efficiency =
-		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
-	int max_link_rate_kbps = max_link_rate * 10;
-
-	/*
-	 * UHBR rates always use 128b/132b channel encoding, and have
-	 * 97.71% data bandwidth efficiency. Consider max_link_rate the
-	 * link bit rate in units of 10000 bps.
-	 */
-	/*
-	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
-	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
-	 * out to be a nop by coincidence:
-	 *
-	 *	int max_link_rate_kbps = max_link_rate * 10;
-	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
-	 *	max_link_rate = max_link_rate_kbps / 8;
-	 */
-	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
-					      ch_coding_efficiency),
-				  1000000 * 8);
-}
-
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
@@ -658,7 +612,7 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
 	int mode_rate, max_rate;
 
 	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
-	max_rate = intel_dp_max_data_rate(link_rate, lane_count);
+	max_rate = drm_dp_max_dprx_data_rate(link_rate, lane_count);
 	if (mode_rate > max_rate)
 		return false;
 
@@ -1262,7 +1216,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
-	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
+	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
 	mode_rate = intel_dp_link_required(target_clock,
 					   intel_dp_mode_min_output_bpp(connector, mode));
 
@@ -1612,8 +1566,8 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 			for (lane_count = limits->min_lane_count;
 			     lane_count <= limits->max_lane_count;
 			     lane_count <<= 1) {
-				link_avail = intel_dp_max_data_rate(link_rate,
-								    lane_count);
+				link_avail = drm_dp_max_dprx_data_rate(link_rate,
+								       lane_count);
 
 				if (mode_rate <= link_avail) {
 					pipe_config->lane_count = lane_count;
@@ -2467,8 +2421,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 			    "DP link rate required %i available %i\n",
 			    intel_dp_link_required(adjusted_mode->crtc_clock,
 						   to_bpp_int_roundup(pipe_config->dsc.compressed_bpp_x16)),
-			    intel_dp_max_data_rate(pipe_config->port_clock,
-						   pipe_config->lane_count));
+			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
+						      pipe_config->lane_count));
 	} else {
 		drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp %d\n",
 			    pipe_config->lane_count, pipe_config->port_clock,
@@ -2478,8 +2432,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 			    "DP link rate required %i available %i\n",
 			    intel_dp_link_required(adjusted_mode->crtc_clock,
 						   pipe_config->pipe_bpp),
-			    intel_dp_max_data_rate(pipe_config->port_clock,
-						   pipe_config->lane_count));
+			    drm_dp_max_dprx_data_rate(pipe_config->port_clock,
+						      pipe_config->lane_count));
 	}
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index e2875e03afba0..20252984b3b04 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -111,7 +111,6 @@ bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
 int intel_dp_link_required(int pixel_clock, int bpp);
 int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				 int bw_overhead);
-int intel_dp_max_data_rate(int max_link_rate, int max_lanes);
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
 bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state);
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index c685b64bb7810..b4d4bb90126e9 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1299,7 +1299,7 @@ intel_dp_mst_mode_valid_ctx(struct drm_connector *connector,
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
-	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
+	max_rate = drm_dp_max_dprx_data_rate(max_link_clock, max_lanes);
 	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
 
 	ret = drm_modeset_lock(&mgr->base.lock, ctx);
-- 
2.39.2

