* [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work
@ 2021-08-18 18:10 Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions Jani Nikula
                   ` (19 more replies)
  0 siblings, 20 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

Start enabling DP 2.0 features. The enabling is not complete, but it's a good
start, and it should not conflict with any existing functionality.

Jani Nikula (17):
  drm/dp: add DP 2.0 UHBR link rate and bw code conversions
  drm/dp: use more of the extended receiver cap
  drm/dp: add LTTPR DP 2.0 DPCD addresses
  drm/dp: add helper for extracting adjust 128b/132b TX FFE preset
  drm/i915/dp: use actual link rate values in struct link_config_limits
  drm/i915/dp: read sink UHBR rates
  drm/i915/dg2: add TRANS_DP2_CTL register definition
  drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode
  drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW
  drm/i915/dg2: add DG2 UHBR source rates
  drm/i915/dp: add max data rate calculation for UHBR rates
  drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates
  drm/i915/dp: select 128b/132b channel encoding for UHBR rates
  drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0
  drm/i915/dg2: use 128b/132b transcoder DDI mode
  drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH,LOW} for 128b/132b
  drm/i915/dg2: update link training for 128b/132b

 drivers/gpu/drm/drm_dp_helper.c               |  42 ++++++-
 drivers/gpu/drm/i915/display/intel_ddi.c      |  61 +++++++---
 drivers/gpu/drm/i915/display/intel_dp.c       | 109 ++++++++++++++----
 drivers/gpu/drm/i915/display/intel_dp.h       |   4 +-
 .../drm/i915/display/intel_dp_link_training.c |  99 +++++++++++-----
 drivers/gpu/drm/i915/display/intel_dp_mst.c   |  17 ++-
 drivers/gpu/drm/i915/i915_reg.h               |  25 +++-
 include/drm/drm_dp_helper.h                   |   6 +
 8 files changed, 289 insertions(+), 74 deletions(-)

-- 
2.20.1



* [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-19 16:51   ` Ville Syrjälä
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 02/17] drm/dp: use more of the extended receiver cap Jani Nikula
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala, dri-devel

The bw code equals link_rate / 0.27 Gbps only for the 8b/10b link rates; the
DP 2.0 UHBR bw codes no longer follow that linear rule. Handle the UHBR rates
as special cases, though this is not pretty.

Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/drm_dp_helper.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
index 6d0f2c447f3b..9b2a2961fca8 100644
--- a/drivers/gpu/drm/drm_dp_helper.c
+++ b/drivers/gpu/drm/drm_dp_helper.c
@@ -207,15 +207,33 @@ EXPORT_SYMBOL(drm_dp_lttpr_link_train_channel_eq_delay);
 
 u8 drm_dp_link_rate_to_bw_code(int link_rate)
 {
-	/* Spec says link_bw = link_rate / 0.27Gbps */
-	return link_rate / 27000;
+	switch (link_rate) {
+	case 1000000:
+		return DP_LINK_BW_10;
+	case 1350000:
+		return DP_LINK_BW_13_5;
+	case 2000000:
+		return DP_LINK_BW_20;
+	default:
+		/* Spec says link_bw = link_rate / 0.27Gbps */
+		return link_rate / 27000;
+	}
 }
 EXPORT_SYMBOL(drm_dp_link_rate_to_bw_code);
 
 int drm_dp_bw_code_to_link_rate(u8 link_bw)
 {
-	/* Spec says link_rate = link_bw * 0.27Gbps */
-	return link_bw * 27000;
+	switch (link_bw) {
+	case DP_LINK_BW_10:
+		return 1000000;
+	case DP_LINK_BW_13_5:
+		return 1350000;
+	case DP_LINK_BW_20:
+		return 2000000;
+	default:
+		/* Spec says link_rate = link_bw * 0.27Gbps */
+		return link_bw * 27000;
+	}
 }
 EXPORT_SYMBOL(drm_dp_bw_code_to_link_rate);
 
-- 
2.20.1
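
[Editor's note: a minimal usage sketch of the two conversions above. The
function below is illustrative only; the DP_LINK_BW_* defines and the helpers
are the ones from drm_dp_helper.]

	static void dp_bw_code_example(void)
	{
		u8 bw;
		int rate;

		/* 8b/10b rates keep the linear link_rate / 27000 mapping */
		bw = drm_dp_link_rate_to_bw_code(810000);	/* HBR3: 810000 / 27000 = 30 */

		/* UHBR rates are special cased; the linear rule would yield 50 here */
		bw = drm_dp_link_rate_to_bw_code(1350000);	/* DP_LINK_BW_13_5 */

		/* and back: UHBR20 maps to 2000000, i.e. 20 Gbps in units of 10 kbps */
		rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_20);
	}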



* [Intel-gfx] [PATCH 02/17] drm/dp: use more of the extended receiver cap
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 03/17] drm/dp: add LTTPR DP 2.0 DPCD addresses Jani Nikula
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala, dri-devel

Extend the use of the extended receiver cap at 0x2200 to also cover
MAIN_LINK_CHANNEL_CODING_CAP at 0x2206, in case an implementation hides the
DP 2.0 128b/132b channel encoding cap in the legacy receiver cap field.

Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/drm_dp_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
index 9b2a2961fca8..9389f92cb944 100644
--- a/drivers/gpu/drm/drm_dp_helper.c
+++ b/drivers/gpu/drm/drm_dp_helper.c
@@ -608,7 +608,7 @@ static u8 drm_dp_downstream_port_count(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
 static int drm_dp_read_extended_dpcd_caps(struct drm_dp_aux *aux,
 					  u8 dpcd[DP_RECEIVER_CAP_SIZE])
 {
-	u8 dpcd_ext[6];
+	u8 dpcd_ext[DP_MAIN_LINK_CHANNEL_CODING + 1];
 	int ret;
 
 	/*
-- 
2.20.1
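
[Editor's note: a short sketch of what the one-line change above enables,
assuming the caller looks at a cached receiver cap block such as
intel_dp->dpcd[]; the helper name is hypothetical.]

	static bool sink_supports_128b132b(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
	{
		/*
		 * With the extended read covering DP_MAIN_LINK_CHANNEL_CODING
		 * (0x2206 in the extended field), the 128b/132b cap bit is seen
		 * even if the legacy cap block at 0x0000 hides it.
		 */
		return dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_128B132B;
	}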



* [Intel-gfx] [PATCH 03/17] drm/dp: add LTTPR DP 2.0 DPCD addresses
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 02/17] drm/dp: use more of the extended receiver cap Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 04/17] drm/dp: add helper for extracting adjust 128b/132b TX FFE preset Jani Nikula
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala, dri-devel

DP 2.0 brings some new DPCD addresses for PHY repeaters.

Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 include/drm/drm_dp_helper.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
index 1d5b3dbb6e56..f3a61341011d 100644
--- a/include/drm/drm_dp_helper.h
+++ b/include/drm/drm_dp_helper.h
@@ -1319,6 +1319,10 @@ struct drm_panel;
 #define DP_MAX_LANE_COUNT_PHY_REPEATER			    0xf0004 /* 1.4a */
 #define DP_Repeater_FEC_CAPABILITY			    0xf0004 /* 1.4 */
 #define DP_PHY_REPEATER_EXTENDED_WAIT_TIMEOUT		    0xf0005 /* 1.4a */
+#define DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER	    0xf0006 /* 2.0 */
+# define DP_PHY_REPEATER_128B132B_SUPPORTED		    (1 << 0)
+/* See DP_128B132B_SUPPORTED_LINK_RATES for values */
+#define DP_PHY_REPEATER_128B132B_RATES			    0xf0007 /* 2.0 */
 
 enum drm_dp_phy {
 	DP_PHY_DPRX,
-- 
2.20.1
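
[Editor's note: a sketch of how the new LTTPR addresses might be read over
AUX; the wrapper function is hypothetical, only the DPCD defines come from
this patch.]

	static u8 lttpr_128b132b_rates(struct drm_dp_aux *aux)
	{
		u8 coding = 0, rates = 0;

		drm_dp_dpcd_readb(aux, DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER, &coding);
		if (!(coding & DP_PHY_REPEATER_128B132B_SUPPORTED))
			return 0;

		/* Same bit layout as DP_128B132B_SUPPORTED_LINK_RATES (DP_UHBR10 etc.) */
		drm_dp_dpcd_readb(aux, DP_PHY_REPEATER_128B132B_RATES, &rates);

		return rates;
	}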



* [Intel-gfx] [PATCH 04/17] drm/dp: add helper for extracting adjust 128b/132b TX FFE preset
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (2 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 03/17] drm/dp: add LTTPR DP 2.0 DPCD addresses Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-19 17:30   ` Ville Syrjälä
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 05/17] drm/i915/dp: use actual link rate values in struct link_config_limits Jani Nikula
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala, dri-devel

The DP 2.0 128b/132b channel coding uses TX FFE presets instead of vswing and
pre-emphasis. Add a helper for extracting the TX FFE preset requested by the
sink from the link status, along the lines of the existing adjust request
helpers.

Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/drm_dp_helper.c | 14 ++++++++++++++
 include/drm/drm_dp_helper.h     |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
index 9389f92cb944..2843238a78e6 100644
--- a/drivers/gpu/drm/drm_dp_helper.c
+++ b/drivers/gpu/drm/drm_dp_helper.c
@@ -130,6 +130,20 @@ u8 drm_dp_get_adjust_request_pre_emphasis(const u8 link_status[DP_LINK_STATUS_SI
 }
 EXPORT_SYMBOL(drm_dp_get_adjust_request_pre_emphasis);
 
+/* DP 2.0 128b/132b */
+u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
+				   int lane)
+{
+	int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
+	int s = ((lane & 1) ?
+		 DP_ADJUST_TX_FFE_PRESET_LANE1_SHIFT :
+		 DP_ADJUST_TX_FFE_PRESET_LANE0_SHIFT);
+	u8 l = dp_link_status(link_status, i);
+
+	return (l >> s) & 0xf;
+}
+EXPORT_SYMBOL(drm_dp_get_adjust_tx_ffe_preset);
+
 u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
 					 unsigned int lane)
 {
diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
index f3a61341011d..3ee0b3ffb8a5 100644
--- a/include/drm/drm_dp_helper.h
+++ b/include/drm/drm_dp_helper.h
@@ -1494,6 +1494,8 @@ u8 drm_dp_get_adjust_request_voltage(const u8 link_status[DP_LINK_STATUS_SIZE],
 				     int lane);
 u8 drm_dp_get_adjust_request_pre_emphasis(const u8 link_status[DP_LINK_STATUS_SIZE],
 					  int lane);
+u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
+				   int lane);
 u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
 					 unsigned int lane);
 
-- 
2.20.1
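
[Editor's note: a minimal usage sketch of the new helper, mirroring how patch
17 of this series consumes it; the wrapper function name is hypothetical.]

	static u8 max_requested_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
					      int lane_count)
	{
		u8 tx_ffe = 0;
		int lane;

		/* The sink requests a TX FFE preset per lane in the adjust request bytes */
		for (lane = 0; lane < lane_count; lane++)
			tx_ffe = max(tx_ffe, drm_dp_get_adjust_tx_ffe_preset(link_status, lane));

		return tx_ffe;
	}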



* [Intel-gfx] [PATCH 05/17] drm/i915/dp: use actual link rate values in struct link_config_limits
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (3 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 04/17] drm/dp: add helper for extracting adjust 128b/132b TX FFE preset Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-19 17:34   ` Ville Syrjälä
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 06/17] drm/i915/dp: read sink UHBR rates Jani Nikula
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

The MST code uses actual link rates in the limits struct, while the DP
code in general uses indexes to the ->common_rates[] array. Fix the
confusion by using actual link rate values everywhere. This is a better
abstraction than some obscure index.

Rename the struct members while at it to ensure all the places are
covered.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c     | 30 ++++++++++++---------
 drivers/gpu/drm/i915/display/intel_dp.h     |  2 +-
 drivers/gpu/drm/i915/display/intel_dp_mst.c |  6 ++---
 3 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 75d4ebc66941..d273b3848785 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1044,7 +1044,8 @@ intel_dp_adjust_compliance_config(struct intel_dp *intel_dp,
 						    intel_dp->num_common_rates,
 						    intel_dp->compliance.test_link_rate);
 			if (index >= 0)
-				limits->min_clock = limits->max_clock = index;
+				limits->min_rate = limits->max_rate =
+					intel_dp->compliance.test_link_rate;
 			limits->min_lane_count = limits->max_lane_count =
 				intel_dp->compliance.test_lane_count;
 		}
@@ -1058,8 +1059,8 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 				  const struct link_config_limits *limits)
 {
 	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
-	int bpp, clock, lane_count;
-	int mode_rate, link_clock, link_avail;
+	int bpp, i, lane_count;
+	int mode_rate, link_rate, link_avail;
 
 	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
 		int output_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);
@@ -1067,18 +1068,22 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
 						   output_bpp);
 
-		for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
+		for (i = 0; i < intel_dp->num_common_rates; i++) {
+			link_rate = intel_dp->common_rates[i];
+			if (link_rate < limits->min_rate ||
+			    link_rate > limits->max_rate)
+				continue;
+
 			for (lane_count = limits->min_lane_count;
 			     lane_count <= limits->max_lane_count;
 			     lane_count <<= 1) {
-				link_clock = intel_dp->common_rates[clock];
-				link_avail = intel_dp_max_data_rate(link_clock,
+				link_avail = intel_dp_max_data_rate(link_rate,
 								    lane_count);
 
 				if (mode_rate <= link_avail) {
 					pipe_config->lane_count = lane_count;
 					pipe_config->pipe_bpp = bpp;
-					pipe_config->port_clock = link_clock;
+					pipe_config->port_clock = link_rate;
 
 					return 0;
 				}
@@ -1212,7 +1217,7 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 	 * with DSC enabled for the requested mode.
 	 */
 	pipe_config->pipe_bpp = pipe_bpp;
-	pipe_config->port_clock = intel_dp->common_rates[limits->max_clock];
+	pipe_config->port_clock = limits->max_rate;
 	pipe_config->lane_count = limits->max_lane_count;
 
 	if (intel_dp_is_edp(intel_dp)) {
@@ -1321,8 +1326,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 	/* No common link rates between source and sink */
 	drm_WARN_ON(encoder->base.dev, common_len <= 0);
 
-	limits.min_clock = 0;
-	limits.max_clock = common_len - 1;
+	limits.min_rate = intel_dp->common_rates[0];
+	limits.max_rate = intel_dp->common_rates[common_len - 1];
 
 	limits.min_lane_count = 1;
 	limits.max_lane_count = intel_dp_max_lane_count(intel_dp);
@@ -1340,15 +1345,14 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
 		 * values correspond to the native resolution of the panel.
 		 */
 		limits.min_lane_count = limits.max_lane_count;
-		limits.min_clock = limits.max_clock;
+		limits.min_rate = limits.max_rate;
 	}
 
 	intel_dp_adjust_compliance_config(intel_dp, pipe_config, &limits);
 
 	drm_dbg_kms(&i915->drm, "DP link computation with max lane count %i "
 		    "max rate %d max bpp %d pixel clock %iKHz\n",
-		    limits.max_lane_count,
-		    intel_dp->common_rates[limits.max_clock],
+		    limits.max_lane_count, limits.max_rate,
 		    limits.max_bpp, adjusted_mode->crtc_clock);
 
 	if ((adjusted_mode->crtc_clock > i915->max_dotclk_freq ||
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 680631b5b437..1345d588fc6d 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -26,7 +26,7 @@ struct intel_dp;
 struct intel_encoder;
 
 struct link_config_limits {
-	int min_clock, max_clock;
+	int min_rate, max_rate;
 	int min_lane_count, max_lane_count;
 	int min_bpp, max_bpp;
 };
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 9859c0334ebc..d104441344c0 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -61,7 +61,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
 	int bpp, slots = -EINVAL;
 
 	crtc_state->lane_count = limits->max_lane_count;
-	crtc_state->port_clock = limits->max_clock;
+	crtc_state->port_clock = limits->max_rate;
 
 	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
 		crtc_state->pipe_bpp = bpp;
@@ -131,8 +131,8 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
 	 * for MST we always configure max link bw - the spec doesn't
 	 * seem to suggest we should do otherwise.
 	 */
-	limits.min_clock =
-	limits.max_clock = intel_dp_max_link_rate(intel_dp);
+	limits.min_rate =
+	limits.max_rate = intel_dp_max_link_rate(intel_dp);
 
 	limits.min_lane_count =
 	limits.max_lane_count = intel_dp_max_lane_count(intel_dp);
-- 
2.20.1



* [Intel-gfx] [PATCH 06/17] drm/i915/dp: read sink UHBR rates
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (4 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 05/17] drm/i915/dp: use actual link rate values in struct link_config_limits Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-19 17:45   ` Ville Syrjälä
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 07/17] drm/i915/dg2: add TRANS_DP2_CTL register definition Jani Nikula
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

See if the sink supports DP 2.0 128b/132b channel encoding, and update the
sink rates accordingly.

FIXME: Also take LTTPR 128b/132b into account.

Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index d273b3848785..079b5b37b85a 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -141,6 +141,24 @@ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
 		intel_dp->sink_rates[i] = dp_rates[i];
 	}
 
+	/*
+	 * Sink rates for 128b/132b. If set, sink should support all 8b/10b
+	 * rates and 10 Gbps.
+	 */
+	if (intel_dp->dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_128B132B) {
+		u8 uhbr_rates = 0;
+
+		drm_dp_dpcd_readb(&intel_dp->aux,
+				  DP_128B132B_SUPPORTED_LINK_RATES, &uhbr_rates);
+
+		if (uhbr_rates & DP_UHBR10)
+			intel_dp->sink_rates[i++] = 1000000;
+		if (uhbr_rates & DP_UHBR13_5)
+			intel_dp->sink_rates[i++] = 1350000;
+		if (uhbr_rates & DP_UHBR20)
+			intel_dp->sink_rates[i++] = 2000000;
+	}
+
 	intel_dp->num_sink_rates = i;
 }
 
-- 
2.20.1
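
[Editor's note: with this change, a sink whose 8b/10b cap tops out at HBR3 and
which sets DP_UHBR10/DP_UHBR13_5/DP_UHBR20 ends up with a sink rate list along
these lines (values in the driver's usual 10 kbps units, kept in increasing
order); the exact 8b/10b entries depend on the sink's caps.]

	static const int example_sink_rates[] = {
		162000, 270000, 540000, 810000,		/* 8b/10b rates */
		1000000, 1350000, 2000000,		/* 128b/132b UHBR rates */
	};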



* [Intel-gfx] [PATCH 07/17] drm/i915/dg2: add TRANS_DP2_CTL register definition
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (5 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 06/17] drm/i915/dp: read sink UHBR rates Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 08/17] drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode Jani Nikula
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

This register controls the DP 2.0 datapath.

Bspec: 69967
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/i915_reg.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 72dd3a6d205d..f9d4e908bf93 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -9099,6 +9099,15 @@ enum {
 #define  TRANS_DP_HSYNC_ACTIVE_LOW	0
 #define  TRANS_DP_SYNC_MASK	(3 << 3)
 
+#define _TRANS_DP2_CTL_A			0x600a0
+#define _TRANS_DP2_CTL_B			0x610a0
+#define _TRANS_DP2_CTL_C			0x620a0
+#define _TRANS_DP2_CTL_D			0x630a0
+#define TRANS_DP2_CTL(trans)			_MMIO_TRANS(trans, _TRANS_DP2_CTL_A, _TRANS_DP2_CTL_B)
+#define  TRANS_DP2_128B132B_CHANNEL_CODING	REG_BIT(31)
+#define  TRANS_DP2_PANEL_REPLAY_ENABLE		REG_BIT(30)
+#define  TRANS_DP2_DEBUG_ENABLE			REG_BIT(23)
+
 /* SNB eDP training params */
 /* SNB A-stepping */
 #define  EDP_LINK_TRAIN_400MV_0DB_SNB_A		(0x38 << 22)
-- 
2.20.1



* [Intel-gfx] [PATCH 08/17] drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (6 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 07/17] drm/i915/dg2: add TRANS_DP2_CTL register definition Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 09/17] drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW Jani Nikula
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

Unfortunately, the DP 2.0 128b/132b DDI mode select value in
TRANS_DDI_FUNC_CTL is the same value that means FDI on older platforms. Since
we have to deal with both meanings in the same code, for different platforms,
clarify the macro name so we don't forget.

Bspec: 50493
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_ddi.c | 6 +++---
 drivers/gpu/drm/i915/i915_reg.h          | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
index 1ef7a65feb66..203a54f905f6 100644
--- a/drivers/gpu/drm/i915/display/intel_ddi.c
+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
@@ -488,7 +488,7 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
 		if (crtc_state->hdmi_high_tmds_clock_ratio)
 			temp |= TRANS_DDI_HIGH_TMDS_CHAR_RATE;
 	} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_ANALOG)) {
-		temp |= TRANS_DDI_MODE_SELECT_FDI;
+		temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
 		temp |= (crtc_state->fdi_lanes - 1) << 1;
 	} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) {
 		temp |= TRANS_DDI_MODE_SELECT_DP_MST;
@@ -678,7 +678,7 @@ bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector)
 		ret = false;
 		break;
 
-	case TRANS_DDI_MODE_SELECT_FDI:
+	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
 		ret = type == DRM_MODE_CONNECTOR_VGA;
 		break;
 
@@ -3557,7 +3557,7 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
 		pipe_config->output_types |= BIT(INTEL_OUTPUT_HDMI);
 		pipe_config->lane_count = 4;
 		break;
-	case TRANS_DDI_MODE_SELECT_FDI:
+	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
 		pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
 		break;
 	case TRANS_DDI_MODE_SELECT_DP_SST:
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index f9d4e908bf93..18083ae8a877 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -10167,7 +10167,7 @@ enum skl_power_gate {
 #define  TRANS_DDI_MODE_SELECT_DVI	(1 << 24)
 #define  TRANS_DDI_MODE_SELECT_DP_SST	(2 << 24)
 #define  TRANS_DDI_MODE_SELECT_DP_MST	(3 << 24)
-#define  TRANS_DDI_MODE_SELECT_FDI	(4 << 24)
+#define  TRANS_DDI_MODE_SELECT_FDI_OR_128B132B	(4 << 24)
 #define  TRANS_DDI_BPC_MASK		(7 << 20)
 #define  TRANS_DDI_BPC_8		(0 << 20)
 #define  TRANS_DDI_BPC_10		(1 << 20)
-- 
2.20.1



* [Intel-gfx] [PATCH 09/17] drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (7 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 08/17] drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 10/17] drm/i915/dg2: add DG2 UHBR source rates Jani Nikula
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

Add the registers for specifying the upper (TRANS_DP2_VFREQHIGH) and lower
(TRANS_DP2_VFREQLOW) 24 bits of the DP 2.0 pixel clock frequency in Hz.

Bspec: 53326
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/i915_reg.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 18083ae8a877..f377044fb9ac 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -9108,6 +9108,20 @@ enum {
 #define  TRANS_DP2_PANEL_REPLAY_ENABLE		REG_BIT(30)
 #define  TRANS_DP2_DEBUG_ENABLE			REG_BIT(23)
 
+#define _TRANS_DP2_VFREQHIGH_A			0x600a4
+#define _TRANS_DP2_VFREQHIGH_B			0x610a4
+#define _TRANS_DP2_VFREQHIGH_C			0x620a4
+#define _TRANS_DP2_VFREQHIGH_D			0x630a4
+#define TRANS_DP2_VFREQHIGH(trans)		_MMIO_TRANS(trans, _TRANS_DP2_VFREQHIGH_A, _TRANS_DP2_VFREQHIGH_B)
+#define  TRANS_DP2_VFREQ_PIXEL_CLOCK_MASK	REG_GENMASK(31, 8)
+#define  TRANS_DP2_VFREQ_PIXEL_CLOCK(clk_hz)	REG_FIELD_PREP(TRANS_DP2_VFREQ_PIXEL_CLOCK_MASK, (clk_hz))
+
+#define _TRANS_DP2_VFREQLOW_A			0x600a8
+#define _TRANS_DP2_VFREQLOW_B			0x610a8
+#define _TRANS_DP2_VFREQLOW_C			0x620a8
+#define _TRANS_DP2_VFREQLOW_D			0x630a8
+#define TRANS_DP2_VFREQLOW(trans)		_MMIO_TRANS(trans, _TRANS_DP2_VFREQLOW_A, _TRANS_DP2_VFREQLOW_B)
+
 /* SNB eDP training params */
 /* SNB A-stepping */
 #define  EDP_LINK_TRAIN_400MV_0DB_SNB_A		(0x38 << 22)
-- 
2.20.1
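
[Editor's note: a sketch of how these registers are meant to be programmed,
anticipating patch 16 of this series; i915, cpu_transcoder and adjusted_mode
are assumed from the surrounding context. The 48-bit pixel clock in Hz is
split into the upper 24 bits (VFREQHIGH) and the lower 24 bits (VFREQLOW).]

	u64 pixel_clock_hz = KHz(adjusted_mode->crtc_clock);

	intel_de_write(i915, TRANS_DP2_VFREQHIGH(cpu_transcoder),
		       TRANS_DP2_VFREQ_PIXEL_CLOCK(pixel_clock_hz >> 24));
	intel_de_write(i915, TRANS_DP2_VFREQLOW(cpu_transcoder),
		       TRANS_DP2_VFREQ_PIXEL_CLOCK(pixel_clock_hz & 0xffffff));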



* [Intel-gfx] [PATCH 10/17] drm/i915/dg2: add DG2 UHBR source rates
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (8 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 09/17] drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 11/17] drm/i915/dp: add max data rate calculation for UHBR rates Jani Nikula
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

DG2 supports DP 2.0 UHBR rates and 128b/132b channel encoding. Add the UHBR10
and UHBR13.5 source rates for DG2; eDP remains capped at HBR3.

Bspec: 53657, 54034
Acked-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 079b5b37b85a..38d69e3d55ff 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -240,6 +240,11 @@ bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
 		 encoder->port != PORT_A);
 }
 
+static int dg2_max_source_rate(struct intel_dp *intel_dp)
+{
+	return intel_dp_is_edp(intel_dp) ? 810000 : 1350000;
+}
+
 static int icl_max_source_rate(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
@@ -266,7 +271,8 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
 {
 	/* The values must be in increasing order */
 	static const int icl_rates[] = {
-		162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000
+		162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000,
+		1000000, 1350000,
 	};
 	static const int bxt_rates[] = {
 		162000, 216000, 243000, 270000, 324000, 432000, 540000
@@ -293,6 +299,8 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
 	if (DISPLAY_VER(dev_priv) >= 11) {
 		source_rates = icl_rates;
 		size = ARRAY_SIZE(icl_rates);
+		if (IS_DG2(dev_priv))
+			max_rate = dg2_max_source_rate(intel_dp);
 		if (IS_JSL_EHL(dev_priv))
 			max_rate = ehl_max_source_rate(intel_dp);
 		else
-- 
2.20.1



* [Intel-gfx] [PATCH 11/17] drm/i915/dp: add max data rate calculation for UHBR rates
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (9 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 10/17] drm/i915/dg2: add DG2 UHBR source rates Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 12/17] drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates Jani Nikula
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

DP 2.0 UHBR link rates always use 128b/132b channel encoding, which has
a different data bandwidth efficiency from 8b/10b. The computation is
slightly convoluted due to the units we use; this is all explained in
the added comment.

v2: Clarified comment (Manasi)

Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 51 ++++++++++++++++++++++---
 drivers/gpu/drm/i915/display/intel_dp.h |  2 +-
 2 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 38d69e3d55ff..8e42c30b459c 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -210,6 +210,10 @@ int intel_dp_max_lane_count(struct intel_dp *intel_dp)
 	return intel_dp->max_link_lane_count;
 }
 
+/*
+ * The required data bandwidth for a mode with given pixel clock and bpp. This
+ * is the required net bandwidth independent of the data bandwidth efficiency.
+ */
 int
 intel_dp_link_required(int pixel_clock, int bpp)
 {
@@ -217,16 +221,51 @@ intel_dp_link_required(int pixel_clock, int bpp)
 	return DIV_ROUND_UP(pixel_clock * bpp, 8);
 }
 
+/*
+ * Given a link rate and lanes, get the data bandwidth.
+ *
+ * Data bandwidth is the actual payload rate, which depends on the data
+ * bandwidth efficiency and the link rate.
+ *
+ * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
+ * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
+ * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
+ * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
+ * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
+ * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
+ *
+ * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
+ * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
+ * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
+ * does not match the symbol clock, the port clock (not even if you think in
+ * terms of a byte clock), nor the data bandwidth. It only matches the link bit
+ * rate in units of 10000 bps.
+ */
 int
-intel_dp_max_data_rate(int max_link_clock, int max_lanes)
+intel_dp_max_data_rate(int max_link_rate, int max_lanes)
 {
-	/* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
-	 * link rate that is generally expressed in Gbps. Since, 8 bits of data
-	 * is transmitted every LS_Clk per lane, there is no need to account for
-	 * the channel encoding that is done in the PHY layer here.
+	if (max_link_rate >= 1000000) {
+		/*
+		 * UHBR rates always use 128b/132b channel encoding, and have
+		 * 96.71% data bandwidth efficiency. Consider max_link_rate the
+		 * link bit rate in units of 10000 bps.
+		 */
+		int max_link_rate_kbps = max_link_rate * 10;
+		max_link_rate_kbps = DIV_ROUND_CLOSEST_ULL(max_link_rate_kbps * 9671, 10000);
+		max_link_rate = max_link_rate_kbps / 8;
+	}
+
+	/*
+	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
+	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
+	 * out to be a nop by coincidence, and can be skipped:
+	 *
+	 *	int max_link_rate_kbps = max_link_rate * 10;
+	 *	max_link_rate_kbps = DIV_ROUND_CLOSEST_ULL(max_link_rate_kbps * 8, 10);
+	 *	max_link_rate = max_link_rate_kbps / 8;
 	 */
 
-	return max_link_clock * max_lanes;
+	return max_link_rate * max_lanes;
 }
 
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 1345d588fc6d..ae0f776bffab 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -88,7 +88,7 @@ bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp);
 
 bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp);
 int intel_dp_link_required(int pixel_clock, int bpp);
-int intel_dp_max_data_rate(int max_link_clock, int max_lanes);
+int intel_dp_max_data_rate(int max_link_rate, int max_lanes);
 bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp);
 bool intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
 			    const struct drm_connector_state *conn_state);
-- 
2.20.1
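
[Editor's note: a worked example of the UHBR branch above, for a UHBR10 x4
link, using 64-bit intermediates since the kbps * 9671 product does not fit
in 32 bits; the numbers match the 1208875 kBps figure in the comment.]

	u64 link_rate_kbps = 1000000ULL * 10;	/* UHBR10: 10,000,000 kbps = 10 Gbps */
	u64 lane_kBps = DIV_ROUND_CLOSEST_ULL(link_rate_kbps * 9671, 10000) / 8;
						/* 9,671,000 kbps -> 1,208,875 kBps per lane */
	u64 total_kBps = lane_kBps * 4;		/* 4,835,500 kBps for the x4 link */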



* [Intel-gfx] [PATCH 12/17] drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (10 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 11/17] drm/i915/dp: add max data rate calculation for UHBR rates Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates Jani Nikula
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

128b/132b channel encoding has separate TPS1 and TPS2, although the DPCD
register values coincide with 8b/10b TPS1 and TPS2 values. Use 128b/132b
TPS2 for channel equalization.

Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp_link_training.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
index 053a3c2f7267..031c753fca56 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
@@ -602,9 +602,9 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
 }
 
 /*
- * Pick training pattern for channel equalization. Training pattern 4 for HBR3
- * or for 1.4 devices that support it, training Pattern 3 for HBR2
- * or 1.2 devices that support it, Training Pattern 2 otherwise.
+ * Pick Training Pattern Sequence (TPS) for channel equalization. 128b/132b TPS2
+ * for UHBR+, TPS4 for HBR3 or for 1.4 devices that support it, TPS3 for HBR2 or
+ * 1.2 devices that support it, TPS2 otherwise.
  */
 static u32 intel_dp_training_pattern(struct intel_dp *intel_dp,
 				     const struct intel_crtc_state *crtc_state,
@@ -612,6 +612,10 @@ static u32 intel_dp_training_pattern(struct intel_dp *intel_dp,
 {
 	bool source_tps3, sink_tps3, source_tps4, sink_tps4;
 
+	/* UHBR+ use separate 128b/132b TPS2 */
+	if (crtc_state->port_clock >= 1000000)
+		return DP_TRAINING_PATTERN_2;
+
 	/*
 	 * Intel platforms that support HBR3 also support TPS4. It is mandatory
 	 * for all downstream devices that support HBR3. There are no known eDP
-- 
2.20.1



* [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (11 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 12/17] drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-19 17:49   ` Ville Syrjälä
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 14/17] drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0 Jani Nikula
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

UHBR rates and 128b/132b channel encoding go hand in hand.

Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp_link_training.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
index 031c753fca56..01f0adc585d0 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
@@ -495,7 +495,8 @@ intel_dp_prepare_link_train(struct intel_dp *intel_dp,
 				  &rate_select, 1);
 
 	link_config[0] = crtc_state->vrr.enable ? DP_MSA_TIMING_PAR_IGNORE_EN : 0;
-	link_config[1] = DP_SET_ANSI_8B10B;
+	link_config[1] = crtc_state->port_clock > 1000000 ?
+		DP_SET_ANSI_128B132B : DP_SET_ANSI_8B10B;
 	drm_dp_dpcd_write(&intel_dp->aux, DP_DOWNSPREAD_CTRL, link_config, 2);
 
 	intel_dp->DP |= DP_PORT_EN;
-- 
2.20.1



* [Intel-gfx] [PATCH 14/17] drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (12 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 15/17] drm/i915/dg2: use 128b/132b transcoder DDI mode Jani Nikula
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

Set the DP 2.0 128b/132b channel encoding for UHBR rates.

Bspec: 54128
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_ddi.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
index 203a54f905f6..8b8f5d679b72 100644
--- a/drivers/gpu/drm/i915/display/intel_ddi.c
+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
@@ -407,6 +407,20 @@ static u32 bdw_trans_port_sync_master_select(enum transcoder master_transcoder)
 		return master_transcoder + 1;
 }
 
+static void
+intel_ddi_config_transcoder_dp2(struct intel_encoder *encoder,
+				const struct intel_crtc_state *crtc_state)
+{
+	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
+	u32 val = 0;
+
+	if (crtc_state->port_clock > 1000000)
+		val = TRANS_DP2_128B132B_CHANNEL_CODING;
+
+	intel_de_write(i915, TRANS_DP2_CTL(cpu_transcoder), val);
+}
+
 /*
  * Returns the TRANS_DDI_FUNC_CTL value based on CRTC state.
  *
@@ -2375,7 +2389,8 @@ static void dg2_ddi_pre_enable_dp(struct intel_atomic_state *state,
 	 */
 	intel_ddi_enable_pipe_clock(encoder, crtc_state);
 
-	/* 5.b Not relevant to i915 for now */
+	/* 5.b Configure transcoder for DP 2.0 128b/132b */
+	intel_ddi_config_transcoder_dp2(encoder, crtc_state);
 
 	/*
 	 * 5.c Configure TRANS_DDI_FUNC_CTL DDI Select, DDI Mode Select & MST
-- 
2.20.1



* [Intel-gfx] [PATCH 15/17] drm/i915/dg2: use 128b/132b transcoder DDI mode
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (13 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 14/17] drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0 Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-19 17:54   ` Ville Syrjälä
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 16/17] drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH, LOW} for 128b/132b Jani Nikula
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

128b/132b has a separate transcoder DDI mode, which also requires the
MST transport select to be set. Note that we'll use DP MST also for
single-stream 128b/132b.

Having the FDI and 128b/132b modes share the register mode value
complicates things a bit.

Bspec: 50493
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_ddi.c | 27 ++++++++++++++++++------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
index 8b8f5d679b72..1ee817348bf5 100644
--- a/drivers/gpu/drm/i915/display/intel_ddi.c
+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
@@ -505,7 +505,10 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
 		temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
 		temp |= (crtc_state->fdi_lanes - 1) << 1;
 	} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) {
-		temp |= TRANS_DDI_MODE_SELECT_DP_MST;
+		if (crtc_state->port_clock > 1000000)
+			temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
+		else
+			temp |= TRANS_DDI_MODE_SELECT_DP_MST;
 		temp |= DDI_PORT_WIDTH(crtc_state->lane_count);
 
 		if (DISPLAY_VER(dev_priv) >= 12) {
@@ -693,7 +696,12 @@ bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector)
 		break;
 
 	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
-		ret = type == DRM_MODE_CONNECTOR_VGA;
+		if (IS_DG2(dev_priv))
+			/* 128b/132b */
+			ret = false;
+		else
+			/* FDI */
+			ret = type == DRM_MODE_CONNECTOR_VGA;
 		break;
 
 	default:
@@ -780,8 +788,9 @@ static void intel_ddi_get_encoder_pipes(struct intel_encoder *encoder,
 		if ((tmp & port_mask) != ddi_select)
 			continue;
 
-		if ((tmp & TRANS_DDI_MODE_SELECT_MASK) ==
-		    TRANS_DDI_MODE_SELECT_DP_MST)
+		if ((tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_DP_MST ||
+		    (IS_DG2(dev_priv) &&
+		     (tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_FDI_OR_128B132B))
 			mst_pipe_mask |= BIT(p);
 
 		*pipe_mask |= BIT(p);
@@ -3572,9 +3581,6 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
 		pipe_config->output_types |= BIT(INTEL_OUTPUT_HDMI);
 		pipe_config->lane_count = 4;
 		break;
-	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
-		pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
-		break;
 	case TRANS_DDI_MODE_SELECT_DP_SST:
 		if (encoder->type == INTEL_OUTPUT_EDP)
 			pipe_config->output_types |= BIT(INTEL_OUTPUT_EDP);
@@ -3603,6 +3609,13 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
 			pipe_config->infoframes.enable |=
 				intel_hdmi_infoframes_enabled(encoder, pipe_config);
 		break;
+	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
+		if (!IS_DG2(dev_priv)) {
+			/* FDI */
+			pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
+			break;
+		}
+		fallthrough; /* 128b/132b */
 	case TRANS_DDI_MODE_SELECT_DP_MST:
 		pipe_config->output_types |= BIT(INTEL_OUTPUT_DP_MST);
 		pipe_config->lane_count =
-- 
2.20.1



* [Intel-gfx] [PATCH 16/17] drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH, LOW} for 128b/132b
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (14 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 15/17] drm/i915/dg2: use 128b/132b transcoder DDI mode Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 17/17] drm/i915/dg2: update link training " Jani Nikula
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

There's a new register pair, TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW, where
the pixel clock needs to be set in Hz for 128b/132b mode.

Bspec: 54128
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index d104441344c0..a003cc98312b 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -550,6 +550,17 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
 
 	clear_act_sent(encoder, pipe_config);
 
+	if (pipe_config->port_clock > 1000000) {
+		const struct drm_display_mode *adjusted_mode =
+			&pipe_config->hw.adjusted_mode;
+		u64 crtc_clock_hz = KHz(adjusted_mode->crtc_clock);
+
+		intel_de_write(dev_priv, TRANS_DP2_VFREQHIGH(pipe_config->cpu_transcoder),
+			       TRANS_DP2_VFREQ_PIXEL_CLOCK(crtc_clock_hz >> 24));
+		intel_de_write(dev_priv, TRANS_DP2_VFREQLOW(pipe_config->cpu_transcoder),
+			       TRANS_DP2_VFREQ_PIXEL_CLOCK(crtc_clock_hz & 0xffffff));
+	}
+
 	intel_ddi_enable_transcoder_func(encoder, pipe_config);
 
 	intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(trans), 0,
-- 
2.20.1



* [Intel-gfx] [PATCH 17/17] drm/i915/dg2: update link training for 128b/132b
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (15 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 16/17] drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH, LOW} for 128b/132b Jani Nikula
@ 2021-08-18 18:10 ` Jani Nikula
  2021-08-18 20:19 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/dp: dp 2.0 enabling prep work Patchwork
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-18 18:10 UTC (permalink / raw)
  To: intel-gfx; +Cc: jani.nikula, manasi.d.navare, ville.syrjala

The 128b/132b channel coding link training uses TX FFE presets, which are more
straightforward than the 8b/10b vswing and pre-emphasis levels. Update the
link training code to handle them.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>
---
 drivers/gpu/drm/i915/display/intel_ddi.c      | 13 ++-
 .../drm/i915/display/intel_dp_link_training.c | 86 +++++++++++++------
 2 files changed, 70 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
index 1ee817348bf5..e32d2e39f4d5 100644
--- a/drivers/gpu/drm/i915/display/intel_ddi.c
+++ b/drivers/gpu/drm/i915/display/intel_ddi.c
@@ -1397,11 +1397,16 @@ static int translate_signal_level(struct intel_dp *intel_dp,
 static int intel_ddi_dp_level(struct intel_dp *intel_dp,
 			      const struct intel_crtc_state *crtc_state)
 {
-	u8 train_set = intel_dp->train_set[0];
-	u8 signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
-					DP_TRAIN_PRE_EMPHASIS_MASK);
+	if (crtc_state->port_clock >= 1000000) {
+		/* FIXME: We'll want independent presets for each lane. */
+		return intel_dp->train_set[0] & DP_TX_FFE_PRESET_VALUE_MASK;
+	} else {
+		u8 train_set = intel_dp->train_set[0];
+		u8 signal_levels = train_set & (DP_TRAIN_VOLTAGE_SWING_MASK |
+						DP_TRAIN_PRE_EMPHASIS_MASK);
 
-	return translate_signal_level(intel_dp, signal_levels);
+		return translate_signal_level(intel_dp, signal_levels);
+	}
 }
 
 static void
diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
index 01f0adc585d0..1a83ec03ac67 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
@@ -301,6 +301,24 @@ static u8 intel_dp_phy_preemph_max(struct intel_dp *intel_dp,
 	return preemph_max;
 }
 
+static void intel_dp_128b132b_adjust_train(struct intel_dp* intel_dp,
+					   const struct intel_crtc_state *crtc_state,
+					   const u8 link_status[DP_LINK_STATUS_SIZE])
+{
+	int lane;
+	u8 tx_ffe = 0;
+
+	/*
+	 * FIXME: We'll want independent presets for each lane. See also
+	 * intel_ddi_dp_level() and intel_snps_phy_ddi_vswing_sequence().
+	 */
+	for (lane = 0; lane < crtc_state->lane_count; lane++)
+		tx_ffe = max(tx_ffe, drm_dp_get_adjust_tx_ffe_preset(link_status, lane));
+
+	for (lane = 0; lane < crtc_state->lane_count; lane++)
+		intel_dp->train_set[lane] = tx_ffe;
+}
+
 void
 intel_dp_get_adjust_train(struct intel_dp *intel_dp,
 			  const struct intel_crtc_state *crtc_state,
@@ -313,6 +331,11 @@ intel_dp_get_adjust_train(struct intel_dp *intel_dp,
 	u8 voltage_max;
 	u8 preemph_max;
 
+	if (crtc_state->port_clock > 1000000) {
+		intel_dp_128b132b_adjust_train(intel_dp, crtc_state, link_status);
+		return;
+	}
+
 	for (lane = 0; lane < crtc_state->lane_count; lane++) {
 		v = max(v, drm_dp_get_adjust_request_voltage(link_status, lane));
 		p = max(p, drm_dp_get_adjust_request_pre_emphasis(link_status, lane));
@@ -402,14 +425,21 @@ void intel_dp_set_signal_levels(struct intel_dp *intel_dp,
 	u8 train_set = intel_dp->train_set[0];
 	char phy_name[10];
 
-	drm_dbg_kms(&dev_priv->drm, "Using vswing level %d%s, pre-emphasis level %d%s, at %s\n",
-		    train_set & DP_TRAIN_VOLTAGE_SWING_MASK,
-		    train_set & DP_TRAIN_MAX_SWING_REACHED ? " (max)" : "",
-		    (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) >>
-		    DP_TRAIN_PRE_EMPHASIS_SHIFT,
-		    train_set & DP_TRAIN_MAX_PRE_EMPHASIS_REACHED ?
-		    " (max)" : "",
-		    intel_dp_phy_name(dp_phy, phy_name, sizeof(phy_name)));
+	if (crtc_state->port_clock > 1000000) {
+		/* FIXME: We'll want independent presets for each lane. */
+		drm_dbg_kms(&dev_priv->drm, "%s: Using 128b/132b TX FFE preset %u\n",
+			    intel_dp_phy_name(dp_phy, phy_name, sizeof(phy_name)),
+			    train_set & DP_TX_FFE_PRESET_VALUE_MASK);
+	} else {
+		drm_dbg_kms(&dev_priv->drm, "%s: Using 8b/10b vswing level %d%s, pre-emphasis level %d%s\n",
+			    intel_dp_phy_name(dp_phy, phy_name, sizeof(phy_name)),
+			    train_set & DP_TRAIN_VOLTAGE_SWING_MASK,
+			    train_set & DP_TRAIN_MAX_SWING_REACHED ? " (max)" : "",
+			    (train_set & DP_TRAIN_PRE_EMPHASIS_MASK) >>
+			    DP_TRAIN_PRE_EMPHASIS_SHIFT,
+			    train_set & DP_TRAIN_MAX_PRE_EMPHASIS_REACHED ?
+			    " (max)" : "");
+	}
 
 	if (intel_dp_phy_is_downstream_of_source(intel_dp, dp_phy))
 		intel_dp->set_signal_levels(intel_dp, crtc_state);
@@ -565,18 +595,21 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
 			return true;
 		}
 
-		if (voltage_tries == 5) {
-			drm_dbg_kms(&i915->drm,
-				    "Same voltage tried 5 times\n");
-			return false;
-		}
+		/* FIXME: 128b/132b needs better abstractions here. */
+		if (crtc_state->port_clock < 1000000) {
+			if (voltage_tries == 5) {
+				drm_dbg_kms(&i915->drm,
+					    "Same voltage tried 5 times\n");
+				return false;
+			}
 
-		if (max_vswing_reached) {
-			drm_dbg_kms(&i915->drm, "Max Voltage Swing reached\n");
-			return false;
-		}
+			if (max_vswing_reached) {
+				drm_dbg_kms(&i915->drm, "Max Voltage Swing reached\n");
+				return false;
+			}
 
-		voltage = intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK;
+			voltage = intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK;
+		}
 
 		/* Update training set as requested by target */
 		intel_dp_get_adjust_train(intel_dp, crtc_state, dp_phy,
@@ -587,14 +620,17 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp,
 			return false;
 		}
 
-		if ((intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK) ==
-		    voltage)
-			++voltage_tries;
-		else
-			voltage_tries = 1;
+		/* FIXME: 128b/132b needs better abstractions here. */
+		if (crtc_state->port_clock < 1000000) {
+			if ((intel_dp->train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK) ==
+			    voltage)
+				++voltage_tries;
+			else
+				voltage_tries = 1;
 
-		if (intel_dp_link_max_vswing_reached(intel_dp, crtc_state))
-			max_vswing_reached = true;
+			if (intel_dp_link_max_vswing_reached(intel_dp, crtc_state))
+				max_vswing_reached = true;
+		}
 
 	}
 	drm_err(&i915->drm,
-- 
2.20.1



* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/dp: dp 2.0 enabling prep work
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (16 preceding siblings ...)
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 17/17] drm/i915/dg2: update link training " Jani Nikula
@ 2021-08-18 20:19 ` Patchwork
  2021-08-18 20:21 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
  2021-08-18 20:49 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  19 siblings, 0 replies; 28+ messages in thread
From: Patchwork @ 2021-08-18 20:19 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/dp: dp 2.0 enabling prep work
URL   : https://patchwork.freedesktop.org/series/93800/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
098494421cf3 drm/dp: add DP 2.0 UHBR link rate and bw code conversions
15972d4824f6 drm/dp: use more of the extended receiver cap
176cd12c2a2e drm/dp: add LTTPR DP 2.0 DPCD addresses
348fff506827 drm/dp: add helper for extracting adjust 128b/132b TX FFE preset
2c801bbdc46b drm/i915/dp: use actual link rate values in struct link_config_limits
-:26: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should be avoided
#26: FILE: drivers/gpu/drm/i915/display/intel_dp.c:1047:
+				limits->min_rate = limits->max_rate =

total: 0 errors, 0 warnings, 1 checks, 106 lines checked
4d03932add3c drm/i915/dp: read sink UHBR rates
bd393efa3f8d drm/i915/dg2: add TRANS_DP2_CTL register definition
-:24: WARNING:LONG_LINE: line length of 102 exceeds 100 columns
#24: FILE: drivers/gpu/drm/i915/i915_reg.h:9106:
+#define TRANS_DP2_CTL(trans)			_MMIO_TRANS(trans, _TRANS_DP2_CTL_A, _TRANS_DP2_CTL_B)

total: 0 errors, 1 warnings, 0 checks, 15 lines checked
2ae1f5cd9112 drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode
09dd0b1c53f5 drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW
-:25: WARNING:LONG_LINE: line length of 114 exceeds 100 columns
#25: FILE: drivers/gpu/drm/i915/i915_reg.h:9115:
+#define TRANS_DP2_VFREQHIGH(trans)		_MMIO_TRANS(trans, _TRANS_DP2_VFREQHIGH_A, _TRANS_DP2_VFREQHIGH_B)

-:27: WARNING:LONG_LINE: line length of 106 exceeds 100 columns
#27: FILE: drivers/gpu/drm/i915/i915_reg.h:9117:
+#define  TRANS_DP2_VFREQ_PIXEL_CLOCK(clk_hz)	REG_FIELD_PREP(TRANS_DP2_VFREQ_PIXEL_CLOCK_MASK, (clk_hz))

-:33: WARNING:LONG_LINE: line length of 112 exceeds 100 columns
#33: FILE: drivers/gpu/drm/i915/i915_reg.h:9123:
+#define TRANS_DP2_VFREQLOW(trans)		_MMIO_TRANS(trans, _TRANS_DP2_VFREQLOW_A, _TRANS_DP2_VFREQLOW_B)

total: 0 errors, 3 warnings, 0 checks, 20 lines checked
310bd160b5df drm/i915/dg2: add DG2 UHBR source rates
9cf6be09b318 drm/i915/dp: add max data rate calculation for UHBR rates
-:70: WARNING:LINE_SPACING: Missing a blank line after declarations
#70: FILE: drivers/gpu/drm/i915/display/intel_dp.c:254:
+		int max_link_rate_kbps = max_link_rate * 10;
+		max_link_rate_kbps = DIV_ROUND_CLOSEST_ULL(max_link_rate_kbps * 9671, 10000);

total: 0 errors, 1 warnings, 0 checks, 75 lines checked
7d3f86063b7b drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates
76465b4d3708 drm/i915/dp: select 128b/132b channel encoding for UHBR rates
753442fe1e74 drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0
73b7a835f7b3 drm/i915/dg2: use 128b/132b transcoder DDI mode
44526b8ce97b drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH, LOW} for 128b/132b
f619e6bfa217 drm/i915/dg2: update link training for 128b/132b
-:25: WARNING:UNNECESSARY_ELSE: else is not generally useful after a break or return
#25: FILE: drivers/gpu/drm/i915/display/intel_ddi.c:1403:
+		return intel_dp->train_set[0] & DP_TX_FFE_PRESET_VALUE_MASK;
+	} else {

-:44: ERROR:POINTER_LOCATION: "foo* bar" should be "foo *bar"
#44: FILE: drivers/gpu/drm/i915/display/intel_dp_link_training.c:304:
+static void intel_dp_128b132b_adjust_train(struct intel_dp* intel_dp,

total: 1 errors, 1 warnings, 0 checks, 139 lines checked



^ permalink raw reply	[flat|nested] 28+ messages in thread

* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/dp: dp 2.0 enabling prep work
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (17 preceding siblings ...)
  2021-08-18 20:19 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/dp: dp 2.0 enabling prep work Patchwork
@ 2021-08-18 20:21 ` Patchwork
  2021-08-18 20:49 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  19 siblings, 0 replies; 28+ messages in thread
From: Patchwork @ 2021-08-18 20:21 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

== Series Details ==

Series: drm/i915/dp: dp 2.0 enabling prep work
URL   : https://patchwork.freedesktop.org/series/93800/
State : warning

== Summary ==

$ dim sparse --fast origin/drm-tip
Sparse version: v0.6.2
Fast mode used, each commit won't be checked separately.
-
+./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgv_sriovmsg.h:312:49: error: static assertion failed: "amd_sriov_msg_vf2pf_info must be 1 KB"
+./drivers/gpu/drm/amd/amdgpu/../amdgpu/amdgv_sriovmsg.h:316:49: error: static assertion failed: "amd_sriov_msg_pf2vf_info must be 1 KB"
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1345:25: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1345:25:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1345:25:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1346:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1346:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1346:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1405:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1405:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:1405:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:354:16: error: incompatible types in comparison expression (different type sizes):
+drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:354:16:    unsigned long *
+drivers/gpu/drm/amd/amdgpu/amdgpu_device.c:354:16:    unsigned long long *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:275:25: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:275:25:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:275:25:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:276:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:276:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:276:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:330:17: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:330:17:    struct dma_fence *
+drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c:330:17:    struct dma_fence [noderef] __rcu *
+drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.h:123:51: error: marked inline, but without a definition
+drivers/gpu/drm/amd/amdgpu/amdgv_sriovmsg.h:312:49: error: static assertion failed: "amd_sriov_msg_vf2pf_info must be 1 KB"
+drivers/gpu/drm/amd/amdgpu/amdgv



^ permalink raw reply	[flat|nested] 28+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/dp: dp 2.0 enabling prep work
  2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
                   ` (18 preceding siblings ...)
  2021-08-18 20:21 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2021-08-18 20:49 ` Patchwork
  19 siblings, 0 replies; 28+ messages in thread
From: Patchwork @ 2021-08-18 20:49 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 4059 bytes --]

== Series Details ==

Series: drm/i915/dp: dp 2.0 enabling prep work
URL   : https://patchwork.freedesktop.org/series/93800/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10497 -> Patchwork_20850
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/index.html

Known issues
------------

  Here are the changes found in Patchwork_20850 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_basic@cs-gfx:
    - fi-rkl-guc:         NOTRUN -> [SKIP][1] ([fdo#109315]) +17 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/fi-rkl-guc/igt@amdgpu/amd_basic@cs-gfx.html
    - fi-kbl-soraka:      NOTRUN -> [SKIP][2] ([fdo#109271]) +6 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/fi-kbl-soraka/igt@amdgpu/amd_basic@cs-gfx.html

  * igt@runner@aborted:
    - fi-bdw-5557u:       NOTRUN -> [FAIL][3] ([i915#1602] / [i915#2029])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/fi-bdw-5557u/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@core_hotunplug@unbind-rebind:
    - fi-rkl-guc:         [DMESG-WARN][4] ([i915#3925]) -> [PASS][5]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10497/fi-rkl-guc/igt@core_hotunplug@unbind-rebind.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/fi-rkl-guc/igt@core_hotunplug@unbind-rebind.html

  
#### Warnings ####

  * igt@i915_pm_rpm@basic-rte:
    - fi-kbl-guc:         [FAIL][6] ([i915#579]) -> [SKIP][7] ([fdo#109271])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10497/fi-kbl-guc/igt@i915_pm_rpm@basic-rte.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/fi-kbl-guc/igt@i915_pm_rpm@basic-rte.html

  
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109315]: https://bugs.freedesktop.org/show_bug.cgi?id=109315
  [i915#1602]: https://gitlab.freedesktop.org/drm/intel/issues/1602
  [i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029
  [i915#3925]: https://gitlab.freedesktop.org/drm/intel/issues/3925
  [i915#579]: https://gitlab.freedesktop.org/drm/intel/issues/579


Participating hosts (34 -> 33)
------------------------------

  Missing    (1): fi-bsw-cyan 


Build changes
-------------

  * Linux: CI_DRM_10497 -> Patchwork_20850

  CI-20190529: 20190529
  CI_DRM_10497: c50a8ea72915f1e3972d011690a423bafbee7a58 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6178: 146260200f9a6d4536e48a195e2ab49a07d4f0c1 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_20850: f619e6bfa217e2f33dfb31e0da1262cae8c9209f @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

f619e6bfa217 drm/i915/dg2: update link training for 128b/132b
44526b8ce97b drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH, LOW} for 128b/132b
73b7a835f7b3 drm/i915/dg2: use 128b/132b transcoder DDI mode
753442fe1e74 drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0
76465b4d3708 drm/i915/dp: select 128b/132b channel encoding for UHBR rates
7d3f86063b7b drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates
9cf6be09b318 drm/i915/dp: add max data rate calculation for UHBR rates
310bd160b5df drm/i915/dg2: add DG2 UHBR source rates
09dd0b1c53f5 drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW
2ae1f5cd9112 drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode
bd393efa3f8d drm/i915/dg2: add TRANS_DP2_CTL register definition
4d03932add3c drm/i915/dp: read sink UHBR rates
2c801bbdc46b drm/i915/dp: use actual link rate values in struct link_config_limits
348fff506827 drm/dp: add helper for extracting adjust 128b/132b TX FFE preset
176cd12c2a2e drm/dp: add LTTPR DP 2.0 DPCD addresses
15972d4824f6 drm/dp: use more of the extended receiver cap
098494421cf3 drm/dp: add DP 2.0 UHBR link rate and bw code conversions

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_20850/index.html


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions Jani Nikula
@ 2021-08-19 16:51   ` Ville Syrjälä
  0 siblings, 0 replies; 28+ messages in thread
From: Ville Syrjälä @ 2021-08-19 16:51 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx, manasi.d.navare, dri-devel

On Wed, Aug 18, 2021 at 09:10:36PM +0300, Jani Nikula wrote:
> The bw code equals link_rate / 0.27 Gbps only for 8b/10b link
> rates. Handle DP 2.0 UHBR rates as special cases, though this is not
> pretty.

Ugh. So if I'm reading the spec right, the behaviour of this
register now changes dynamically depending on the state of
some other bit in another register?
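
Purely as an illustration (not part of the patch), a minimal round-trip
sketch using the values quoted below; the 8b/10b codes really are just
link_rate / 27000, while the three UHBR codes are fixed DPCD values, so
only the lookup form converts both ways consistently (plain division
would turn 1350000 into 50 rather than DP_LINK_BW_13_5):

	int rate = 1350000;	/* UHBR13.5 in the 10 kb/s units used here */
	u8 bw_code = drm_dp_link_rate_to_bw_code(rate);

	WARN_ON(drm_dp_bw_code_to_link_rate(bw_code) != rate);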

> 
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
>  drivers/gpu/drm/drm_dp_helper.c | 26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
> index 6d0f2c447f3b..9b2a2961fca8 100644
> --- a/drivers/gpu/drm/drm_dp_helper.c
> +++ b/drivers/gpu/drm/drm_dp_helper.c
> @@ -207,15 +207,33 @@ EXPORT_SYMBOL(drm_dp_lttpr_link_train_channel_eq_delay);
>  
>  u8 drm_dp_link_rate_to_bw_code(int link_rate)
>  {
> -	/* Spec says link_bw = link_rate / 0.27Gbps */
> -	return link_rate / 27000;
> +	switch (link_rate) {
> +	case 1000000:
> +		return DP_LINK_BW_10;
> +	case 1350000:
> +		return DP_LINK_BW_13_5;
> +	case 2000000:
> +		return DP_LINK_BW_20;
> +	default:
> +		/* Spec says link_bw = link_rate / 0.27Gbps */
> +		return link_rate / 27000;
> +	}
>  }
>  EXPORT_SYMBOL(drm_dp_link_rate_to_bw_code);
>  
>  int drm_dp_bw_code_to_link_rate(u8 link_bw)
>  {
> -	/* Spec says link_rate = link_bw * 0.27Gbps */
> -	return link_bw * 27000;
> +	switch (link_bw) {
> +	case DP_LINK_BW_10:
> +		return 1000000;
> +	case DP_LINK_BW_13_5:
> +		return 1350000;
> +	case DP_LINK_BW_20:
> +		return 2000000;
> +	default:
> +		/* Spec says link_rate = link_bw * 0.27Gbps */
> +		return link_bw * 27000;
> +	}
>  }
>  EXPORT_SYMBOL(drm_dp_bw_code_to_link_rate);
>  
> -- 
> 2.20.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 04/17] drm/dp: add helper for extracting adjust 128b/132b TX FFE preset
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 04/17] drm/dp: add helper for extracting adjust 128b/132b TX FFE preset Jani Nikula
@ 2021-08-19 17:30   ` Ville Syrjälä
  0 siblings, 0 replies; 28+ messages in thread
From: Ville Syrjälä @ 2021-08-19 17:30 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx, manasi.d.navare, dri-devel

On Wed, Aug 18, 2021 at 09:10:39PM +0300, Jani Nikula wrote:
> The DP 2.0 128b/132b channel coding uses TX FFE presets instead of
> vswing and pre-emphasis.
> 
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
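
As a usage sketch only (the caller below is hypothetical; intel_dp,
crtc_state and i915 are stand-ins for whatever context the real caller
has), the helper is meant to be called per lane on a
DP_LINK_STATUS_SIZE status block, mirroring the existing voltage swing
and pre-emphasis helpers:

	u8 link_status[DP_LINK_STATUS_SIZE];
	int lane;

	if (drm_dp_dpcd_read_link_status(&intel_dp->aux, link_status) < 0)
		return;

	for (lane = 0; lane < crtc_state->lane_count; lane++) {
		/* 4-bit TX FFE preset (0-15) requested by the sink for this lane */
		u8 preset = drm_dp_get_adjust_tx_ffe_preset(link_status, lane);

		drm_dbg_kms(&i915->drm, "lane %d TX FFE preset %d\n", lane, preset);
	}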

> ---
>  drivers/gpu/drm/drm_dp_helper.c | 14 ++++++++++++++
>  include/drm/drm_dp_helper.h     |  2 ++
>  2 files changed, 16 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
> index 9389f92cb944..2843238a78e6 100644
> --- a/drivers/gpu/drm/drm_dp_helper.c
> +++ b/drivers/gpu/drm/drm_dp_helper.c
> @@ -130,6 +130,20 @@ u8 drm_dp_get_adjust_request_pre_emphasis(const u8 link_status[DP_LINK_STATUS_SI
>  }
>  EXPORT_SYMBOL(drm_dp_get_adjust_request_pre_emphasis);
>  
> +/* DP 2.0 128b/132b */
> +u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
> +				   int lane)
> +{
> +	int i = DP_ADJUST_REQUEST_LANE0_1 + (lane >> 1);
> +	int s = ((lane & 1) ?
> +		 DP_ADJUST_TX_FFE_PRESET_LANE1_SHIFT :
> +		 DP_ADJUST_TX_FFE_PRESET_LANE0_SHIFT);
> +	u8 l = dp_link_status(link_status, i);
> +
> +	return (l >> s) & 0xf;
> +}
> +EXPORT_SYMBOL(drm_dp_get_adjust_tx_ffe_preset);
> +
>  u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
>  					 unsigned int lane)
>  {
> diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
> index f3a61341011d..3ee0b3ffb8a5 100644
> --- a/include/drm/drm_dp_helper.h
> +++ b/include/drm/drm_dp_helper.h
> @@ -1494,6 +1494,8 @@ u8 drm_dp_get_adjust_request_voltage(const u8 link_status[DP_LINK_STATUS_SIZE],
>  				     int lane);
>  u8 drm_dp_get_adjust_request_pre_emphasis(const u8 link_status[DP_LINK_STATUS_SIZE],
>  					  int lane);
> +u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
> +				   int lane);
>  u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
>  					 unsigned int lane);
>  
> -- 
> 2.20.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 05/17] drm/i915/dp: use actual link rate values in struct link_config_limits
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 05/17] drm/i915/dp: use actual link rate values in struct link_config_limits Jani Nikula
@ 2021-08-19 17:34   ` Ville Syrjälä
  0 siblings, 0 replies; 28+ messages in thread
From: Ville Syrjälä @ 2021-08-19 17:34 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx, manasi.d.navare

On Wed, Aug 18, 2021 at 09:10:40PM +0300, Jani Nikula wrote:
> The MST code uses actual link rates in the limits struct, while the DP
> code in general uses indexes to the ->common_rates[] array. Fix the
> confusion by using actual link rate values everywhere. This is a better
> abstraction than some obscure index.
> 
> Rename the struct members while at it to ensure all the places are
> covered.
> 
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 30 ++++++++++++---------
>  drivers/gpu/drm/i915/display/intel_dp.h     |  2 +-
>  drivers/gpu/drm/i915/display/intel_dp_mst.c |  6 ++---
>  3 files changed, 21 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 75d4ebc66941..d273b3848785 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1044,7 +1044,8 @@ intel_dp_adjust_compliance_config(struct intel_dp *intel_dp,
>  						    intel_dp->num_common_rates,
>  						    intel_dp->compliance.test_link_rate);
>  			if (index >= 0)
> -				limits->min_clock = limits->max_clock = index;
> +				limits->min_rate = limits->max_rate =
> +					intel_dp->compliance.test_link_rate;
>  			limits->min_lane_count = limits->max_lane_count =
>  				intel_dp->compliance.test_lane_count;
>  		}
> @@ -1058,8 +1059,8 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  				  const struct link_config_limits *limits)
>  {
>  	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
> -	int bpp, clock, lane_count;
> -	int mode_rate, link_clock, link_avail;
> +	int bpp, i, lane_count;
> +	int mode_rate, link_rate, link_avail;
>  
>  	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
>  		int output_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);
> @@ -1067,18 +1068,22 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>  						   output_bpp);
>  
> -		for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
> +		for (i = 0; i < intel_dp->num_common_rates; i++) {
> +			link_rate = intel_dp->common_rates[i];
> +			if (link_rate < limits->min_rate ||
> +			    link_rate > limits->max_rate)
> +				continue;
> +
>  			for (lane_count = limits->min_lane_count;
>  			     lane_count <= limits->max_lane_count;
>  			     lane_count <<= 1) {
> -				link_clock = intel_dp->common_rates[clock];
> -				link_avail = intel_dp_max_data_rate(link_clock,
> +				link_avail = intel_dp_max_data_rate(link_rate,
>  								    lane_count);
>  
>  				if (mode_rate <= link_avail) {
>  					pipe_config->lane_count = lane_count;
>  					pipe_config->pipe_bpp = bpp;
> -					pipe_config->port_clock = link_clock;
> +					pipe_config->port_clock = link_rate;
>  
>  					return 0;
>  				}
> @@ -1212,7 +1217,7 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  	 * with DSC enabled for the requested mode.
>  	 */
>  	pipe_config->pipe_bpp = pipe_bpp;
> -	pipe_config->port_clock = intel_dp->common_rates[limits->max_clock];
> +	pipe_config->port_clock = limits->max_rate;
>  	pipe_config->lane_count = limits->max_lane_count;
>  
>  	if (intel_dp_is_edp(intel_dp)) {
> @@ -1321,8 +1326,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>  	/* No common link rates between source and sink */
>  	drm_WARN_ON(encoder->base.dev, common_len <= 0);
>  
> -	limits.min_clock = 0;
> -	limits.max_clock = common_len - 1;
> +	limits.min_rate = intel_dp->common_rates[0];
> +	limits.max_rate = intel_dp->common_rates[common_len - 1];
>  
>  	limits.min_lane_count = 1;
>  	limits.max_lane_count = intel_dp_max_lane_count(intel_dp);
> @@ -1340,15 +1345,14 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
>  		 * values correspond to the native resolution of the panel.
>  		 */
>  		limits.min_lane_count = limits.max_lane_count;
> -		limits.min_clock = limits.max_clock;
> +		limits.min_rate = limits.max_rate;
>  	}
>  
>  	intel_dp_adjust_compliance_config(intel_dp, pipe_config, &limits);
>  
>  	drm_dbg_kms(&i915->drm, "DP link computation with max lane count %i "
>  		    "max rate %d max bpp %d pixel clock %iKHz\n",
> -		    limits.max_lane_count,
> -		    intel_dp->common_rates[limits.max_clock],
> +		    limits.max_lane_count, limits.max_rate,
>  		    limits.max_bpp, adjusted_mode->crtc_clock);
>  
>  	if ((adjusted_mode->crtc_clock > i915->max_dotclk_freq ||
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 680631b5b437..1345d588fc6d 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -26,7 +26,7 @@ struct intel_dp;
>  struct intel_encoder;
>  
>  struct link_config_limits {
> -	int min_clock, max_clock;
> +	int min_rate, max_rate;
>  	int min_lane_count, max_lane_count;
>  	int min_bpp, max_bpp;
>  };
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 9859c0334ebc..d104441344c0 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -61,7 +61,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
>  	int bpp, slots = -EINVAL;
>  
>  	crtc_state->lane_count = limits->max_lane_count;
> -	crtc_state->port_clock = limits->max_clock;
> +	crtc_state->port_clock = limits->max_rate;
>  
>  	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
>  		crtc_state->pipe_bpp = bpp;
> @@ -131,8 +131,8 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
>  	 * for MST we always configure max link bw - the spec doesn't
>  	 * seem to suggest we should do otherwise.
>  	 */
> -	limits.min_clock =
> -	limits.max_clock = intel_dp_max_link_rate(intel_dp);
> +	limits.min_rate =
> +	limits.max_rate = intel_dp_max_link_rate(intel_dp);
>  
>  	limits.min_lane_count =
>  	limits.max_lane_count = intel_dp_max_lane_count(intel_dp);
> -- 
> 2.20.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 06/17] drm/i915/dp: read sink UHBR rates
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 06/17] drm/i915/dp: read sink UHBR rates Jani Nikula
@ 2021-08-19 17:45   ` Ville Syrjälä
  0 siblings, 0 replies; 28+ messages in thread
From: Ville Syrjälä @ 2021-08-19 17:45 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx, manasi.d.navare

On Wed, Aug 18, 2021 at 09:10:41PM +0300, Jani Nikula wrote:
> See if sink supports DP 2.0 128b/132b channel encoding, and update sink
> rates accordingly.
> 
> FIXME: Also take LTTPR 128b/132b into account.
> 
> Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index d273b3848785..079b5b37b85a 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -141,6 +141,24 @@ static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
>  		intel_dp->sink_rates[i] = dp_rates[i];
>  	}
>  
> +	/*
> +	 * Sink rates for 128b/132b. If set, sink should support all 8b/10b
> +	 * rates and 10 Gbps.
> +	 */
> +	if (intel_dp->dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_128B132B) {
> +		u8 uhbr_rates = 0;
> +
> +		drm_dp_dpcd_readb(&intel_dp->aux,
> +				  DP_128B132B_SUPPORTED_LINK_RATES, &uhbr_rates);
> +
> +		if (uhbr_rates & DP_UHBR10)
> +			intel_dp->sink_rates[i++] = 1000000;
> +		if (uhbr_rates & DP_UHBR13_5)
> +			intel_dp->sink_rates[i++] = 1350000;
> +		if (uhbr_rates & DP_UHBR20)
> +			intel_dp->sink_rates[i++] = 2000000;

OK, so the max link rate register isn't supposed to report the
new magic UHBR BW values, it seems. That makes life a bit easier.

Maybe toss in a 
BUILD_BUG_ON(ARRAY_SIZE(sink_rates) >= ARRAY_SIZE(dp_rates)+3);
or something to remind people that we are dealing with a fixed size
array here?
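
(Presumably with the comparison the other way around, since
BUILD_BUG_ON() triggers when its condition is true; i.e. something
along the lines of

	BUILD_BUG_ON(ARRAY_SIZE(intel_dp->sink_rates) < ARRAY_SIZE(dp_rates) + 3);

so that the build only breaks if sink_rates cannot hold all the 8b/10b
rates plus the three UHBR entries.)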

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

> +	}
> +
>  	intel_dp->num_sink_rates = i;
>  }
>  
> -- 
> 2.20.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates Jani Nikula
@ 2021-08-19 17:49   ` Ville Syrjälä
  2021-08-20  6:36     ` Jani Nikula
  0 siblings, 1 reply; 28+ messages in thread
From: Ville Syrjälä @ 2021-08-19 17:49 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx, manasi.d.navare

On Wed, Aug 18, 2021 at 09:10:48PM +0300, Jani Nikula wrote:
> UHBR rates and 128b/132b channel encoding go hand in hand.
> 
> Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp_link_training.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> index 031c753fca56..01f0adc585d0 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> @@ -495,7 +495,8 @@ intel_dp_prepare_link_train(struct intel_dp *intel_dp,
>  				  &rate_select, 1);
>  
>  	link_config[0] = crtc_state->vrr.enable ? DP_MSA_TIMING_PAR_IGNORE_EN : 0;
> -	link_config[1] = DP_SET_ANSI_8B10B;
> +	link_config[1] = crtc_state->port_clock > 1000000 ?

Should this be >= ?

> +		DP_SET_ANSI_128B132B : DP_SET_ANSI_8B10B;
>  	drm_dp_dpcd_write(&intel_dp->aux, DP_DOWNSPREAD_CTRL, link_config, 2);
>  
>  	intel_dp->DP |= DP_PORT_EN;
> -- 
> 2.20.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 15/17] drm/i915/dg2: use 128b/132b transcoder DDI mode
  2021-08-18 18:10 ` [Intel-gfx] [PATCH 15/17] drm/i915/dg2: use 128b/132b transcoder DDI mode Jani Nikula
@ 2021-08-19 17:54   ` Ville Syrjälä
  0 siblings, 0 replies; 28+ messages in thread
From: Ville Syrjälä @ 2021-08-19 17:54 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx, manasi.d.navare

On Wed, Aug 18, 2021 at 09:10:50PM +0300, Jani Nikula wrote:
> 128b/132b has a separate transcoder DDI mode, which also requires the
> MST transport select to be set. Note that we'll use DP MST also for
> single-stream 128b/132b.
> 
> Having the FDI and 128b/132b modes share the register mode value
> complicates things a bit.
> 
> Bspec: 50493
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_ddi.c | 27 ++++++++++++++++++------
>  1 file changed, 20 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
> index 8b8f5d679b72..1ee817348bf5 100644
> --- a/drivers/gpu/drm/i915/display/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/display/intel_ddi.c
> @@ -505,7 +505,10 @@ intel_ddi_transcoder_func_reg_val_get(struct intel_encoder *encoder,
>  		temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
>  		temp |= (crtc_state->fdi_lanes - 1) << 1;
>  	} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST)) {
> -		temp |= TRANS_DDI_MODE_SELECT_DP_MST;
> +		if (crtc_state->port_clock > 1000000)
> +			temp |= TRANS_DDI_MODE_SELECT_FDI_OR_128B132B;
> +		else
> +			temp |= TRANS_DDI_MODE_SELECT_DP_MST;
>  		temp |= DDI_PORT_WIDTH(crtc_state->lane_count);
>  
>  		if (DISPLAY_VER(dev_priv) >= 12) {
> @@ -693,7 +696,12 @@ bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector)
>  		break;
>  
>  	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
> -		ret = type == DRM_MODE_CONNECTOR_VGA;
> +		if (IS_DG2(dev_priv))
> +			/* 128b/132b */
> +			ret = false;

Maybe introduce HAS_FDI() or something to avoid these platform checks
all over?
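
A sketch of that suggestion, purely illustrative; the HAS_FDI() name and
its exact platform condition are assumptions here, not something this
series defines:

	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
		if (HAS_FDI(dev_priv))
			/* FDI */
			ret = type == DRM_MODE_CONNECTOR_VGA;
		else
			/* 128b/132b */
			ret = false;
		break;

which would keep the platform knowledge in a single feature check
instead of repeating IS_DG2() at every use.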

> +		else
> +			/* FDI */
> +			ret = type == DRM_MODE_CONNECTOR_VGA;
>  		break;
>  
>  	default:
> @@ -780,8 +788,9 @@ static void intel_ddi_get_encoder_pipes(struct intel_encoder *encoder,
>  		if ((tmp & port_mask) != ddi_select)
>  			continue;
>  
> -		if ((tmp & TRANS_DDI_MODE_SELECT_MASK) ==
> -		    TRANS_DDI_MODE_SELECT_DP_MST)
> +		if ((tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_DP_MST ||
> +		    (IS_DG2(dev_priv) &&
> +		     (tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_FDI_OR_128B132B))
>  			mst_pipe_mask |= BIT(p);
>  
>  		*pipe_mask |= BIT(p);
> @@ -3572,9 +3581,6 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
>  		pipe_config->output_types |= BIT(INTEL_OUTPUT_HDMI);
>  		pipe_config->lane_count = 4;
>  		break;
> -	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
> -		pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
> -		break;
>  	case TRANS_DDI_MODE_SELECT_DP_SST:
>  		if (encoder->type == INTEL_OUTPUT_EDP)
>  			pipe_config->output_types |= BIT(INTEL_OUTPUT_EDP);
> @@ -3603,6 +3609,13 @@ static void intel_ddi_read_func_ctl(struct intel_encoder *encoder,
>  			pipe_config->infoframes.enable |=
>  				intel_hdmi_infoframes_enabled(encoder, pipe_config);
>  		break;
> +	case TRANS_DDI_MODE_SELECT_FDI_OR_128B132B:
> +		if (!IS_DG2(dev_priv)) {
> +			/* FDI */
> +			pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
> +			break;
> +		}
> +		fallthrough; /* 128b/132b */
>  	case TRANS_DDI_MODE_SELECT_DP_MST:
>  		pipe_config->output_types |= BIT(INTEL_OUTPUT_DP_MST);
>  		pipe_config->lane_count =
> -- 
> 2.20.1

-- 
Ville Syrjälä
Intel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates
  2021-08-19 17:49   ` Ville Syrjälä
@ 2021-08-20  6:36     ` Jani Nikula
  0 siblings, 0 replies; 28+ messages in thread
From: Jani Nikula @ 2021-08-20  6:36 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx, manasi.d.navare

On Thu, 19 Aug 2021, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> On Wed, Aug 18, 2021 at 09:10:48PM +0300, Jani Nikula wrote:
>> UHBR rates and 128b/132b channel encoding go hand in hand.
>> 
>> Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
>> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
>> ---
>>  drivers/gpu/drm/i915/display/intel_dp_link_training.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>> 
>> diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
>> index 031c753fca56..01f0adc585d0 100644
>> --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
>> +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
>> @@ -495,7 +495,8 @@ intel_dp_prepare_link_train(struct intel_dp *intel_dp,
>>  				  &rate_select, 1);
>>  
>>  	link_config[0] = crtc_state->vrr.enable ? DP_MSA_TIMING_PAR_IGNORE_EN : 0;
>> -	link_config[1] = DP_SET_ANSI_8B10B;
>> +	link_config[1] = crtc_state->port_clock > 1000000 ?
>
> Should this be >= ?

Yes, whoops, thanks!
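
i.e. something like:

	link_config[1] = crtc_state->port_clock >= 1000000 ?
		DP_SET_ANSI_128B132B : DP_SET_ANSI_8B10B;

so that UHBR10 (1000000) itself also selects 128b/132b channel encoding.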

>
>> +		DP_SET_ANSI_128B132B : DP_SET_ANSI_8B10B;
>>  	drm_dp_dpcd_write(&intel_dp->aux, DP_DOWNSPREAD_CTRL, link_config, 2);
>>  
>>  	intel_dp->DP |= DP_PORT_EN;
>> -- 
>> 2.20.1

-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2021-08-20  6:36 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-18 18:10 [Intel-gfx] [PATCH 00/17] drm/i915/dp: dp 2.0 enabling prep work Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 01/17] drm/dp: add DP 2.0 UHBR link rate and bw code conversions Jani Nikula
2021-08-19 16:51   ` Ville Syrjälä
2021-08-18 18:10 ` [Intel-gfx] [PATCH 02/17] drm/dp: use more of the extended receiver cap Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 03/17] drm/dp: add LTTPR DP 2.0 DPCD addresses Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 04/17] drm/dp: add helper for extracting adjust 128b/132b TX FFE preset Jani Nikula
2021-08-19 17:30   ` Ville Syrjälä
2021-08-18 18:10 ` [Intel-gfx] [PATCH 05/17] drm/i915/dp: use actual link rate values in struct link_config_limits Jani Nikula
2021-08-19 17:34   ` Ville Syrjälä
2021-08-18 18:10 ` [Intel-gfx] [PATCH 06/17] drm/i915/dp: read sink UHBR rates Jani Nikula
2021-08-19 17:45   ` Ville Syrjälä
2021-08-18 18:10 ` [Intel-gfx] [PATCH 07/17] drm/i915/dg2: add TRANS_DP2_CTL register definition Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 08/17] drm/i915/dg2: add DG2+ TRANS_DDI_FUNC_CTL DP 2.0 128b/132b mode Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 09/17] drm/i915/dg2: add TRANS_DP2_VFREQHIGH and TRANS_DP2_VFREQLOW Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 10/17] drm/i915/dg2: add DG2 UHBR source rates Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 11/17] drm/i915/dp: add max data rate calculation for UHBR rates Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 12/17] drm/i915/dp: use 128b/132b TPS2 for UHBR+ link rates Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 13/17] drm/i915/dp: select 128b/132b channel encoding for UHBR rates Jani Nikula
2021-08-19 17:49   ` Ville Syrjälä
2021-08-20  6:36     ` Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 14/17] drm/i915/dg2: configure TRANS_DP2_CTL for DP 2.0 Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 15/17] drm/i915/dg2: use 128b/132b transcoder DDI mode Jani Nikula
2021-08-19 17:54   ` Ville Syrjälä
2021-08-18 18:10 ` [Intel-gfx] [PATCH 16/17] drm/i915/dg2: configure TRANS_DP2_VFREQ{HIGH, LOW} for 128b/132b Jani Nikula
2021-08-18 18:10 ` [Intel-gfx] [PATCH 17/17] drm/i915/dg2: update link training " Jani Nikula
2021-08-18 20:19 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915/dp: dp 2.0 enabling prep work Patchwork
2021-08-18 20:21 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-08-18 20:49 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
