* [PATCH 0/5] Remaining patches for upfront link training on DDI platforms
@ 2016-09-14  1:08 Manasi Navare
  2016-09-14  1:08 ` [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
                   ` (6 more replies)
  0 siblings, 7 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-14  1:08 UTC (permalink / raw)
  To: intel-gfx

This patch series includes some of the remaining patches to enable
upfront link training on DDI platforms for DP SST and MST.
They are based on some of the patches submitted earlier by
Ander and Durgadoss.

The upfront link training had to be factored out of the long pulse
handler because of deadlock issues seen in DP MST cases.
Upfront link training now takes place in intel_dp_mode_valid()
to find the maximum lane count and link rate at which the DP link
can be successfully trained. These values are used to prune
invalid modes before modeset, and modeset makes use of the upfront
lane count and link rate values.

These patches have been validated for DP SST and DP MST on DDI
platforms.

The existing link training implementation does not fall back to a
lower link rate/lane count as required by the DP spec.
This patch series implements a fallback loop that lowers the link rate
and lane count on CR and/or Channel EQ failures during link training.
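The fallback order can be sketched as a standalone helper (hypothetical name; the actual patches iterate over the common_rates array computed from source and sink capabilities):

```c
#include <assert.h>

/* Hypothetical sketch of the fallback order: try every link rate from
 * highest to lowest at the current lane count; once all rates fail,
 * halve the lane count and reset to the highest rate. Returns 0 when
 * nothing is left to try.
 */
static int next_fallback(int num_rates, int *rate_idx, int *lane_count)
{
	if (*rate_idx > 0) {
		(*rate_idx)--;			/* lower link rate first */
		return 1;
	}
	if (*lane_count > 1) {
		*lane_count >>= 1;		/* then lower lane count */
		*rate_idx = num_rates - 1;	/* back to the max rate */
		return 1;
	}
	return 0;
}
```

With three link rates and four lanes this yields nine attempts in total before training is declared failed.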

Jim Bride (1):
  drm/i915/dp/mst: Add support for upfront link training for DP MST

Manasi Navare (4):
  drm/i915: Fallback to lower link rate and lane count during link
    training
  drm/i915: Remove the link rate and lane count loop in compute config
  drm/i915: Change the placement of some static functions in intel_dp.c
  drm/i915/dp: Enable Upfront link training on DDI platforms

 drivers/gpu/drm/i915/intel_ddi.c              | 129 ++++++++-
 drivers/gpu/drm/i915/intel_dp.c               | 386 +++++++++++++++++++-------
 drivers/gpu/drm/i915/intel_dp_link_training.c |  13 +-
 drivers/gpu/drm/i915/intel_dp_mst.c           |  74 +++--
 drivers/gpu/drm/i915/intel_drv.h              |  21 +-
 5 files changed, 490 insertions(+), 133 deletions(-)

-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
@ 2016-09-14  1:08 ` Manasi Navare
  2016-09-14  8:15   ` Mika Kahola
  2016-09-14  1:08 ` [PATCH v3 2/5] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-14  1:08 UTC (permalink / raw)
  To: intel-gfx

According to the DisplayPort spec, on Clock Recovery (CR) failure
the link training sequence should fall back to a lower link rate,
then a lower lane count, until CR succeeds.
On CR success, the sequence proceeds with Channel EQ.
On Channel EQ failure, it should likewise fall back to a lower
link rate and lane count and restart the CR phase.
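The two-phase sequence reduces to the shape below (stub functions standing in for the driver's CR/EQ helpers; the boolean flags are hypothetical stand-ins for hardware outcomes):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stubs for the driver's clock recovery and channel
 * equalization phases; the flags simulate hardware outcomes.
 */
static bool cr_ok, eq_ok;

static bool clock_recovery(void)       { return cr_ok; }
static bool channel_equalization(void) { return eq_ok; }

/* Training succeeds only when CR passes and then Channel EQ passes;
 * any failure is reported to the caller, which drives the fallback
 * loop over link rate and lane count.
 */
static bool start_link_train(void)
{
	if (clock_recovery())
		return channel_equalization();
	return false;
}
```

This is the motivation for changing intel_dp_start_link_train() to return bool in this patch.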

v5:
* Reset the link rate index to the max link rate index
before lowering the lane count (Jani Nikula)
* Use the idiomatic for loop in intel_dp_link_rate_index
v4:
* Fixed the link rate fallback loop (Manasi Navare)
v3:
* Fixed some rebase issues (Mika Kahola)
v2:
* Add a helper function to return index of requested link rate
into common_rates array
* Changed the link rate fallback loop to make use
of common_rates array (Mika Kahola)
* Changed INTEL_INFO to INTEL_GEN (David Weinehall)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_ddi.c              | 112 +++++++++++++++++++++++---
 drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
 drivers/gpu/drm/i915/intel_dp_link_training.c |  12 ++-
 drivers/gpu/drm/i915/intel_drv.h              |   6 +-
 4 files changed, 131 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index 8065a5f..4d3a931 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
 	}
 }
 
-static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
 				    int link_rate, uint32_t lane_count,
-				    struct intel_shared_dpll *pll,
-				    bool link_mst)
+				    struct intel_shared_dpll *pll)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	enum port port = intel_ddi_get_encoder_port(encoder);
 
 	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
-				 link_mst);
-	if (encoder->type == INTEL_OUTPUT_EDP)
-		intel_edp_panel_on(intel_dp);
+				 false);
+
+	intel_edp_panel_on(intel_dp);
 
 	intel_ddi_clk_select(encoder, pll);
 	intel_prepare_dp_ddi_buffers(encoder);
@@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 		intel_dp_stop_link_train(intel_dp);
 }
 
+static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+				    int link_rate, uint32_t lane_count,
+				    struct intel_shared_dpll *pll,
+				    bool link_mst)
+{
+	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_shared_dpll_config tmp_pll_config;
+
+	/* Release the PLL so link training, which starts at the
+	 * highest link rate and lane count, can acquire it.
+	 */
+	tmp_pll_config = pll->config;
+	pll->funcs.disable(dev_priv, pll);
+	pll->config.crtc_mask = 0;
+
+	/* If Link Training fails, send a uevent to generate a hotplug */
+	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
+		drm_kms_helper_hotplug_event(encoder->base.dev);
+	pll->config = tmp_pll_config;
+}
+
 static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
 				      bool has_hdmi_sink,
 				      struct drm_display_mode *adjusted_mode,
@@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
 	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
 	int type = intel_encoder->type;
 
-	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
+	if (type == INTEL_OUTPUT_EDP)
+		intel_ddi_pre_enable_edp(intel_encoder,
+					crtc->config->port_clock,
+					crtc->config->lane_count,
+					crtc->config->shared_dpll);
+
+	if (type == INTEL_OUTPUT_DP)
 		intel_ddi_pre_enable_dp(intel_encoder,
 					crtc->config->port_clock,
 					crtc->config->lane_count,
 					crtc->config->shared_dpll,
 					intel_crtc_has_type(crtc->config,
 							    INTEL_OUTPUT_DP_MST));
-	}
-	if (type == INTEL_OUTPUT_HDMI) {
+
+	if (type == INTEL_OUTPUT_HDMI)
 		intel_ddi_pre_enable_hdmi(intel_encoder,
 					  crtc->config->has_hdmi_sink,
 					  &crtc->config->base.adjusted_mode,
 					  crtc->config->shared_dpll);
-	}
+
 }
 
 static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
@@ -2435,6 +2462,71 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
 	return pll;
 }
 
+bool
+intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
+		     uint8_t max_lane_count, bool link_mst)
+{
+	struct intel_connector *connector = intel_dp->attached_connector;
+	struct intel_encoder *encoder = connector->encoder;
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_shared_dpll *pll;
+	struct intel_shared_dpll_config tmp_pll_config;
+	int link_rate, max_link_rate_index, link_rate_index;
+	uint8_t lane_count;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+	bool ret = false;
+
+	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
+						   max_link_rate);
+	if (max_link_rate_index < 0) {
+		DRM_ERROR("Invalid Link Rate\n");
+		return false;
+	}
+	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
+		for (link_rate_index = max_link_rate_index;
+		     link_rate_index >= 0; link_rate_index--) {
+			link_rate = common_rates[link_rate_index];
+			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
+			if (pll == NULL) {
+				DRM_ERROR("Could not find DPLL for link "
+					  "training.\n");
+				return false;
+			}
+			tmp_pll_config = pll->config;
+			pll->funcs.enable(dev_priv, pll);
+
+			intel_dp_set_link_params(intel_dp, link_rate,
+						 lane_count, link_mst);
+
+			intel_ddi_clk_select(encoder, pll);
+			intel_prepare_dp_ddi_buffers(encoder);
+			intel_ddi_init_dp_buf_reg(encoder);
+			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+			ret = intel_dp_start_link_train(intel_dp);
+			if (ret)
+				break;
+
+			/* Disable port followed by PLL for next
+			 * retry/clean up
+			 */
+			intel_ddi_post_disable(encoder, NULL, NULL);
+			pll->funcs.disable(dev_priv, pll);
+			pll->config = tmp_pll_config;
+		}
+		if (ret) {
+			DRM_DEBUG_KMS("Link Training successful at link rate: "
+				      "%d lane:%d\n", link_rate, lane_count);
+			break;
+		}
+	}
+	intel_dp_stop_link_train(intel_dp);
+
+	if (!lane_count)
+		DRM_ERROR("Link Training Failed\n");
+
+	return ret;
+}
+
 void intel_ddi_init(struct drm_device *dev, enum port port)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 75ac62f..bb9df1e 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1443,6 +1443,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
 	return rates[len - 1];
 }
 
+int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
+			     int link_rate)
+{
+	int common_len;
+	int index;
+
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	for (index = 0; index < common_len; index++) {
+		if (link_rate == common_rates[common_len - index - 1])
+			return common_len - index - 1;
+	}
+
+	return -1;
+}
+
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
 {
 	return rate_to_index(rate, intel_dp->sink_rates);
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index c438b02..f1e08f0 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -313,9 +313,15 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
 				DP_TRAINING_PATTERN_DISABLE);
 }
 
-void
+bool
 intel_dp_start_link_train(struct intel_dp *intel_dp)
 {
-	intel_dp_link_training_clock_recovery(intel_dp);
-	intel_dp_link_training_channel_equalization(intel_dp);
+	bool ret;
+
+	if (intel_dp_link_training_clock_recovery(intel_dp)) {
+		ret = intel_dp_link_training_channel_equalization(intel_dp);
+		if (ret)
+			return true;
+	}
+	return false;
 }
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index abe7a4d..69c8051 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1160,6 +1160,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
 			 struct intel_crtc_state *pipe_config);
 void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
 uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
+bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
+			  uint8_t max_lane_count, bool link_mst);
 struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
 						  int clock);
 unsigned int intel_fb_align_height(struct drm_device *dev,
@@ -1381,7 +1383,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 void intel_dp_set_link_params(struct intel_dp *intel_dp,
 			      int link_rate, uint8_t lane_count,
 			      bool link_mst);
-void intel_dp_start_link_train(struct intel_dp *intel_dp);
+bool intel_dp_start_link_train(struct intel_dp *intel_dp);
 void intel_dp_stop_link_train(struct intel_dp *intel_dp);
 void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
 void intel_dp_encoder_reset(struct drm_encoder *encoder);
@@ -1403,6 +1405,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
 void intel_dp_mst_suspend(struct drm_device *dev);
 void intel_dp_mst_resume(struct drm_device *dev);
 int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
+			     int link_rate);
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
 void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
 void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
-- 
1.9.1


* [PATCH v3 2/5] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
  2016-09-14  1:08 ` [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
@ 2016-09-14  1:08 ` Manasi Navare
  2016-09-14  1:08 ` [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-14  1:08 UTC (permalink / raw)
  To: intel-gfx

While configuring the pipe during modeset, we should use the max
clock and max lane count and reduce the bpp until the requested
mode rate is less than or equal to the available link bandwidth.
This is required to pass DP Compliance.
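Using the bandwidth formulas already in intel_dp.c, the simplified loop can be sketched as (pick_bpp is a hypothetical name for illustration):

```c
#include <assert.h>

/* Bandwidth formulas as in intel_dp.c: the mode's demand in kB/s and
 * the 8b/10b-encoded link capacity for a given rate (kHz) and width.
 */
static int link_required(int pixel_clock, int bpp)
{
	return (pixel_clock * bpp + 7) / 8;
}

static int max_data_rate(int link_clock, int lanes)
{
	return (link_clock * lanes * 8) / 10;
}

/* Sketch of the simplified compute_config loop: fix the link at max
 * clock and max lane count and only step bpp down (24 -> 18, in steps
 * of 2 bits per channel x 3 channels) until the mode fits; -1 means
 * the requested mode cannot be supported.
 */
static int pick_bpp(int pixel_clock, int link_clock, int lanes)
{
	int bpp;

	for (bpp = 24; bpp >= 6 * 3; bpp -= 2 * 3)
		if (link_required(pixel_clock, bpp) <=
		    max_data_rate(link_clock, lanes))
			return bpp;
	return -1;
}
```

For example, a 1080p mode on a 2-lane HBR link does not fit at 24 bpp but does at 18 bpp.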

v3:
* Add Debug print if requested mode cannot be supported
during modeset (Dhinakaran Pandiyan)
v2:
* Removed the loop since we use max values of clock
and lane count (Dhinakaran Pandiyan)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index bb9df1e..07f9a49 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1567,23 +1567,18 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	for (; bpp >= 6*3; bpp -= 2*3) {
 		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
 						   bpp);
-
-		for (clock = min_clock; clock <= max_clock; clock++) {
-			for (lane_count = min_lane_count;
-				lane_count <= max_lane_count;
-				lane_count <<= 1) {
-
-				link_clock = common_rates[clock];
-				link_avail = intel_dp_max_data_rate(link_clock,
-								    lane_count);
-
-				if (mode_rate <= link_avail) {
-					goto found;
-				}
-			}
+		clock = max_clock;
+		lane_count = max_lane_count;
+		link_clock = common_rates[clock];
+		link_avail = intel_dp_max_data_rate(link_clock,
+						    lane_count);
+
+		if (mode_rate <= link_avail) {
+			goto found;
 		}
 	}
 
+	DRM_DEBUG_KMS("Requested mode rate not supported\n");
 	return false;
 
 found:
-- 
1.9.1


* [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
  2016-09-14  1:08 ` [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
  2016-09-14  1:08 ` [PATCH v3 2/5] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
@ 2016-09-14  1:08 ` Manasi Navare
  2016-09-15  7:41   ` Mika Kahola
  2016-09-14  1:08 ` [PATCH v17 4/5] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-14  1:08 UTC (permalink / raw)
  To: intel-gfx

These static helper functions are needed by the upfront link
training functions, so move them to the top of the file. Also
change the macro argument from dev to dev_priv.
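Among the moved helpers, intersect_rates() is the one the upfront code leans on; a standalone copy of its merge-intersection logic:

```c
#include <assert.h>

/* Standalone copy of the merge-intersection done by intersect_rates()
 * in intel_dp.c: both tables are ascending, so a single linear pass
 * collects the rates supported by both source and sink.
 */
static int intersect_rates(const int *src, int src_len,
			   const int *sink, int sink_len,
			   int *common)
{
	int i = 0, j = 0, k = 0;

	while (i < src_len && j < sink_len) {
		if (src[i] == sink[j]) {
			common[k++] = src[i];
			i++;
			j++;
		} else if (src[i] < sink[j]) {
			i++;
		} else {
			j++;
		}
	}
	return k;
}
```

Intersecting the SKL source table with a sink advertising the three default rates yields 162000, 270000 and 540000 kHz.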

v2:
* Dont move around functions declared in intel_drv.h (Rodrigo Vivi)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 158 ++++++++++++++++++++--------------------
 1 file changed, 79 insertions(+), 79 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 07f9a49..a319102 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -190,6 +190,81 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
 	return (max_link_clock * max_lanes * 8) / 10;
 }
 
+static int
+intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
+{
+	if (intel_dp->num_sink_rates) {
+		*sink_rates = intel_dp->sink_rates;
+		return intel_dp->num_sink_rates;
+	}
+
+	*sink_rates = default_rates;
+
+	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
+}
+
+static int
+intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
+{
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+	int size;
+
+	if (IS_BROXTON(dev_priv)) {
+		*source_rates = bxt_rates;
+		size = ARRAY_SIZE(bxt_rates);
+	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+		*source_rates = skl_rates;
+		size = ARRAY_SIZE(skl_rates);
+	} else {
+		*source_rates = default_rates;
+		size = ARRAY_SIZE(default_rates);
+	}
+
+	/* This depends on the fact that 5.4 is last value in the array */
+	if (!intel_dp_source_supports_hbr2(intel_dp))
+		size--;
+
+	return size;
+}
+
+static int intersect_rates(const int *source_rates, int source_len,
+			   const int *sink_rates, int sink_len,
+			   int *common_rates)
+{
+	int i = 0, j = 0, k = 0;
+
+	while (i < source_len && j < sink_len) {
+		if (source_rates[i] == sink_rates[j]) {
+			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
+				return k;
+			common_rates[k] = source_rates[i];
+			++k;
+			++i;
+			++j;
+		} else if (source_rates[i] < sink_rates[j]) {
+			++i;
+		} else {
+			++j;
+		}
+	}
+	return k;
+}
+
+static int intel_dp_common_rates(struct intel_dp *intel_dp,
+				 int *common_rates)
+{
+	const int *source_rates, *sink_rates;
+	int source_len, sink_len;
+
+	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+	source_len = intel_dp_source_rates(intel_dp, &source_rates);
+
+	return intersect_rates(source_rates, source_len,
+			       sink_rates, sink_len,
+			       common_rates);
+}
+
 static enum drm_mode_status
 intel_dp_mode_valid(struct drm_connector *connector,
 		    struct drm_display_mode *mode)
@@ -1256,60 +1331,22 @@ intel_dp_aux_init(struct intel_dp *intel_dp, struct intel_connector *connector)
 	intel_dp->aux.transfer = intel_dp_aux_transfer;
 }
 
-static int
-intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
-{
-	if (intel_dp->num_sink_rates) {
-		*sink_rates = intel_dp->sink_rates;
-		return intel_dp->num_sink_rates;
-	}
-
-	*sink_rates = default_rates;
-
-	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
-}
-
 bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
-	struct drm_device *dev = dig_port->base.base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
 
 	/* WaDisableHBR2:skl */
-	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
+	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
 		return false;
 
-	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || IS_BROADWELL(dev) ||
-	    (INTEL_INFO(dev)->gen >= 9))
+	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
+	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
 		return true;
 	else
 		return false;
 }
 
-static int
-intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
-{
-	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
-	struct drm_device *dev = dig_port->base.base.dev;
-	int size;
-
-	if (IS_BROXTON(dev)) {
-		*source_rates = bxt_rates;
-		size = ARRAY_SIZE(bxt_rates);
-	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
-		*source_rates = skl_rates;
-		size = ARRAY_SIZE(skl_rates);
-	} else {
-		*source_rates = default_rates;
-		size = ARRAY_SIZE(default_rates);
-	}
-
-	/* This depends on the fact that 5.4 is last value in the array */
-	if (!intel_dp_source_supports_hbr2(intel_dp))
-		size--;
-
-	return size;
-}
-
 static void
 intel_dp_set_clock(struct intel_encoder *encoder,
 		   struct intel_crtc_state *pipe_config)
@@ -1343,43 +1380,6 @@ intel_dp_set_clock(struct intel_encoder *encoder,
 	}
 }
 
-static int intersect_rates(const int *source_rates, int source_len,
-			   const int *sink_rates, int sink_len,
-			   int *common_rates)
-{
-	int i = 0, j = 0, k = 0;
-
-	while (i < source_len && j < sink_len) {
-		if (source_rates[i] == sink_rates[j]) {
-			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
-				return k;
-			common_rates[k] = source_rates[i];
-			++k;
-			++i;
-			++j;
-		} else if (source_rates[i] < sink_rates[j]) {
-			++i;
-		} else {
-			++j;
-		}
-	}
-	return k;
-}
-
-static int intel_dp_common_rates(struct intel_dp *intel_dp,
-				 int *common_rates)
-{
-	const int *source_rates, *sink_rates;
-	int source_len, sink_len;
-
-	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
-	source_len = intel_dp_source_rates(intel_dp, &source_rates);
-
-	return intersect_rates(source_rates, source_len,
-			       sink_rates, sink_len,
-			       common_rates);
-}
-
 static void snprintf_int_array(char *str, size_t len,
 			       const int *array, int nelem)
 {
-- 
1.9.1


* [PATCH v17 4/5] drm/i915/dp: Enable Upfront link training on DDI platforms
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
                   ` (2 preceding siblings ...)
  2016-09-14  1:08 ` [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
@ 2016-09-14  1:08 ` Manasi Navare
  2016-09-14  1:08 ` [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-14  1:08 UTC (permalink / raw)
  To: intel-gfx

To support USB Type-C alternate DP mode, the display driver needs to
know the number of lanes required by the DP panel as well as the
number of lanes that can be supported by the Type-C cable. Sometimes
the Type-C cable may limit the bandwidth even if the panel can support
more lanes. To address these scenarios we need to train the link before
modeset. This upfront link training caches the values of max link rate
and max lane count, which get used later during modeset. Upfront link
training does not change any HW state; the link is disabled and PLL
values are reset to their previous values after upfront link training,
so the subsequent modeset is not aware of these changes.
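The clamping that the cached values impose on later modesets can be sketched as follows (a minimal sketch mirroring the intel_dp_max_lane_count() change in this patch; helper names are illustrative):

```c
#include <assert.h>

typedef unsigned char u8;

static u8 min_u8(u8 a, u8 b)
{
	return a < b ? a : b;
}

/* Sketch of the lane-count clamp: the usable width is the minimum of
 * source and sink capabilities, further capped by the cached upfront
 * link training result when one exists (0 means "no upfront data").
 */
static u8 max_lane_count(u8 source_max, u8 sink_max, u8 upfront_max)
{
	u8 lanes = min_u8(source_max, sink_max);

	if (upfront_max)
		lanes = min_u8(lanes, upfront_max);
	return lanes;
}
```

The same capping idea is applied to the sink rate table, which is truncated at the cached upfront link rate.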

This patch is based on prior work done by
R,Durgadoss <durgadoss.r@intel.com>

Changes since v16:
* Use HAS_DDI macro for enabling this feature (Rodrigo Vivi)
* Fix some unnecessary removals/changes due to rebase (Rodrigo Vivi)

Changes since v15:
* Split this patch into two patches - one with functional
changes to enable upfront and other with moving the existing
functions around so that they can be used for upfront (Jani Nikula)
* Cleaned up the commit message

Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_ddi.c              |  21 ++-
 drivers/gpu/drm/i915/intel_dp.c               | 190 +++++++++++++++++++++++++-
 drivers/gpu/drm/i915/intel_dp_link_training.c |   1 -
 drivers/gpu/drm/i915/intel_drv.h              |  14 +-
 4 files changed, 218 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index 4d3a931..1b24d71 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1676,7 +1676,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	pll->config.crtc_mask = 0;
 
 	/* If Link Training fails, send a uevent to generate a hotplug */
-	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
+	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst,
+				  false))
 		drm_kms_helper_hotplug_event(encoder->base.dev);
 	pll->config = tmp_pll_config;
 }
@@ -2464,7 +2465,7 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
 
 bool
 intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
-		     uint8_t max_lane_count, bool link_mst)
+		     uint8_t max_lane_count, bool link_mst, bool is_upfront)
 {
 	struct intel_connector *connector = intel_dp->attached_connector;
 	struct intel_encoder *encoder = connector->encoder;
@@ -2513,6 +2514,7 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
 			pll->funcs.disable(dev_priv, pll);
 			pll->config = tmp_pll_config;
 		}
+
 		if (ret) {
 			DRM_DEBUG_KMS("Link Training successful at link rate: "
 				      "%d lane:%d\n", link_rate, lane_count);
@@ -2521,6 +2523,21 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
 	}
 	intel_dp_stop_link_train(intel_dp);
 
+	if (is_upfront) {
+		DRM_DEBUG_KMS("Upfront link train %s: link_clock:%d lanes:%d\n",
+			      ret ? "Passed" : "Failed",
+			      link_rate, lane_count);
+		/* Disable port followed by PLL for next retry/clean up */
+		intel_ddi_post_disable(encoder, NULL, NULL);
+		pll->funcs.disable(dev_priv, pll);
+		pll->config = tmp_pll_config;
+		if (ret) {
+			/* Save the upfront values */
+			intel_dp->max_lanes_upfront = lane_count;
+			intel_dp->max_link_rate_upfront = link_rate;
+		}
+	}
+
 	if (!lane_count)
 		DRM_ERROR("Link Training Failed\n");
 
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index a319102..9042d28 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -153,12 +153,21 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
-	u8 source_max, sink_max;
+	u8 temp, source_max, sink_max;
 
 	source_max = intel_dig_port->max_lanes;
 	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
 
-	return min(source_max, sink_max);
+	temp = min(source_max, sink_max);
+
+	/*
+	 * Also limit the max lane count w.r.t. the max value
+	 * found by upfront link training.
+	 */
+	if (intel_dp->max_lanes_upfront)
+		return min(temp, intel_dp->max_lanes_upfront);
+	else
+		return temp;
 }
 
 /*
@@ -190,6 +199,42 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
 	return (max_link_clock * max_lanes * 8) / 10;
 }
 
+static int intel_dp_upfront_crtc_disable(struct intel_crtc *crtc,
+					 struct drm_modeset_acquire_ctx *ctx,
+					 bool enable)
+{
+	int ret;
+	struct drm_atomic_state *state;
+	struct intel_crtc_state *crtc_state;
+	struct drm_device *dev = crtc->base.dev;
+	enum pipe pipe = crtc->pipe;
+
+	state = drm_atomic_state_alloc(dev);
+	if (!state)
+		return -ENOMEM;
+
+	state->acquire_ctx = ctx;
+
+	crtc_state = intel_atomic_get_crtc_state(state, crtc);
+	if (IS_ERR(crtc_state)) {
+		ret = PTR_ERR(crtc_state);
+		drm_atomic_state_free(state);
+		return ret;
+	}
+
+	DRM_DEBUG_KMS("%sabling crtc %c %s upfront link train\n",
+			enable ? "En" : "Dis",
+			pipe_name(pipe),
+			enable ? "after" : "before");
+
+	crtc_state->base.active = enable;
+	ret = drm_atomic_commit(state);
+	if (ret)
+		drm_atomic_state_free(state);
+
+	return ret;
+}
+
 static int
 intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
 {
@@ -258,6 +303,17 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 	int source_len, sink_len;
 
 	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+
+	/* Cap sink rates w.r.t upfront values */
+	if (intel_dp->max_link_rate_upfront) {
+		int len = sink_len - 1;
+
+		while (len > 0 && sink_rates[len] >
+		       intel_dp->max_link_rate_upfront)
+			len--;
+		sink_len = len + 1;
+	}
+
 	source_len = intel_dp_source_rates(intel_dp, &source_rates);
 
 	return intersect_rates(source_rates, source_len,
@@ -265,6 +321,92 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 			       common_rates);
 }
 
+static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
+{
+	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+	struct intel_encoder *intel_encoder = &intel_dig_port->base;
+	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_mode_config *config = &dev->mode_config;
+	struct drm_modeset_acquire_ctx ctx;
+	struct intel_crtc *intel_crtc;
+	struct drm_crtc *crtc = NULL;
+	struct intel_shared_dpll *pll;
+	struct intel_shared_dpll_config tmp_pll_config;
+	bool disable_dpll = false;
+	int ret;
+	bool done = false, has_mst = false;
+	uint8_t max_lanes;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+	int common_len;
+	enum intel_display_power_domain power_domain;
+
+	power_domain = intel_display_port_power_domain(intel_encoder);
+	intel_display_power_get(dev_priv, power_domain);
+
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	max_lanes = intel_dp_max_lane_count(intel_dp);
+	if (WARN_ON(common_len <= 0))
+		return true;
+
+	drm_modeset_acquire_init(&ctx, 0);
+retry:
+	ret = drm_modeset_lock(&config->connection_mutex, &ctx);
+	if (ret)
+		goto exit_fail;
+
+	if (intel_encoder->base.crtc) {
+		crtc = intel_encoder->base.crtc;
+
+		ret = drm_modeset_lock(&crtc->mutex, &ctx);
+		if (ret)
+			goto exit_fail;
+
+		ret = drm_modeset_lock(&crtc->primary->mutex, &ctx);
+		if (ret)
+			goto exit_fail;
+
+		intel_crtc = to_intel_crtc(crtc);
+		pll = intel_crtc->config->shared_dpll;
+		disable_dpll = true;
+		has_mst = intel_crtc_has_type(intel_crtc->config,
+					      INTEL_OUTPUT_DP_MST);
+		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
+		if (ret)
+			goto exit_fail;
+	}
+
+	mutex_lock(&dev_priv->dpll_lock);
+	if (disable_dpll) {
+		/* Clear the PLL config state */
+		tmp_pll_config = pll->config;
+		pll->config.crtc_mask = 0;
+	}
+
+	done = intel_dp->upfront_link_train(intel_dp,
+					    common_rates[common_len-1],
+					    max_lanes,
+					    has_mst,
+					    true);
+	if (disable_dpll)
+		pll->config = tmp_pll_config;
+
+	mutex_unlock(&dev_priv->dpll_lock);
+
+	if (crtc)
+		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, true);
+
+exit_fail:
+	if (ret == -EDEADLK) {
+		drm_modeset_backoff(&ctx);
+		goto retry;
+	}
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+	intel_display_power_put(dev_priv, power_domain);
+	return done;
+}
+
 static enum drm_mode_status
 intel_dp_mode_valid(struct drm_connector *connector,
 		    struct drm_display_mode *mode)
@@ -286,6 +428,19 @@ intel_dp_mode_valid(struct drm_connector *connector,
 		target_clock = fixed_mode->clock;
 	}
 
+	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
+		bool do_upfront_link_train;
+		/* Do not do upfront link training if this is a
+		 * compliance test request.
+		 */
+		do_upfront_link_train = !intel_dp->upfront_done &&
+			(intel_dp->compliance_test_type !=
+			 DP_TEST_LINK_TRAINING);
+
+		if (do_upfront_link_train)
+			intel_dp->upfront_done = intel_dp_upfront_link_train(intel_dp);
+	}
+
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
@@ -1436,6 +1591,9 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
 	int rates[DP_MAX_SUPPORTED_RATES] = {};
 	int len;
 
+	if (intel_dp->max_link_rate_upfront)
+		return intel_dp->max_link_rate_upfront;
+
 	len = intel_dp_common_rates(intel_dp, rates);
 	if (WARN_ON(len <= 0))
 		return 162000;
@@ -1567,6 +1725,21 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	for (; bpp >= 6*3; bpp -= 2*3) {
 		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
 						   bpp);
+
+		if (!is_edp(intel_dp) && intel_dp->upfront_done) {
+			clock = max_clock;
+			lane_count = intel_dp->max_lanes_upfront;
+			link_clock = intel_dp->max_link_rate_upfront;
+			link_avail = intel_dp_max_data_rate(link_clock,
+							    lane_count);
+			mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+							   bpp);
+			if (mode_rate <= link_avail)
+				goto found;
+			else
+				continue;
+		}
+
 		clock = max_clock;
 		lane_count = max_lane_count;
 		link_clock = common_rates[clock];
@@ -1596,7 +1769,6 @@ found:
 	}
 
 	pipe_config->lane_count = lane_count;
-
 	pipe_config->pipe_bpp = bpp;
 	pipe_config->port_clock = common_rates[clock];
 
@@ -4374,8 +4546,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
 
 out:
 	if ((status != connector_status_connected) &&
-	    (intel_dp->is_mst == false))
+	    (intel_dp->is_mst == false)) {
 		intel_dp_unset_edid(intel_dp);
+		intel_dp->upfront_done = false;
+		intel_dp->max_lanes_upfront = 0;
+		intel_dp->max_link_rate_upfront = 0;
+	}
 
 	intel_display_power_put(to_i915(dev), power_domain);
 	return;
@@ -5619,6 +5795,12 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 	if (type == DRM_MODE_CONNECTOR_eDP)
 		intel_encoder->type = INTEL_OUTPUT_EDP;
 
+	/* Initialize upfront link training vfunc for DP */
+	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
+		if (HAS_DDI(dev_priv))
+			intel_dp->upfront_link_train = intel_ddi_link_train;
+	}
+
 	/* eDP only on port B and/or C on vlv/chv */
 	if (WARN_ON((IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) &&
 		    is_edp(intel_dp) && port != PORT_B && port != PORT_C))
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index f1e08f0..b6f380b 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -304,7 +304,6 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 	intel_dp_set_idle_link_train(intel_dp);
 
 	return intel_dp->channel_eq_status;
-
 }
 
 void intel_dp_stop_link_train(struct intel_dp *intel_dp)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 69c8051..fc2f1bc 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -882,6 +882,12 @@ struct intel_dp {
 	enum hdmi_force_audio force_audio;
 	bool limited_color_range;
 	bool color_range_auto;
+
+	/* Upfront link train parameters */
+	int max_link_rate_upfront;
+	uint8_t max_lanes_upfront;
+	bool upfront_done;
+
 	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
 	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
 	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
@@ -939,6 +945,11 @@ struct intel_dp {
 	/* This is called before link training is started */
 	void (*prepare_link_retrain)(struct intel_dp *intel_dp);
 
+	/* For Upfront link training */
+	bool (*upfront_link_train)(struct intel_dp *intel_dp, int clock,
+				   uint8_t lane_count, bool link_mst,
+				   bool is_upfront);
+
 	/* Displayport compliance testing */
 	unsigned long compliance_test_type;
 	unsigned long compliance_test_data;
@@ -1161,7 +1172,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
 void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
 uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
 bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
-			  uint8_t max_lane_count, bool link_mst);
+			  uint8_t max_lane_count, bool link_mst,
+			  bool is_upfront);
 struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
 						  int clock);
 unsigned int intel_fb_align_height(struct drm_device *dev,
-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
                   ` (3 preceding siblings ...)
  2016-09-14  1:08 ` [PATCH v17 4/5] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
@ 2016-09-14  1:08 ` Manasi Navare
  2016-09-15 17:48   ` Pandiyan, Dhinakaran
  2016-09-14  5:38 ` ✓ Fi.CI.BAT: success for Remaining patches for upfront link training on DDI platforms Patchwork
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
  6 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-14  1:08 UTC (permalink / raw)
  To: intel-gfx

From: Jim Bride <jim.bride@linux.intel.com>

Add upfront link training to intel_dp_mst_mode_valid() so that we know
the topology's constraints before we validate candidate modes against
them.

v3:
* Reset the upfront values but don't unset the EDID for MST. (Manasi)
v2:
* Rebased on new revision of link training patch. (Manasi)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c     | 15 ++++----
 drivers/gpu/drm/i915/intel_dp_mst.c | 74 +++++++++++++++++++++++++++----------
 drivers/gpu/drm/i915/intel_drv.h    |  3 ++
 3 files changed, 64 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 9042d28..635830e 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -131,7 +131,7 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
 				      enum pipe pipe);
 static void intel_dp_unset_edid(struct intel_dp *intel_dp);
 
-static int
+int
 intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 {
 	int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
@@ -150,7 +150,7 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 	return max_link_bw;
 }
 
-static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
+u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	u8 temp, source_max, sink_max;
@@ -296,8 +296,7 @@ static int intersect_rates(const int *source_rates, int source_len,
 	return k;
 }
 
-static int intel_dp_common_rates(struct intel_dp *intel_dp,
-				 int *common_rates)
+int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates)
 {
 	const int *source_rates, *sink_rates;
 	int source_len, sink_len;
@@ -321,7 +320,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 			       common_rates);
 }
 
-static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
+bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	struct intel_encoder *intel_encoder = &intel_dig_port->base;
@@ -4545,12 +4544,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
 	}
 
 out:
-	if ((status != connector_status_connected) &&
-	    (intel_dp->is_mst == false)) {
-		intel_dp_unset_edid(intel_dp);
+	if (status != connector_status_connected) {
 		intel_dp->upfront_done = false;
 		intel_dp->max_lanes_upfront = 0;
 		intel_dp->max_link_rate_upfront = 0;
+		if (intel_dp->is_mst == false)
+			intel_dp_unset_edid(intel_dp);
 	}
 
 	intel_display_power_put(to_i915(dev), power_domain);
diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
index 54a9d76..98d45a4 100644
--- a/drivers/gpu/drm/i915/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/intel_dp_mst.c
@@ -41,21 +41,30 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
 	int bpp;
 	int lane_count, slots;
 	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
-	int mst_pbn;
+	int mst_pbn, common_len;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
 
 	pipe_config->dp_encoder_is_mst = true;
 	pipe_config->has_pch_encoder = false;
-	bpp = 24;
+
 	/*
-	 * for MST we always configure max link bw - the spec doesn't
-	 * seem to suggest we should do otherwise.
+	 * For MST we always configure for the maximum trainable link bw -
+	 * the spec doesn't seem to suggest we should do otherwise.  The
+	 * calls to intel_dp_max_lane_count() and intel_dp_common_rates()
+	 * both take successful upfront link training into account, and
+	 * return the DisplayPort max supported values in the event that
+	 * upfront link training was not done.
 	 */
-	lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
+	lane_count = intel_dp_max_lane_count(intel_dp);
 
 	pipe_config->lane_count = lane_count;
 
-	pipe_config->pipe_bpp = 24;
-	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
+	pipe_config->pipe_bpp = bpp = 24;
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	pipe_config->port_clock = common_rates[common_len - 1];
+
+	DRM_DEBUG_KMS("DP MST link configured for %d lanes @ %d.\n",
+		      pipe_config->lane_count, pipe_config->port_clock);
 
 	state = pipe_config->base.state;
 
@@ -137,6 +146,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 	enum port port = intel_dig_port->port;
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
+	struct intel_shared_dpll *pll = pipe_config->shared_dpll;
+	struct intel_shared_dpll_config tmp_pll_config;
 	int ret;
 	uint32_t temp;
 	int slots;
@@ -150,21 +161,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 	DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
 
 	if (intel_dp->active_mst_links == 0) {
-		intel_ddi_clk_select(&intel_dig_port->base,
-				     pipe_config->shared_dpll);
-
-		intel_prepare_dp_ddi_buffers(&intel_dig_port->base);
-		intel_dp_set_link_params(intel_dp,
-					 pipe_config->port_clock,
-					 pipe_config->lane_count,
-					 true);
-
-		intel_ddi_init_dp_buf_reg(&intel_dig_port->base);
 
-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+		/* Disable the PLL: link training acquires a PLL
+		 * based on the link rate it settles on
+		 */
+		tmp_pll_config = pll->config;
+		pll->funcs.disable(dev_priv, pll);
+		pll->config.crtc_mask = 0;
+
+		/* If Link Training fails, send a uevent to generate
+		 * a hotplug
+		 */
+		if (!(intel_ddi_link_train(intel_dp, pipe_config->port_clock,
+					   pipe_config->lane_count, true,
+					   false)))
+			drm_kms_helper_hotplug_event(encoder->base.dev);
+		pll->config = tmp_pll_config;
 
-		intel_dp_start_link_train(intel_dp);
-		intel_dp_stop_link_train(intel_dp);
 	}
 
 	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
@@ -336,6 +349,27 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
 			struct drm_display_mode *mode)
 {
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_dp *intel_dp = intel_connector->mst_port;
+
+	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
+		bool do_upfront_link_train;
+
+		do_upfront_link_train = intel_dp->compliance_test_type !=
+			DP_TEST_LINK_TRAINING;
+		if (do_upfront_link_train) {
+			intel_dp->upfront_done =
+				intel_dp_upfront_link_train(intel_dp);
+			if (intel_dp->upfront_done) {
+				DRM_DEBUG_KMS("MST upfront trained at "
+					      "%d lanes @ %d.",
+					      intel_dp->max_lanes_upfront,
+					      intel_dp->max_link_rate_upfront);
+			} else {
+				DRM_DEBUG_KMS("MST upfront link training failed.");
+			}
+		}
+	}
 
 	/* TODO - validate mode against available PBN for link */
 	if (mode->clock < 10000)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index fc2f1bc..b4bc002 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1416,6 +1416,7 @@ void intel_edp_panel_off(struct intel_dp *intel_dp);
 void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
 void intel_dp_mst_suspend(struct drm_device *dev);
 void intel_dp_mst_resume(struct drm_device *dev);
+u8 intel_dp_max_lane_count(struct intel_dp *intel_dp);
 int intel_dp_max_link_rate(struct intel_dp *intel_dp);
 int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
 			     int link_rate);
@@ -1446,6 +1447,8 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
 void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
 			   uint8_t *link_bw, uint8_t *rate_select);
 bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
+int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates);
+bool intel_dp_upfront_link_train(struct intel_dp *intel_dp);
 bool
 intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
 
-- 
1.9.1


* ✓ Fi.CI.BAT: success for Remaining patches for upfront link training on DDI platforms
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
                   ` (4 preceding siblings ...)
  2016-09-14  1:08 ` [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
@ 2016-09-14  5:38 ` Patchwork
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
  6 siblings, 0 replies; 56+ messages in thread
From: Patchwork @ 2016-09-14  5:38 UTC (permalink / raw)
  To: Navare, Manasi D; +Cc: intel-gfx

== Series Details ==

Series: Remaining patches for upfront link training on DDI platforms
URL   : https://patchwork.freedesktop.org/series/12425/
State : success

== Summary ==

Series 12425v1 Remaining patches for upfront link training on DDI platforms
https://patchwork.freedesktop.org/api/1.0/series/12425/revisions/1/mbox/

Test kms_pipe_crc_basic:
        Subgroup hang-read-crc-pipe-c:
                skip       -> PASS       (fi-hsw-4770r)

fi-bsw-n3050     total:244  pass:202  dwarn:0   dfail:0   fail:0   skip:42 
fi-hsw-4770k     total:244  pass:226  dwarn:0   dfail:0   fail:0   skip:18 
fi-hsw-4770r     total:244  pass:222  dwarn:0   dfail:0   fail:0   skip:22 
fi-ilk-650       total:244  pass:183  dwarn:0   dfail:0   fail:1   skip:60 
fi-ivb-3520m     total:244  pass:219  dwarn:0   dfail:0   fail:0   skip:25 
fi-ivb-3770      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6260u     total:244  pass:230  dwarn:0   dfail:0   fail:0   skip:14 
fi-skl-6700hq    total:244  pass:221  dwarn:0   dfail:0   fail:1   skip:22 
fi-skl-6700k     total:244  pass:219  dwarn:1   dfail:0   fail:0   skip:24 
fi-snb-2520m     total:244  pass:208  dwarn:0   dfail:0   fail:0   skip:36 
fi-snb-2600      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 

Results at /archive/results/CI_IGT_test/Patchwork_2526/

208290026552464713d3897ab5d649f4445d5513 drm-intel-nightly: 2016y-09m-13d-14h-45m-32s UTC integration manifest
7ea9b7d drm/i915/dp/mst: Add support for upfront link training for DP MST
4b260e3 drm/i915/dp: Enable Upfront link training on DDI platforms
46cab65 drm/i915: Change the placement of some static functions in intel_dp.c
a02b212 drm/i915: Remove the link rate and lane count loop in compute config
f9704d9 drm/i915: Fallback to lower link rate and lane count during link training


* Re: [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-14  1:08 ` [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
@ 2016-09-14  8:15   ` Mika Kahola
  2016-09-15 19:56     ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Mika Kahola @ 2016-09-14  8:15 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> According to the DisplayPort Spec, in case of Clock Recovery failure
> the link training sequence should fall back to the lower link rate
> followed by lower lane count until CR succeeds.
> On CR success, the sequence proceeds with Channel EQ.
> In case of Channel EQ failures, it should fallback to
> lower link rate and lane count and start the CR phase again.
> 
> v5:
> * Reset the link rate index to the max link rate index
> before lowering the lane count (Jani Nikula)
> * Use the paradigm for loop in intel_dp_link_rate_index
> v4:
> * Fixed the link rate fallback loop (Manasi Navare)
> v3:
> * Fixed some rebase issues (Mika Kahola)
> v2:
> * Add a helper function to return index of requested link rate
> into common_rates array
> * Changed the link rate fallback loop to make use
> of common_rates array (Mika Kahola)
> * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
> 
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_ddi.c              | 112
> +++++++++++++++++++++++---
>  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
>  drivers/gpu/drm/i915/intel_dp_link_training.c |  12 ++-
>  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
>  4 files changed, 131 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_ddi.c
> b/drivers/gpu/drm/i915/intel_ddi.c
> index 8065a5f..4d3a931 100644
> --- a/drivers/gpu/drm/i915/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/intel_ddi.c
> @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct
> intel_encoder *encoder,
>  	}
>  }
>  
> -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
>  				    int link_rate, uint32_t
> lane_count,
> -				    struct intel_shared_dpll *pll,
> -				    bool link_mst)
> +				    struct intel_shared_dpll *pll)
>  {
>  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>  	struct drm_i915_private *dev_priv = to_i915(encoder-
> >base.dev);
>  	enum port port = intel_ddi_get_encoder_port(encoder);
>  
>  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
> -				 link_mst);
> -	if (encoder->type == INTEL_OUTPUT_EDP)
> -		intel_edp_panel_on(intel_dp);
> +				 false);
> +
> +	intel_edp_panel_on(intel_dp);
>  
>  	intel_ddi_clk_select(encoder, pll);
>  	intel_prepare_dp_ddi_buffers(encoder);
> @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct
> intel_encoder *encoder,
>  		intel_dp_stop_link_train(intel_dp);
>  }
>  
> +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> +				    int link_rate, uint32_t
> lane_count,
> +				    struct intel_shared_dpll *pll,
> +				    bool link_mst)
> +{
> +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> +	struct drm_i915_private *dev_priv = to_i915(encoder-
> >base.dev);
> +	struct intel_shared_dpll_config tmp_pll_config;
> +
> +	/* Disable the PLL and obtain the PLL for Link Training
> +	 * that starts with highest link rate and lane count.
> +	 */
> +	tmp_pll_config = pll->config;
> +	pll->funcs.disable(dev_priv, pll);
> +	pll->config.crtc_mask = 0;
> +
> +	/* If Link Training fails, send a uevent to generate a
> hotplug */
> +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count,
> link_mst))
> +		drm_kms_helper_hotplug_event(encoder->base.dev);
> +	pll->config = tmp_pll_config;
> +}
> +
>  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
>  				      bool has_hdmi_sink,
>  				      struct drm_display_mode
> *adjusted_mode,
> @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct
> intel_encoder *intel_encoder,
>  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
>  	int type = intel_encoder->type;
>  
> -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
> +	if (type == INTEL_OUTPUT_EDP)
> +		intel_ddi_pre_enable_edp(intel_encoder,
> +					crtc->config->port_clock,
> +					crtc->config->lane_count,
> +					crtc->config->shared_dpll);
> +
> +	if (type == INTEL_OUTPUT_DP)
>  		intel_ddi_pre_enable_dp(intel_encoder,
>  					crtc->config->port_clock,
>  					crtc->config->lane_count,
>  					crtc->config->shared_dpll,
>  					intel_crtc_has_type(crtc-
> >config,
>  							    INTEL_OU
> TPUT_DP_MST));
> -	}
> -	if (type == INTEL_OUTPUT_HDMI) {
> +
> +	if (type == INTEL_OUTPUT_HDMI)
>  		intel_ddi_pre_enable_hdmi(intel_encoder,
>  					  crtc->config-
> >has_hdmi_sink,
>  					  &crtc->config-
> >base.adjusted_mode,
>  					  crtc->config-
> >shared_dpll);
> -	}
> +
>  }
>  
>  static void intel_ddi_post_disable(struct intel_encoder
> *intel_encoder,
> @@ -2435,6 +2462,71 @@ intel_ddi_get_link_dpll(struct intel_dp
> *intel_dp, int clock)
>  	return pll;
>  }
>  
> +bool
> +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> +		     uint8_t max_lane_count, bool link_mst)
> +{
> +	struct intel_connector *connector = intel_dp-
> >attached_connector;
> +	struct intel_encoder *encoder = connector->encoder;
> +	struct drm_i915_private *dev_priv = to_i915(encoder-
> >base.dev);
> +	struct intel_shared_dpll *pll;
> +	struct intel_shared_dpll_config tmp_pll_config;
> +	int link_rate, max_link_rate_index, link_rate_index;
> +	uint8_t lane_count;
> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> +	bool ret = false;
> +
> +	max_link_rate_index = intel_dp_link_rate_index(intel_dp,
> common_rates,
> +						   max_link_rate);
> +	if (max_link_rate_index < 0) {
> +		DRM_ERROR("Invalid Link Rate\n");
> +		return false;
> +	}
> +	for (lane_count = max_lane_count; lane_count > 0; lane_count
> >>= 1) {
> +		for (link_rate_index = max_link_rate_index;
> +		     link_rate_index >= 0; link_rate_index--) {
> +			link_rate = common_rates[link_rate_index];
> +			pll = intel_ddi_get_link_dpll(intel_dp,
> link_rate);
> +			if (pll == NULL) {
> +				DRM_ERROR("Could not find DPLL for
> link "
> +					  "training.\n");
checkpatch.pl gives a warning:

WARNING: quoted string split across lines
#233: FILE: drivers/gpu/drm/i915/intel_ddi.c:2492:
+                               DRM_ERROR("Could not find DPLL for link "
+                                         "training.\n");

I think we could put this error message on a single line. In that case
the tool warns about exceeding the 80-character limit instead, but we
break that rule here and there in our driver anyway.

> +				return false;
> +			}
> +			tmp_pll_config = pll->config;
> +			pll->funcs.enable(dev_priv, pll);
> +
> +			intel_dp_set_link_params(intel_dp,
> link_rate,
> +						 lane_count,
> link_mst);
> +
> +			intel_ddi_clk_select(encoder, pll);
> +			intel_prepare_dp_ddi_buffers(encoder);
> +			intel_ddi_init_dp_buf_reg(encoder);
> +			intel_dp_sink_dpms(intel_dp,
> DRM_MODE_DPMS_ON);
> +			ret = intel_dp_start_link_train(intel_dp);
> +			if (ret)
> +				break;
> +
> +			/* Disable port followed by PLL for next
> +			 *retry/clean up
> +			 */
> +			intel_ddi_post_disable(encoder, NULL, NULL);
> +			pll->funcs.disable(dev_priv, pll);
> +			pll->config = tmp_pll_config;
> +		}
> +		if (ret) {
> +			DRM_DEBUG_KMS("Link Training successful at
> link rate: "
> +				      "%d lane:%d\n", link_rate,
> lane_count);
Same thing here. Maybe

DRM_DEBUG_KMS("Link Training successful at link rate: %d lane:%d\n",
              link_rate, lane_count);

> 
> +		}
> +	}
> +	intel_dp_stop_link_train(intel_dp);
> +
> +	if (!lane_count)
> +		DRM_ERROR("Link Training Failed\n");
> +
> +	return ret;
> +}
> +
>  void intel_ddi_init(struct drm_device *dev, enum port port)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(dev);
> diff --git a/drivers/gpu/drm/i915/intel_dp.c
> b/drivers/gpu/drm/i915/intel_dp.c
> index 75ac62f..bb9df1e 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -1443,6 +1443,21 @@ intel_dp_max_link_rate(struct intel_dp
> *intel_dp)
>  	return rates[len - 1];
>  }
>  
> +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int
> *common_rates,
> +			     int link_rate)
> +{
> +	int common_len;
> +	int index;
> +
> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> +	for (index = 0; index < common_len; index++) {
> +		if (link_rate == common_rates[common_len - index -
> 1])
> +			return common_len - index - 1;
> +	}
> +
> +	return -1;
> +}
> +
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
>  {
>  	return rate_to_index(rate, intel_dp->sink_rates);
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c
> b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index c438b02..f1e08f0 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -313,9 +313,15 @@ void intel_dp_stop_link_train(struct intel_dp
> *intel_dp)
>  				DP_TRAINING_PATTERN_DISABLE);
>  }
>  
> -void
> +bool
>  intel_dp_start_link_train(struct intel_dp *intel_dp)
>  {
> -	intel_dp_link_training_clock_recovery(intel_dp);
> -	intel_dp_link_training_channel_equalization(intel_dp);
> +	bool ret;
> +
> +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
> +		ret =
> intel_dp_link_training_channel_equalization(intel_dp);
> +		if (ret)
> +			return true;
> +	}
> +	return false;
>  }
> diff --git a/drivers/gpu/drm/i915/intel_drv.h
> b/drivers/gpu/drm/i915/intel_drv.h
> index abe7a4d..69c8051 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1160,6 +1160,8 @@ void intel_ddi_clock_get(struct intel_encoder
> *encoder,
>  			 struct intel_crtc_state *pipe_config);
>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool
> state);
>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
> +bool intel_ddi_link_train(struct intel_dp *intel_dp, int
> max_link_rate,
> +			  uint8_t max_lane_count, bool link_mst);
>  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp
> *intel_dp,
>  						  int clock);
>  unsigned int intel_fb_align_height(struct drm_device *dev,
> @@ -1381,7 +1383,7 @@ bool intel_dp_init_connector(struct
> intel_digital_port *intel_dig_port,
>  void intel_dp_set_link_params(struct intel_dp *intel_dp,
>  			      int link_rate, uint8_t lane_count,
>  			      bool link_mst);
> -void intel_dp_start_link_train(struct intel_dp *intel_dp);
> +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
>  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
>  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
>  void intel_dp_encoder_reset(struct drm_encoder *encoder);
> @@ -1403,6 +1405,8 @@ void intel_dp_add_properties(struct intel_dp
> *intel_dp, struct drm_connector *co
>  void intel_dp_mst_suspend(struct drm_device *dev);
>  void intel_dp_mst_resume(struct drm_device *dev);
>  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int
> *common_rates,
> +			     int link_rate);
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
>  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
>  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
-- 
Mika Kahola - Intel OTC


* Re: [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c
  2016-09-14  1:08 ` [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
@ 2016-09-15  7:41   ` Mika Kahola
  2016-09-15 19:08     ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Mika Kahola @ 2016-09-15  7:41 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> These static helper functions are required to be used within upfront
> link training related functions so they need to be placed at the top
> of the file. It also changes macro dev to dev_priv.
> 
We could split this patch into two parts: one that moves the helper
functions around, and a separate cleanup patch that changes dev in
favor of dev_priv.

> v2:
> * Dont move around functions declared in intel_drv.h (Rodrigo Vivi)
> 
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 158 ++++++++++++++++++++--------
> ------------
>  1 file changed, 79 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c
> b/drivers/gpu/drm/i915/intel_dp.c
> index 07f9a49..a319102 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -190,6 +190,81 @@ intel_dp_max_data_rate(int max_link_clock, int
> max_lanes)
>  	return (max_link_clock * max_lanes * 8) / 10;
>  }
>  
> +static int
> +intel_dp_sink_rates(struct intel_dp *intel_dp, const int
> **sink_rates)
> +{
> +	if (intel_dp->num_sink_rates) {
> +		*sink_rates = intel_dp->sink_rates;
> +		return intel_dp->num_sink_rates;
> +	}
> +
> +	*sink_rates = default_rates;
> +
> +	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> +}
> +
> +static int
> +intel_dp_source_rates(struct intel_dp *intel_dp, const int
> **source_rates)
> +{
> +	struct intel_digital_port *dig_port =
> dp_to_dig_port(intel_dp);
> +	struct drm_i915_private *dev_priv = to_i915(dig_port-
> >base.base.dev);
> +	int size;
> +
> +	if (IS_BROXTON(dev_priv)) {
> +		*source_rates = bxt_rates;
> +		size = ARRAY_SIZE(bxt_rates);
> +	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
> +		*source_rates = skl_rates;
> +		size = ARRAY_SIZE(skl_rates);
> +	} else {
> +		*source_rates = default_rates;
> +		size = ARRAY_SIZE(default_rates);
> +	}
> +
> +	/* This depends on the fact that 5.4 is last value in the
> array */
> +	if (!intel_dp_source_supports_hbr2(intel_dp))
> +		size--;
> +
> +	return size;
> +}
> +
> +static int intersect_rates(const int *source_rates, int source_len,
> +			   const int *sink_rates, int sink_len,
> +			   int *common_rates)
> +{
> +	int i = 0, j = 0, k = 0;
> +
> +	while (i < source_len && j < sink_len) {
> +		if (source_rates[i] == sink_rates[j]) {
> +			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
> +				return k;
> +			common_rates[k] = source_rates[i];
> +			++k;
> +			++i;
> +			++j;
> +		} else if (source_rates[i] < sink_rates[j]) {
> +			++i;
> +		} else {
> +			++j;
> +		}
> +	}
> +	return k;
> +}
> +
> +static int intel_dp_common_rates(struct intel_dp *intel_dp,
> +				 int *common_rates)
> +{
> +	const int *source_rates, *sink_rates;
> +	int source_len, sink_len;
> +
> +	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> +	source_len = intel_dp_source_rates(intel_dp, &source_rates);
> +
> +	return intersect_rates(source_rates, source_len,
> +			       sink_rates, sink_len,
> +			       common_rates);
> +}
> +
>  static enum drm_mode_status
>  intel_dp_mode_valid(struct drm_connector *connector,
>  		    struct drm_display_mode *mode)
> @@ -1256,60 +1331,22 @@ intel_dp_aux_init(struct intel_dp *intel_dp,
> struct intel_connector *connector)
>  	intel_dp->aux.transfer = intel_dp_aux_transfer;
>  }
>  
> -static int
> -intel_dp_sink_rates(struct intel_dp *intel_dp, const int
> **sink_rates)
> -{
> -	if (intel_dp->num_sink_rates) {
> -		*sink_rates = intel_dp->sink_rates;
> -		return intel_dp->num_sink_rates;
> -	}
> -
> -	*sink_rates = default_rates;
> -
> -	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> -}
> -
>  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
>  
>  	/* WaDisableHBR2:skl */
> -	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
> +	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
>  		return false;
>  
> -	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || IS_BROADWELL(dev) ||
> -	    (INTEL_INFO(dev)->gen >= 9))
> +	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
> +	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
>  		return true;
>  	else
>  		return false;
>  }
>  
> -static int
> -intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
> -{
> -	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> -	int size;
> -
> -	if (IS_BROXTON(dev)) {
> -		*source_rates = bxt_rates;
> -		size = ARRAY_SIZE(bxt_rates);
> -	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> -		*source_rates = skl_rates;
> -		size = ARRAY_SIZE(skl_rates);
> -	} else {
> -		*source_rates = default_rates;
> -		size = ARRAY_SIZE(default_rates);
> -	}
> -
> -	/* This depends on the fact that 5.4 is last value in the array */
> -	if (!intel_dp_source_supports_hbr2(intel_dp))
> -		size--;
> -
> -	return size;
> -}
> -
>  static void
>  intel_dp_set_clock(struct intel_encoder *encoder,
>  		   struct intel_crtc_state *pipe_config)
> @@ -1343,43 +1380,6 @@ intel_dp_set_clock(struct intel_encoder *encoder,
>  	}
>  }
>  
> -static int intersect_rates(const int *source_rates, int source_len,
> -			   const int *sink_rates, int sink_len,
> -			   int *common_rates)
> -{
> -	int i = 0, j = 0, k = 0;
> -
> -	while (i < source_len && j < sink_len) {
> -		if (source_rates[i] == sink_rates[j]) {
> -			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
> -				return k;
> -			common_rates[k] = source_rates[i];
> -			++k;
> -			++i;
> -			++j;
> -		} else if (source_rates[i] < sink_rates[j]) {
> -			++i;
> -		} else {
> -			++j;
> -		}
> -	}
> -	return k;
> -}
> -
> -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> -				 int *common_rates)
> -{
> -	const int *source_rates, *sink_rates;
> -	int source_len, sink_len;
> -
> -	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> -	source_len = intel_dp_source_rates(intel_dp, &source_rates);
> -
> -	return intersect_rates(source_rates, source_len,
> -			       sink_rates, sink_len,
> -			       common_rates);
> -}
> -
>  static void snprintf_int_array(char *str, size_t len,
>  			       const int *array, int nelem)
>  {
-- 
Mika Kahola - Intel OTC

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread
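The `intersect_rates()` helper moved in the patch above is a two-pointer merge over two ascending-sorted rate arrays. A minimal standalone sketch of the same technique (the `MAX_RATES` bound here is an illustrative stand-in for the driver's `DP_MAX_SUPPORTED_RATES`, and `WARN_ON` is dropped since this runs outside the kernel):

```c
#define MAX_RATES 8	/* illustrative stand-in for DP_MAX_SUPPORTED_RATES */

/* Two-pointer intersection of two ascending-sorted rate arrays.
 * Writes the common rates to common_rates and returns their count,
 * capped at MAX_RATES. Runs in O(source_len + sink_len). */
int intersect_rates(const int *source_rates, int source_len,
		    const int *sink_rates, int sink_len,
		    int *common_rates)
{
	int i = 0, j = 0, k = 0;

	while (i < source_len && j < sink_len) {
		if (source_rates[i] == sink_rates[j]) {
			if (k >= MAX_RATES)
				return k;
			common_rates[k++] = source_rates[i];
			i++;
			j++;
		} else if (source_rates[i] < sink_rates[j]) {
			/* source rate too low to match; advance source */
			i++;
		} else {
			/* sink rate too low to match; advance sink */
			j++;
		}
	}
	return k;
}
```

The O(n + m) merge only works because both tables (`bxt_rates`, `skl_rates`, `default_rates`, and the DPCD sink rates) are kept sorted ascending; that same ordering is what lets callers take the last common entry as the maximum link rate.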

* Re: [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST
  2016-09-14  1:08 ` [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
@ 2016-09-15 17:48   ` Pandiyan, Dhinakaran
  2016-09-15 19:25     ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Pandiyan, Dhinakaran @ 2016-09-15 17:48 UTC (permalink / raw)
  To: Navare, Manasi D; +Cc: intel-gfx

On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> From: Jim Bride <jim.bride@linux.intel.com>
> 
> Add upfront link training to intel_dp_mst_mode_valid() so that we know
> topology constraints before we validate the legality of modes to be
> checked.
> 

The patch seems to do a lot more than what is described here. I guess
it would be better to split this into multiple patches, or at least
provide an adequate description here.

> v3:
> * Reset the upfront values but dont unset the EDID for MST. (Manasi)
> v2:
> * Rebased on new revision of link training patch. (Manasi)
> 
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c     | 15 ++++----
>  drivers/gpu/drm/i915/intel_dp_mst.c | 74 +++++++++++++++++++++++++++----------
>  drivers/gpu/drm/i915/intel_drv.h    |  3 ++
>  3 files changed, 64 insertions(+), 28 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 9042d28..635830e 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -131,7 +131,7 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
>  				      enum pipe pipe);
>  static void intel_dp_unset_edid(struct intel_dp *intel_dp);
>  
> -static int
> +int
>  intel_dp_max_link_bw(struct intel_dp  *intel_dp)
>  {
>  	int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
> @@ -150,7 +150,7 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
>  	return max_link_bw;
>  }
>  
> -static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
>  	u8 temp, source_max, sink_max;
> @@ -296,8 +296,7 @@ static int intersect_rates(const int *source_rates, int source_len,
>  	return k;
>  }
>  
> -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> -				 int *common_rates)
> +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates)
>  {
>  	const int *source_rates, *sink_rates;
>  	int source_len, sink_len;
> @@ -321,7 +320,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>  			       common_rates);
>  }
>  
> -static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
>  	struct intel_encoder *intel_encoder = &intel_dig_port->base;
> @@ -4545,12 +4544,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
>  	}
>  
>  out:
> -	if ((status != connector_status_connected) &&
> -	    (intel_dp->is_mst == false)) {
> -		intel_dp_unset_edid(intel_dp);
> +	if (status != connector_status_connected) {
>  		intel_dp->upfront_done = false;
>  		intel_dp->max_lanes_upfront = 0;
>  		intel_dp->max_link_rate_upfront = 0;
> +		if (intel_dp->is_mst == false)
> +			intel_dp_unset_edid(intel_dp);
>  	}
>  
>  	intel_display_power_put(to_i915(dev), power_domain);
> diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
> index 54a9d76..98d45a4 100644
> --- a/drivers/gpu/drm/i915/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/intel_dp_mst.c
> @@ -41,21 +41,30 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
>  	int bpp;
>  	int lane_count, slots;
>  	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> -	int mst_pbn;
> +	int mst_pbn, common_len;
> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
>  
>  	pipe_config->dp_encoder_is_mst = true;
>  	pipe_config->has_pch_encoder = false;
> -	bpp = 24;
> +
>  	/*
> -	 * for MST we always configure max link bw - the spec doesn't
> -	 * seem to suggest we should do otherwise.
> +	 * For MST we always configure for the maximum trainable link bw -
> +	 * the spec doesn't seem to suggest we should do otherwise.  The
> +	 * calls to intel_dp_max_lane_count() and intel_dp_common_rates()
> +	 * both take successful upfront link training into account, and
> +	 * return the DisplayPort max supported values in the event that
> +	 * upfront link training was not done.
>  	 */
> -	lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
> +	lane_count = intel_dp_max_lane_count(intel_dp);
>  
>  	pipe_config->lane_count = lane_count;
>  
> -	pipe_config->pipe_bpp = 24;
> -	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
> +	pipe_config->pipe_bpp = bpp = 24;
> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> +	pipe_config->port_clock = common_rates[common_len - 1];
> +
> +	DRM_DEBUG_KMS("DP MST link configured for %d lanes @ %d.\n",
> +		      pipe_config->lane_count, pipe_config->port_clock);
>  
>  	state = pipe_config->base.state;
>  
> @@ -137,6 +146,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
>  	enum port port = intel_dig_port->port;
>  	struct intel_connector *connector =
>  		to_intel_connector(conn_state->connector);
> +	struct intel_shared_dpll *pll = pipe_config->shared_dpll;
> +	struct intel_shared_dpll_config tmp_pll_config;
>  	int ret;
>  	uint32_t temp;
>  	int slots;
> @@ -150,21 +161,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
>  	DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
>  
>  	if (intel_dp->active_mst_links == 0) {
> -		intel_ddi_clk_select(&intel_dig_port->base,
> -				     pipe_config->shared_dpll);
> -
> -		intel_prepare_dp_ddi_buffers(&intel_dig_port->base);
> -		intel_dp_set_link_params(intel_dp,
> -					 pipe_config->port_clock,
> -					 pipe_config->lane_count,
> -					 true);
> -
> -		intel_ddi_init_dp_buf_reg(&intel_dig_port->base);
>  
> -		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> +		/* Disable the PLL since we need to acquire the PLL
> +		 * based on the link rate in the link training sequence
> +		 */
> +		tmp_pll_config = pll->config;
> +		pll->funcs.disable(dev_priv, pll);
> +		pll->config.crtc_mask = 0;
> +
> +		/* If Link Training fails, send a uevent to generate a
> +		 * hotplug
> +		 */
> +		if (!(intel_ddi_link_train(intel_dp, pipe_config->port_clock,
> +					   pipe_config->lane_count, true,
> +					   false)))
> +			drm_kms_helper_hotplug_event(encoder->base.dev);
> +		pll->config = tmp_pll_config;
>  
> -		intel_dp_start_link_train(intel_dp);
> -		intel_dp_stop_link_train(intel_dp);
>  	}
>  
>  	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
> @@ -336,6 +349,27 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
>  			struct drm_display_mode *mode)
>  {
>  	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
> +	struct intel_connector *intel_connector = to_intel_connector(connector);
> +	struct intel_dp *intel_dp = intel_connector->mst_port;
> +
> +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
> +		bool do_upfront_link_train;
> +
> +		do_upfront_link_train = intel_dp->compliance_test_type !=
> +			DP_TEST_LINK_TRAINING;
> +		if (do_upfront_link_train) {
> +			intel_dp->upfront_done =
> +				intel_dp_upfront_link_train(intel_dp);
> +			if (intel_dp->upfront_done) {
> +				DRM_DEBUG_KMS("MST upfront trained at "
> +					      "%d lanes @ %d.",
> +					      intel_dp->max_lanes_upfront,
> +					      intel_dp->max_link_rate_upfront);
> +			} else
> +				DRM_DEBUG_KMS("MST upfront link training "
> +					      "failed.");
> +		}
> +	}
>  
>  	/* TODO - validate mode against available PBN for link */
>  	if (mode->clock < 10000)
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index fc2f1bc..b4bc002 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1416,6 +1416,7 @@ void intel_edp_panel_off(struct intel_dp *intel_dp);
>  void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
>  void intel_dp_mst_suspend(struct drm_device *dev);
>  void intel_dp_mst_resume(struct drm_device *dev);
> +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp);
>  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
>  int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
>  			     int link_rate);
> @@ -1446,6 +1447,8 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
>  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
>  			   uint8_t *link_bw, uint8_t *rate_select);
>  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
> +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates);
> +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp);
>  bool
>  intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
>  


^ permalink raw reply	[flat|nested] 56+ messages in thread
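The `intel_dp_mst_compute_config()` hunk quoted above picks the maximum trainable configuration by taking the last entry of the ascending common-rates array. That selection can be sketched on its own (the struct and function names below are illustrative, not the driver's actual types):

```c
/* Illustrative result type; the driver stores these fields in
 * intel_crtc_state instead. */
struct link_config {
	int lane_count;
	int port_clock;	/* link rate in kHz, e.g. 540000 for 5.4 GHz */
};

/* Pick the maximum trainable MST link configuration: common_rates is
 * sorted ascending, so its last entry is the highest rate supported by
 * both source and sink (or found by a successful upfront training
 * run). A zeroed config signals that no common rate exists. */
struct link_config pick_max_mst_config(const int *common_rates,
				       int common_len, int max_lanes)
{
	struct link_config cfg = { 0, 0 };

	if (common_len > 0) {
		cfg.lane_count = max_lanes;
		cfg.port_clock = common_rates[common_len - 1];
	}
	return cfg;
}
```

This mirrors why the patch replaces `intel_dp_max_link_rate()` with `common_rates[common_len - 1]`: after upfront training, the common-rates array already reflects what the link actually trained at, not just what the DPCD advertises.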

* Re: [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c
  2016-09-15  7:41   ` Mika Kahola
@ 2016-09-15 19:08     ` Manasi Navare
  0 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-15 19:08 UTC (permalink / raw)
  To: Mika Kahola; +Cc: intel-gfx

On Thu, Sep 15, 2016 at 10:41:23AM +0300, Mika Kahola wrote:
> On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> > These static helper functions are required to be used within upfront
> > link training related functions so they need to be placed at the top
> > of the file. It also changes macro dev to dev_priv.
> > 
> We could split this patch into two parts. One being moving around the
> helper functions and the other one cleanup patch to change dev in favor
> of dev_priv.
>

It was just one place that changed dev to dev_priv, but sure, I can
add a separate patch for that.

Manasi 
> > v2:
> > * Dont move around functions declared in intel_drv.h (Rodrigo Vivi)
> > 
> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_dp.c | 158 ++++++++++++++++++++--------------------
> >  1 file changed, 79 insertions(+), 79 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index 07f9a49..a319102 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -190,6 +190,81 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
> >  	return (max_link_clock * max_lanes * 8) / 10;
> >  }
> >  
> > +static int
> > +intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
> > +{
> > +	if (intel_dp->num_sink_rates) {
> > +		*sink_rates = intel_dp->sink_rates;
> > +		return intel_dp->num_sink_rates;
> > +	}
> > +
> > +	*sink_rates = default_rates;
> > +
> > +	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> > +}
> > +
> > +static int
> > +intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
> > +{
> > +	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> > +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
> > +	int size;
> > +
> > +	if (IS_BROXTON(dev_priv)) {
> > +		*source_rates = bxt_rates;
> > +		size = ARRAY_SIZE(bxt_rates);
> > +	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
> > +		*source_rates = skl_rates;
> > +		size = ARRAY_SIZE(skl_rates);
> > +	} else {
> > +		*source_rates = default_rates;
> > +		size = ARRAY_SIZE(default_rates);
> > +	}
> > +
> > +	/* This depends on the fact that 5.4 is last value in the array */
> > +	if (!intel_dp_source_supports_hbr2(intel_dp))
> > +		size--;
> > +
> > +	return size;
> > +}
> > +
> > +static int intersect_rates(const int *source_rates, int source_len,
> > +			   const int *sink_rates, int sink_len,
> > +			   int *common_rates)
> > +{
> > +	int i = 0, j = 0, k = 0;
> > +
> > +	while (i < source_len && j < sink_len) {
> > +		if (source_rates[i] == sink_rates[j]) {
> > +			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
> > +				return k;
> > +			common_rates[k] = source_rates[i];
> > +			++k;
> > +			++i;
> > +			++j;
> > +		} else if (source_rates[i] < sink_rates[j]) {
> > +			++i;
> > +		} else {
> > +			++j;
> > +		}
> > +	}
> > +	return k;
> > +}
> > +
> > +static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > +				 int *common_rates)
> > +{
> > +	const int *source_rates, *sink_rates;
> > +	int source_len, sink_len;
> > +
> > +	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> > +	source_len = intel_dp_source_rates(intel_dp, &source_rates);
> > +
> > +	return intersect_rates(source_rates, source_len,
> > +			       sink_rates, sink_len,
> > +			       common_rates);
> > +}
> > +
> >  static enum drm_mode_status
> >  intel_dp_mode_valid(struct drm_connector *connector,
> >  		    struct drm_display_mode *mode)
> > @@ -1256,60 +1331,22 @@ intel_dp_aux_init(struct intel_dp *intel_dp, struct intel_connector *connector)
> >  	intel_dp->aux.transfer = intel_dp_aux_transfer;
> >  }
> >  
> > -static int
> > -intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
> > -{
> > -	if (intel_dp->num_sink_rates) {
> > -		*sink_rates = intel_dp->sink_rates;
> > -		return intel_dp->num_sink_rates;
> > -	}
> > -
> > -	*sink_rates = default_rates;
> > -
> > -	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> > -}
> > -
> >  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
> >  {
> >  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> > -	struct drm_device *dev = dig_port->base.base.dev;
> > +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
> >  
> >  	/* WaDisableHBR2:skl */
> > -	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
> > +	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
> >  		return false;
> >  
> > -	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || IS_BROADWELL(dev) ||
> > -	    (INTEL_INFO(dev)->gen >= 9))
> > +	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
> > +	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
> >  		return true;
> >  	else
> >  		return false;
> >  }
> >  
> > -static int
> > -intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
> > -{
> > -	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> > -	struct drm_device *dev = dig_port->base.base.dev;
> > -	int size;
> > -
> > -	if (IS_BROXTON(dev)) {
> > -		*source_rates = bxt_rates;
> > -		size = ARRAY_SIZE(bxt_rates);
> > -	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> > -		*source_rates = skl_rates;
> > -		size = ARRAY_SIZE(skl_rates);
> > -	} else {
> > -		*source_rates = default_rates;
> > -		size = ARRAY_SIZE(default_rates);
> > -	}
> > -
> > -	/* This depends on the fact that 5.4 is last value in the array */
> > -	if (!intel_dp_source_supports_hbr2(intel_dp))
> > -		size--;
> > -
> > -	return size;
> > -}
> > -
> >  static void
> >  intel_dp_set_clock(struct intel_encoder *encoder,
> >  		   struct intel_crtc_state *pipe_config)
> > @@ -1343,43 +1380,6 @@ intel_dp_set_clock(struct intel_encoder *encoder,
> >  	}
> >  }
> >  
> > -static int intersect_rates(const int *source_rates, int source_len,
> > -			   const int *sink_rates, int sink_len,
> > -			   int *common_rates)
> > -{
> > -	int i = 0, j = 0, k = 0;
> > -
> > -	while (i < source_len && j < sink_len) {
> > -		if (source_rates[i] == sink_rates[j]) {
> > -			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
> > -				return k;
> > -			common_rates[k] = source_rates[i];
> > -			++k;
> > -			++i;
> > -			++j;
> > -		} else if (source_rates[i] < sink_rates[j]) {
> > -			++i;
> > -		} else {
> > -			++j;
> > -		}
> > -	}
> > -	return k;
> > -}
> > -
> > -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > -				 int *common_rates)
> > -{
> > -	const int *source_rates, *sink_rates;
> > -	int source_len, sink_len;
> > -
> > -	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> > -	source_len = intel_dp_source_rates(intel_dp, &source_rates);
> > -
> > -	return intersect_rates(source_rates, source_len,
> > -			       sink_rates, sink_len,
> > -			       common_rates);
> > -}
> > -
> >  static void snprintf_int_array(char *str, size_t len,
> >  			       const int *array, int nelem)
> >  {
> -- 
> Mika Kahola - Intel OTC
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread
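The `intel_dp_sink_rates()` helper moved in the patch above leans on a small encoding trick: the DPCD max link-bandwidth codes (0x06 for 1.62 Gbps, 0x0a for 2.7 Gbps, 0x14 for 5.4 Gbps) shift right by three to give 0, 1, and 2, which index directly into the ascending `default_rates` table. A sketch of that mapping (the `DP_LINK_BW_*` values match the DP spec; everything else is illustrative):

```c
/* DPCD max link-bandwidth codes, per the DisplayPort spec. */
#define DP_LINK_BW_1_62	0x06
#define DP_LINK_BW_2_7	0x0a
#define DP_LINK_BW_5_4	0x14

/* One entry per bandwidth code, ascending, in kHz. */
const int default_rates[] = { 162000, 270000, 540000 };

/* Number of usable entries in default_rates for a given max-bw code:
 * 0x06 >> 3 == 0, 0x0a >> 3 == 1, 0x14 >> 3 == 2, so shifting by
 * three and adding one yields the entry count. */
int sink_rate_count(int max_link_bw)
{
	return (max_link_bw >> 3) + 1;
}
```

This is why the quoted function can return `(intel_dp_max_link_bw(intel_dp) >> 3) + 1` as a length rather than looking the code up in a table.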

* Re: [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST
  2016-09-15 17:48   ` Pandiyan, Dhinakaran
@ 2016-09-15 19:25     ` Manasi Navare
  2016-09-19 17:03       ` Jim Bride
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-15 19:25 UTC (permalink / raw)
  To: Pandiyan, Dhinakaran; +Cc: intel-gfx

On Thu, Sep 15, 2016 at 10:48:17AM -0700, Pandiyan, Dhinakaran wrote:
> On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> > From: Jim Bride <jim.bride@linux.intel.com>
> > 
> > Add upfront link training to intel_dp_mst_mode_valid() so that we know
> > topology constraints before we validate the legality of modes to be
> > checked.
> > 
> 
> The patch seems to do a lot more things than just what is described
> here. I guess, it would be better to split this into multiple patches or
> at least provide adequate description here.
> 

I think the only other thing it's doing is making some functions
non-static so they can be used for MST upfront enabling. But I think
that can stay in the same patch since it is done in order to enable
upfront link training for MST.
Jim, any thoughts?


> > v3:
> > * Reset the upfront values but dont unset the EDID for MST. (Manasi)
> > v2:
> > * Rebased on new revision of link training patch. (Manasi)
> > 
> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_dp.c     | 15 ++++----
> >  drivers/gpu/drm/i915/intel_dp_mst.c | 74 +++++++++++++++++++++++++++----------
> >  drivers/gpu/drm/i915/intel_drv.h    |  3 ++
> >  3 files changed, 64 insertions(+), 28 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index 9042d28..635830e 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -131,7 +131,7 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
> >  				      enum pipe pipe);
> >  static void intel_dp_unset_edid(struct intel_dp *intel_dp);
> >  
> > -static int
> > +int
> >  intel_dp_max_link_bw(struct intel_dp  *intel_dp)
> >  {
> >  	int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
> > @@ -150,7 +150,7 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
> >  	return max_link_bw;
> >  }
> >  
> > -static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> > +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> >  {
> >  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> >  	u8 temp, source_max, sink_max;
> > @@ -296,8 +296,7 @@ static int intersect_rates(const int *source_rates, int source_len,
> >  	return k;
> >  }
> >  
> > -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > -				 int *common_rates)
> > +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates)
> >  {
> >  	const int *source_rates, *sink_rates;
> >  	int source_len, sink_len;
> > @@ -321,7 +320,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
> >  			       common_rates);
> >  }
> >  
> > -static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> > +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> >  {
> >  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> >  	struct intel_encoder *intel_encoder = &intel_dig_port->base;
> > @@ -4545,12 +4544,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
> >  	}
> >  
> >  out:
> > -	if ((status != connector_status_connected) &&
> > -	    (intel_dp->is_mst == false)) {
> > -		intel_dp_unset_edid(intel_dp);
> > +	if (status != connector_status_connected) {
> >  		intel_dp->upfront_done = false;
> >  		intel_dp->max_lanes_upfront = 0;
> >  		intel_dp->max_link_rate_upfront = 0;
> > +		if (intel_dp->is_mst == false)
> > +			intel_dp_unset_edid(intel_dp);
> >  	}
> >  
> >  	intel_display_power_put(to_i915(dev), power_domain);
> > diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
> > index 54a9d76..98d45a4 100644
> > --- a/drivers/gpu/drm/i915/intel_dp_mst.c
> > +++ b/drivers/gpu/drm/i915/intel_dp_mst.c
> > @@ -41,21 +41,30 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
> >  	int bpp;
> >  	int lane_count, slots;
> >  	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> > -	int mst_pbn;
> > +	int mst_pbn, common_len;
> > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> >  
> >  	pipe_config->dp_encoder_is_mst = true;
> >  	pipe_config->has_pch_encoder = false;
> > -	bpp = 24;
> > +
> >  	/*
> > -	 * for MST we always configure max link bw - the spec doesn't
> > -	 * seem to suggest we should do otherwise.
> > +	 * For MST we always configure for the maximum trainable link bw -
> > +	 * the spec doesn't seem to suggest we should do otherwise.  The
> > +	 * calls to intel_dp_max_lane_count() and intel_dp_common_rates()
> > +	 * both take successful upfront link training into account, and
> > +	 * return the DisplayPort max supported values in the event that
> > +	 * upfront link training was not done.
> >  	 */
> > -	lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
> > +	lane_count = intel_dp_max_lane_count(intel_dp);
> >  
> >  	pipe_config->lane_count = lane_count;
> >  
> > -	pipe_config->pipe_bpp = 24;
> > -	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
> > +	pipe_config->pipe_bpp = bpp = 24;
> > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> > +	pipe_config->port_clock = common_rates[common_len - 1];
> > +
> > +	DRM_DEBUG_KMS("DP MST link configured for %d lanes @ %d.\n",
> > +		      pipe_config->lane_count, pipe_config->port_clock);
> >  
> >  	state = pipe_config->base.state;
> >  
> > @@ -137,6 +146,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
> >  	enum port port = intel_dig_port->port;
> >  	struct intel_connector *connector =
> >  		to_intel_connector(conn_state->connector);
> > +	struct intel_shared_dpll *pll = pipe_config->shared_dpll;
> > +	struct intel_shared_dpll_config tmp_pll_config;
> >  	int ret;
> >  	uint32_t temp;
> >  	int slots;
> > @@ -150,21 +161,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
> >  	DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
> >  
> >  	if (intel_dp->active_mst_links == 0) {
> > -		intel_ddi_clk_select(&intel_dig_port->base,
> > -				     pipe_config->shared_dpll);
> > -
> > -		intel_prepare_dp_ddi_buffers(&intel_dig_port->base);
> > -		intel_dp_set_link_params(intel_dp,
> > -					 pipe_config->port_clock,
> > -					 pipe_config->lane_count,
> > -					 true);
> > -
> > -		intel_ddi_init_dp_buf_reg(&intel_dig_port->base);
> >  
> > -		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> > +		/* Disable the PLL since we need to acquire the PLL
> > +		 * based on the link rate in the link training sequence
> > +		 */
> > +		tmp_pll_config = pll->config;
> > +		pll->funcs.disable(dev_priv, pll);
> > +		pll->config.crtc_mask = 0;
> > +
> > +		/* If Link Training fails, send a uevent to generate a
> > +		 * hotplug
> > +		 */
> > +		if (!(intel_ddi_link_train(intel_dp, pipe_config->port_clock,
> > +					   pipe_config->lane_count, true,
> > +					   false)))
> > +			drm_kms_helper_hotplug_event(encoder->base.dev);
> > +		pll->config = tmp_pll_config;
> >  
> > -		intel_dp_start_link_train(intel_dp);
> > -		intel_dp_stop_link_train(intel_dp);
> >  	}
> >  
> >  	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
> > @@ -336,6 +349,27 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
> >  			struct drm_display_mode *mode)
> >  {
> >  	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
> > +	struct intel_connector *intel_connector = to_intel_connector(connector);
> > +	struct intel_dp *intel_dp = intel_connector->mst_port;
> > +
> > +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
> > +		bool do_upfront_link_train;
> > +
> > +		do_upfront_link_train = intel_dp->compliance_test_type !=
> > +			DP_TEST_LINK_TRAINING;
> > +		if (do_upfront_link_train) {
> > +			intel_dp->upfront_done =
> > +				intel_dp_upfront_link_train(intel_dp);
> > +			if (intel_dp->upfront_done) {
> > +				DRM_DEBUG_KMS("MST upfront trained at "
> > +					      "%d lanes @ %d.",
> > +					      intel_dp->max_lanes_upfront,
> > +					      intel_dp->max_link_rate_upfront);
> > +			} else
> > +				DRM_DEBUG_KMS("MST upfront link training "
> > +					      "failed.");
> > +		}
> > +	}
> >  
> >  	/* TODO - validate mode against available PBN for link */
> >  	if (mode->clock < 10000)
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index fc2f1bc..b4bc002 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -1416,6 +1416,7 @@ void intel_edp_panel_off(struct intel_dp *intel_dp);
> >  void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
> >  void intel_dp_mst_suspend(struct drm_device *dev);
> >  void intel_dp_mst_resume(struct drm_device *dev);
> > +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp);
> >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> >  int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> >  			     int link_rate);
> > @@ -1446,6 +1447,8 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
> >  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
> >  			   uint8_t *link_bw, uint8_t *rate_select);
> >  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
> > +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates);
> > +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp);
> >  bool
> >  intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
> >  
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread
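The gating logic the patch adds to `intel_dp_mst_mode_valid()` trains upfront at most once per detection cycle and skips training entirely while a DP compliance link-training test is requested. That decision can be sketched in isolation (the struct below is an illustrative stand-in for the relevant `intel_dp` fields, and `TEST_LINK_TRAINING` stands in for `DP_TEST_LINK_TRAINING`):

```c
#include <stdbool.h>

/* Illustrative stand-in for the fields intel_dp carries. */
struct dp_state {
	bool upfront_link_train;	/* feature enabled on this port */
	bool upfront_done;		/* already trained this cycle */
	int compliance_test_type;	/* 0 when no test is requested */
};

#define TEST_LINK_TRAINING 1	/* stand-in for DP_TEST_LINK_TRAINING */

/* Train upfront only when the feature is on, we have not trained yet
 * this detection cycle, and no compliance link-training test is in
 * progress (the test harness must drive training itself). */
bool should_train_upfront(const struct dp_state *dp)
{
	if (!dp->upfront_link_train || dp->upfront_done)
		return false;
	return dp->compliance_test_type != TEST_LINK_TRAINING;
}
```

The `upfront_done` flag is what keeps repeated `.mode_valid()` calls cheap; it is reset in `intel_dp_long_pulse()` when the connector disconnects, as the patch's `out:` hunk shows.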

* Re: [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-14  8:15   ` Mika Kahola
@ 2016-09-15 19:56     ` Manasi Navare
  0 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-15 19:56 UTC (permalink / raw)
  To: Mika Kahola; +Cc: intel-gfx

On Wed, Sep 14, 2016 at 11:15:13AM +0300, Mika Kahola wrote:
> On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> > According to the DisplayPort Spec, in case of Clock Recovery failure
> > the link training sequence should fall back to the lower link rate
> > followed by lower lane count until CR succeeds.
> > On CR success, the sequence proceeds with Channel EQ.
> > In case of Channel EQ failures, it should fallback to
> > lower link rate and lane count and start the CR phase again.
> > 
> > v5:
> > * Reset the link rate index to the max link rate index
> > before lowering the lane count (Jani Nikula)
> > * Use the paradigm for loop in intel_dp_link_rate_index
> > v4:
> > * Fixed the link rate fallback loop (Manasi Navare)
> > v3:
> > * Fixed some rebase issues (Mika Kahola)
> > v2:
> > * Add a helper function to return index of requested link rate
> > into common_rates array
> > * Changed the link rate fallback loop to make use
> > of common_rates array (Mika Kahola)
> > * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
> > 
> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_ddi.c              | 112 +++++++++++++++++++++---
> >  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
> >  drivers/gpu/drm/i915/intel_dp_link_training.c |  12 ++-
> >  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
> >  4 files changed, 131 insertions(+), 14 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
> > index 8065a5f..4d3a931 100644
> > --- a/drivers/gpu/drm/i915/intel_ddi.c
> > +++ b/drivers/gpu/drm/i915/intel_ddi.c
> > @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
> >  	}
> >  }
> >  
> > -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> > +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
> >  				    int link_rate, uint32_t lane_count,
> > -				    struct intel_shared_dpll *pll,
> > -				    bool link_mst)
> > +				    struct intel_shared_dpll *pll)
> >  {
> >  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> >  	enum port port = intel_ddi_get_encoder_port(encoder);
> >  
> >  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
> > -				 link_mst);
> > -	if (encoder->type == INTEL_OUTPUT_EDP)
> > -		intel_edp_panel_on(intel_dp);
> > +				 false);
> > +
> > +	intel_edp_panel_on(intel_dp);
> >  
> >  	intel_ddi_clk_select(encoder, pll);
> >  	intel_prepare_dp_ddi_buffers(encoder);
> > @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> >  		intel_dp_stop_link_train(intel_dp);
> >  }
> >  
> > +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> > +				    int link_rate, uint32_t lane_count,
> > +				    struct intel_shared_dpll *pll,
> > +				    bool link_mst)
> > +{
> > +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > +	struct intel_shared_dpll_config tmp_pll_config;
> > +
> > +	/* Disable the PLL and obtain the PLL for Link Training
> > +	 * that starts with highest link rate and lane count.
> > +	 */
> > +	tmp_pll_config = pll->config;
> > +	pll->funcs.disable(dev_priv, pll);
> > +	pll->config.crtc_mask = 0;
> > +
> > +	/* If Link Training fails, send a uevent to generate a hotplug */
> > +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
> > +		drm_kms_helper_hotplug_event(encoder->base.dev);
> > +	pll->config = tmp_pll_config;
> > +}
> > +
> >  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
> >  				      bool has_hdmi_sink,
> >  				      struct drm_display_mode *adjusted_mode,
> > @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
> >  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
> >  	int type = intel_encoder->type;
> >  
> > -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
> > +	if (type == INTEL_OUTPUT_EDP)
> > +		intel_ddi_pre_enable_edp(intel_encoder,
> > +					crtc->config->port_clock,
> > +					crtc->config->lane_count,
> > +					crtc->config->shared_dpll);
> > +
> > +	if (type == INTEL_OUTPUT_DP)
> >  		intel_ddi_pre_enable_dp(intel_encoder,
> >  					crtc->config->port_clock,
> >  					crtc->config->lane_count,
> >  					crtc->config->shared_dpll,
> >  					intel_crtc_has_type(crtc->config,
> >  							    INTEL_OUTPUT_DP_MST));
> > -	}
> > -	if (type == INTEL_OUTPUT_HDMI) {
> > +
> > +	if (type == INTEL_OUTPUT_HDMI)
> >  		intel_ddi_pre_enable_hdmi(intel_encoder,
> >  					  crtc->config->has_hdmi_sink,
> >  					  &crtc->config->base.adjusted_mode,
> >  					  crtc->config->shared_dpll);
> > -	}
> > +
> >  }
> >  
> >  static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
> > @@ -2435,6 +2462,71 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
> >  	return pll;
> >  }
> >  
> > +bool
> > +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> > +		     uint8_t max_lane_count, bool link_mst)
> > +{
> > +	struct intel_connector *connector = intel_dp->attached_connector;
> > +	struct intel_encoder *encoder = connector->encoder;
> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > +	struct intel_shared_dpll *pll;
> > +	struct intel_shared_dpll_config tmp_pll_config;
> > +	int link_rate, max_link_rate_index, link_rate_index;
> > +	uint8_t lane_count;
> > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> > +	bool ret = false;
> > +
> > +	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
> > +						   max_link_rate);
> > +	if (max_link_rate_index < 0) {
> > +		DRM_ERROR("Invalid Link Rate\n");
> > +		return false;
> > +	}
> > +	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
> > +		for (link_rate_index = max_link_rate_index;
> > +		     link_rate_index >= 0; link_rate_index--) {
> > +			link_rate = common_rates[link_rate_index];
> > +			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
> > +			if (pll == NULL) {
> > +				DRM_ERROR("Could not find DPLL for
> > link "
> > +					  "training.\n");
> checkpatch.pl gives a warning:
> 
> WARNING: quoted string split across lines
> #233: FILE: drivers/gpu/drm/i915/intel_ddi.c:2492:
> +                               DRM_ERROR("Could not find DPLL for link "
> +                                         "training.\n");
> 
> I think we could put this error message into a single line. In this
> case, the tool warns you on exceeding the 80 character limit but we
> break that rule here and there in our driver anyway.

Yes, I had a chat with Rodrigo and Jani about the same thing earlier.
Even if I keep it on one line, checkpatch warns about the line being
over 80 chars. Which of these warnings can be safely ignored, and what
is the general convention - keeping the line longer than 80 chars or
splitting the quoted string?

Manasi

> 
> > +				return false;
> > +			}
> > +			tmp_pll_config = pll->config;
> > +			pll->funcs.enable(dev_priv, pll);
> > +
> > +			intel_dp_set_link_params(intel_dp, link_rate,
> > +						 lane_count, link_mst);
> > +
> > +			intel_ddi_clk_select(encoder, pll);
> > +			intel_prepare_dp_ddi_buffers(encoder);
> > +			intel_ddi_init_dp_buf_reg(encoder);
> > +			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> > +			ret = intel_dp_start_link_train(intel_dp);
> > +			if (ret)
> > +				break;
> > +
> > +			/* Disable port followed by PLL for next
> > +			 *retry/clean up
> > +			 */
> > +			intel_ddi_post_disable(encoder, NULL, NULL);
> > +			pll->funcs.disable(dev_priv, pll);
> > +			pll->config = tmp_pll_config;
> > +		}
> > +		if (ret) {
> > +			DRM_DEBUG_KMS("Link Training successful at
> > link rate: "
> > +				      "%d lane:%d\n", link_rate,
> > lane_count);
> Same thing here. Maybe
> 
> DRM_DEBUG_KMS("Link Training successful at link rate: %d lane:%d\n", 	
>               link_rate, lane_count);
> 
> > 
> > +		}
> > +	}
> > +	intel_dp_stop_link_train(intel_dp);
> > +
> > +	if (!lane_count)
> > +		DRM_ERROR("Link Training Failed\n");
> > +
> > +	return ret;
> > +}
> > +
> >  void intel_ddi_init(struct drm_device *dev, enum port port)
> >  {
> >  	struct drm_i915_private *dev_priv = to_i915(dev);
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index 75ac62f..bb9df1e 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -1443,6 +1443,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
> >  	return rates[len - 1];
> >  }
> >  
> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> > +			     int link_rate)
> > +{
> > +	int common_len;
> > +	int index;
> > +
> > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> > +	for (index = 0; index < common_len; index++) {
> > +		if (link_rate == common_rates[common_len - index - 1])
> > +			return common_len - index - 1;
> > +	}
> > +
> > +	return -1;
> > +}
> > +
> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
> >  {
> >  	return rate_to_index(rate, intel_dp->sink_rates);
> > diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> > index c438b02..f1e08f0 100644
> > --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> > +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> > @@ -313,9 +313,15 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
> >  				DP_TRAINING_PATTERN_DISABLE);
> >  }
> >  
> > -void
> > +bool
> >  intel_dp_start_link_train(struct intel_dp *intel_dp)
> >  {
> > -	intel_dp_link_training_clock_recovery(intel_dp);
> > -	intel_dp_link_training_channel_equalization(intel_dp);
> > +	bool ret;
> > +
> > +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
> > +		ret = intel_dp_link_training_channel_equalization(intel_dp);
> > +		if (ret)
> > +			return true;
> > +	}
> > +	return false;
> >  }
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index abe7a4d..69c8051 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -1160,6 +1160,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
> >  			 struct intel_crtc_state *pipe_config);
> >  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
> >  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
> > +bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> > +			  uint8_t max_lane_count, bool link_mst);
> >  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
> >  						  int clock);
> >  unsigned int intel_fb_align_height(struct drm_device *dev,
> > @@ -1381,7 +1383,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
> >  void intel_dp_set_link_params(struct intel_dp *intel_dp,
> >  			      int link_rate, uint8_t lane_count,
> >  			      bool link_mst);
> > -void intel_dp_start_link_train(struct intel_dp *intel_dp);
> > +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
> >  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
> >  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
> >  void intel_dp_encoder_reset(struct drm_encoder *encoder);
> > @@ -1403,6 +1405,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
> >  void intel_dp_mst_suspend(struct drm_device *dev);
> >  void intel_dp_mst_resume(struct drm_device *dev);
> >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> > +			     int link_rate);
> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
> >  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
> >  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
> -- 
> Mika Kahola - Intel OTC
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* [PATCH 0/6] Remaining patches for upfront link training on DDI platforms
  2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
                   ` (5 preceding siblings ...)
  2016-09-14  5:38 ` ✓ Fi.CI.BAT: success for Remaining patches for upfront link training on DDI platforms Patchwork
@ 2016-09-16  0:03 ` Manasi Navare
  2016-09-16  0:03   ` [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
                     ` (9 more replies)
  6 siblings, 10 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:03 UTC (permalink / raw)
  To: intel-gfx

This patch series includes some of the remaining patches to enable
upfront link training on DDI platforms for DP SST and MST.
They are based on some of the patches submitted earlier by
Ander and Durgadoss.

The upfront link training had to be factored out of the long pulse
handler because of deadlock issues seen in DP MST cases.
Now the upfront link training takes place in intel_dp_mode_valid()
to find the maximum lane count and link rate at which the DP link
can be successfully trained. These values are used to prune the
invalid modes before modeset. Modeset makes use of the upfront
lane count and link rate values.

These patches have been validated for DP SST and DP MST on DDI
platforms.

The existing implementation of link training does not implement the
link rate/lane count fallback required by the DP spec.
This patch series implements a fallback loop that lowers the link rate
and lane count on CR and/or Channel EQ failures during link training.
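The fallback order described above (all link rates at the current lane count, highest rate first, then halve the lane count and start over from the highest rate) can be sketched as a small standalone model. This is an illustrative sketch, not the driver code; the function and struct names are hypothetical, and the rate table in the test is just a typical DP example:

```c
#include <assert.h>

/* Hypothetical standalone model of the DP-spec fallback order used by
 * the series: try every link rate (highest first) at the current lane
 * count, then halve the lane count and restart from the highest rate.
 * rates[] is sorted ascending, like the driver's common_rates array.
 * Fills attempts[] with the (lane_count, link_rate) order and returns
 * the number of attempts generated.
 */
struct dp_attempt {
	int lane_count;
	int link_rate_khz;
};

static int dp_fallback_order(const int *rates, int num_rates,
			     int max_lane_count,
			     struct dp_attempt *attempts, int max_attempts)
{
	int n = 0;

	for (int lanes = max_lane_count; lanes > 0; lanes >>= 1) {
		for (int i = num_rates - 1; i >= 0; i--) {
			if (n == max_attempts)
				return n;
			attempts[n].lane_count = lanes;
			attempts[n].link_rate_khz = rates[i];
			n++;
		}
	}
	return n;
}
```

With three common rates and four lanes this yields nine attempts, starting at 4 lanes / HBR2 and ending at 1 lane / RBR, which is the order the series' training loop walks on repeated failures.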

Jim Bride (1):
  drm/i915/dp/mst: Add support for upfront link training for DP MST

Manasi Navare (5):
  drm/i915: Fallback to lower link rate and lane count during link
    training
  drm/i915: Remove the link rate and lane count loop in compute config
  drm/i915: Change the placement of some static functions in intel_dp.c
  drm/i915: Code cleanup to use dev_priv and INTEL_GEN
  drm/i915/dp: Enable Upfront link training on DDI platforms

 drivers/gpu/drm/i915/intel_ddi.c              | 128 ++++++++-
 drivers/gpu/drm/i915/intel_dp.c               | 388 +++++++++++++++++++-------
 drivers/gpu/drm/i915/intel_dp_link_training.c |  13 +-
 drivers/gpu/drm/i915/intel_dp_mst.c           |  72 +++--
 drivers/gpu/drm/i915/intel_drv.h              |  21 +-
 5 files changed, 488 insertions(+), 134 deletions(-)

-- 
1.9.1


^ permalink raw reply	[flat|nested] 56+ messages in thread

* [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
@ 2016-09-16  0:03   ` Manasi Navare
  2016-09-16  9:29     ` Mika Kahola
  2016-09-16 18:45     ` [PATCH v7 " Manasi Navare
  2016-09-16  0:04   ` [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
                     ` (8 subsequent siblings)
  9 siblings, 2 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:03 UTC (permalink / raw)
  To: intel-gfx

According to the DisplayPort spec, in case of Clock Recovery failure
the link training sequence should fall back to a lower link rate
followed by a lower lane count until CR succeeds.
On CR success, the sequence proceeds with Channel EQ.
In case of Channel EQ failures, it should fall back to a
lower link rate and lane count and start the CR phase again.
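The v2 changelog below mentions a helper that returns the index of a requested link rate in the common_rates array; the fallback loop uses that index as its starting point. A minimal standalone sketch of that lookup (names and the inlined rate table are illustrative, not the driver's):

```c
#include <assert.h>

/* Illustrative version of the index lookup: common_rates[] is sorted
 * ascending; return the index of link_rate within it, or -1 if the
 * rate is not in the table. Scans from the highest rate downwards,
 * mirroring how the fallback loop consumes the table.
 */
static int link_rate_index(const int *common_rates, int common_len,
			   int link_rate)
{
	for (int index = common_len - 1; index >= 0; index--) {
		if (common_rates[index] == link_rate)
			return index;
	}
	return -1;
}
```

The -1 return corresponds to the "Invalid Link Rate" error path in the patch: if the requested maximum rate is not a source/sink common rate, training cannot even start.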

v6:
* Do not split quoted string across line (Mika Kahola)
v5:
* Reset the link rate index to the max link rate index
before lowering the lane count (Jani Nikula)
* Use the paradigm for loop in intel_dp_link_rate_index
v4:
* Fixed the link rate fallback loop (Manasi Navare)
v3:
* Fixed some rebase issues (Mika Kahola)
v2:
* Add a helper function to return index of requested link rate
into common_rates array
* Changed the link rate fallback loop to make use
of common_rates array (Mika Kahola)
* Changed INTEL_INFO to INTEL_GEN (David Weinehall)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_ddi.c              | 111 +++++++++++++++++++++++---
 drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
 drivers/gpu/drm/i915/intel_dp_link_training.c |  12 ++-
 drivers/gpu/drm/i915/intel_drv.h              |   6 +-
 4 files changed, 130 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index 8065a5f..826d9f7 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
 	}
 }
 
-static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
 				    int link_rate, uint32_t lane_count,
-				    struct intel_shared_dpll *pll,
-				    bool link_mst)
+				    struct intel_shared_dpll *pll)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	enum port port = intel_ddi_get_encoder_port(encoder);
 
 	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
-				 link_mst);
-	if (encoder->type == INTEL_OUTPUT_EDP)
-		intel_edp_panel_on(intel_dp);
+				 false);
+
+	intel_edp_panel_on(intel_dp);
 
 	intel_ddi_clk_select(encoder, pll);
 	intel_prepare_dp_ddi_buffers(encoder);
@@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 		intel_dp_stop_link_train(intel_dp);
 }
 
+static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+				    int link_rate, uint32_t lane_count,
+				    struct intel_shared_dpll *pll,
+				    bool link_mst)
+{
+	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_shared_dpll_config tmp_pll_config;
+
+	/* Disable the PLL and obtain the PLL for Link Training
+	 * that starts with highest link rate and lane count.
+	 */
+	tmp_pll_config = pll->config;
+	pll->funcs.disable(dev_priv, pll);
+	pll->config.crtc_mask = 0;
+
+	/* If Link Training fails, send a uevent to generate a hotplug */
+	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
+		drm_kms_helper_hotplug_event(encoder->base.dev);
+	pll->config = tmp_pll_config;
+}
+
 static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
 				      bool has_hdmi_sink,
 				      struct drm_display_mode *adjusted_mode,
@@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
 	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
 	int type = intel_encoder->type;
 
-	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
+	if (type == INTEL_OUTPUT_EDP)
+		intel_ddi_pre_enable_edp(intel_encoder,
+					crtc->config->port_clock,
+					crtc->config->lane_count,
+					crtc->config->shared_dpll);
+
+	if (type == INTEL_OUTPUT_DP)
 		intel_ddi_pre_enable_dp(intel_encoder,
 					crtc->config->port_clock,
 					crtc->config->lane_count,
 					crtc->config->shared_dpll,
 					intel_crtc_has_type(crtc->config,
 							    INTEL_OUTPUT_DP_MST));
-	}
-	if (type == INTEL_OUTPUT_HDMI) {
+
+	if (type == INTEL_OUTPUT_HDMI)
 		intel_ddi_pre_enable_hdmi(intel_encoder,
 					  crtc->config->has_hdmi_sink,
 					  &crtc->config->base.adjusted_mode,
 					  crtc->config->shared_dpll);
-	}
+
 }
 
 static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
@@ -2435,6 +2462,70 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
 	return pll;
 }
 
+bool
+intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
+		     uint8_t max_lane_count, bool link_mst)
+{
+	struct intel_connector *connector = intel_dp->attached_connector;
+	struct intel_encoder *encoder = connector->encoder;
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_shared_dpll *pll;
+	struct intel_shared_dpll_config tmp_pll_config;
+	int link_rate, max_link_rate_index, link_rate_index;
+	uint8_t lane_count;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+	bool ret = false;
+
+	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
+						   max_link_rate);
+	if (max_link_rate_index < 0) {
+		DRM_ERROR("Invalid Link Rate\n");
+		return false;
+	}
+	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
+		for (link_rate_index = max_link_rate_index;
+		     link_rate_index >= 0; link_rate_index--) {
+			link_rate = common_rates[link_rate_index];
+			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
+			if (pll == NULL) {
+				DRM_ERROR("Could not find DPLL for link training.\n");
+				return false;
+			}
+			tmp_pll_config = pll->config;
+			pll->funcs.enable(dev_priv, pll);
+
+			intel_dp_set_link_params(intel_dp, link_rate,
+						 lane_count, link_mst);
+
+			intel_ddi_clk_select(encoder, pll);
+			intel_prepare_dp_ddi_buffers(encoder);
+			intel_ddi_init_dp_buf_reg(encoder);
+			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+			ret = intel_dp_start_link_train(intel_dp);
+			if (ret)
+				break;
+
+			/* Disable port followed by PLL for next
+			 *retry/clean up
+			 */
+			intel_ddi_post_disable(encoder, NULL, NULL);
+			pll->funcs.disable(dev_priv, pll);
+			pll->config = tmp_pll_config;
+		}
+		if (ret) {
+			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
+				      link_rate, lane_count);
+			break;
+		}
+	}
+	intel_dp_stop_link_train(intel_dp);
+
+	if (!lane_count)
+		DRM_ERROR("Link Training Failed\n");
+
+	return ret;
+}
+
 void intel_ddi_init(struct drm_device *dev, enum port port)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 69cee9b..d81c67cb 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
 	return rates[len - 1];
 }
 
+int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
+			     int link_rate)
+{
+	int common_len;
+	int index;
+
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	for (index = 0; index < common_len; index++) {
+		if (link_rate == common_rates[common_len - index - 1])
+			return common_len - index - 1;
+	}
+
+	return -1;
+}
+
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
 {
 	return rate_to_index(rate, intel_dp->sink_rates);
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index c438b02..f1e08f0 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -313,9 +313,15 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
 				DP_TRAINING_PATTERN_DISABLE);
 }
 
-void
+bool
 intel_dp_start_link_train(struct intel_dp *intel_dp)
 {
-	intel_dp_link_training_clock_recovery(intel_dp);
-	intel_dp_link_training_channel_equalization(intel_dp);
+	bool ret;
+
+	if (intel_dp_link_training_clock_recovery(intel_dp)) {
+		ret = intel_dp_link_training_channel_equalization(intel_dp);
+		if (ret)
+			return true;
+	}
+	return false;
 }
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 8fd16ad..08cb571 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
 			 struct intel_crtc_state *pipe_config);
 void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
 uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
+bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
+			  uint8_t max_lane_count, bool link_mst);
 struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
 						  int clock);
 unsigned int intel_fb_align_height(struct drm_device *dev,
@@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 void intel_dp_set_link_params(struct intel_dp *intel_dp,
 			      int link_rate, uint8_t lane_count,
 			      bool link_mst);
-void intel_dp_start_link_train(struct intel_dp *intel_dp);
+bool intel_dp_start_link_train(struct intel_dp *intel_dp);
 void intel_dp_stop_link_train(struct intel_dp *intel_dp);
 void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
 void intel_dp_encoder_reset(struct drm_encoder *encoder);
@@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
 void intel_dp_mst_suspend(struct drm_device *dev);
 void intel_dp_mst_resume(struct drm_device *dev);
 int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
+			     int link_rate);
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
 void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
 void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
  2016-09-16  0:03   ` [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
@ 2016-09-16  0:04   ` Manasi Navare
  2016-09-26 13:41     ` Jani Nikula
  2016-09-16  0:04   ` [PATCH v3 3/6] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
                     ` (7 subsequent siblings)
  9 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:04 UTC (permalink / raw)
  To: intel-gfx

While configuring the pipe during modeset, it should use the
max clock and max lane count and reduce the bpp until
the requested mode rate is less than or equal to the
available link bandwidth.
This is required to pass DP Compliance.
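The check this patch reduces the loop to can be sketched as follows, using the usual DP bandwidth arithmetic (mode rate = pixel clock × bpp / 8; available link bandwidth = link clock × lanes × 8/10 for 8b/10b coding). This is a simplified standalone sketch with hypothetical names, and the clocks in the test are only illustrative examples:

```c
#include <assert.h>

/* Sketch of the simplified compute-config check: keep the maximum
 * link clock and lane count fixed and step bpp down (in steps of 6,
 * as the driver does) until the mode's data rate fits the link.
 * Clocks are in kHz. Returns the chosen bpp, or -1 if even the
 * minimum of 18 bpp (6 bits per component) does not fit.
 */
static int pick_bpp(int crtc_clock_khz, int max_bpp,
		    int link_clock_khz, int lane_count)
{
	/* 8b/10b channel coding: 8 data bits per 10 link bits */
	int link_avail = link_clock_khz * lane_count * 8 / 10;

	for (int bpp = max_bpp; bpp >= 6 * 3; bpp -= 2 * 3) {
		int mode_rate = crtc_clock_khz * bpp / 8;

		if (mode_rate <= link_avail)
			return bpp;
	}
	return -1;
}
```

For example, a ~533 MHz 4K-class pixel clock on an HBR2 x4 link (1,728,000 kB/s available) cannot carry 30 bpp (1,998,750 kB/s) but fits at 24 bpp (1,599,000 kB/s), so the function settles on 24.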

v3:
* Add Debug print if requested mode cannot be supported
during modeset (Dhinakaran Pandiyan)
v2:
* Removed the loop since we use max values of clock
and lane count (Dhinakaran Pandiyan)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index d81c67cb..65b4559 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	for (; bpp >= 6*3; bpp -= 2*3) {
 		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
 						   bpp);
+		clock = max_clock;
+		lane_count = max_lane_count;
+		link_clock = common_rates[clock];
+		link_avail = intel_dp_max_data_rate(link_clock,
+						    lane_count);
 
-		for (clock = min_clock; clock <= max_clock; clock++) {
-			for (lane_count = min_lane_count;
-				lane_count <= max_lane_count;
-				lane_count <<= 1) {
-
-				link_clock = common_rates[clock];
-				link_avail = intel_dp_max_data_rate(link_clock,
-								    lane_count);
-
-				if (mode_rate <= link_avail) {
-					goto found;
-				}
-			}
-		}
+		if (mode_rate <= link_avail)
+			goto found;
 	}
 
+	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
 	return false;
 
 found:
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v3 3/6] drm/i915: Change the placement of some static functions in intel_dp.c
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
  2016-09-16  0:03   ` [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
  2016-09-16  0:04   ` [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
@ 2016-09-16  0:04   ` Manasi Navare
  2016-09-16  8:12     ` Mika Kahola
  2016-09-16  0:04   ` [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN Manasi Navare
                     ` (6 subsequent siblings)
  9 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:04 UTC (permalink / raw)
  To: intel-gfx

These static helper functions are required within the upfront link
training related functions, so they need to be placed at the top
of the file. It also changes the macro dev to dev_priv.

v3:
* Add cleanup to other patch (Mika Kahola)
v2:
* Dont move around functions declared in intel_drv.h (Rodrigo Vivi)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 150 ++++++++++++++++++++--------------------
 1 file changed, 75 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 65b4559..61d71fa 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -213,6 +213,81 @@ intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
 	return max_dotclk;
 }
 
+static int
+intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
+{
+	if (intel_dp->num_sink_rates) {
+		*sink_rates = intel_dp->sink_rates;
+		return intel_dp->num_sink_rates;
+	}
+
+	*sink_rates = default_rates;
+
+	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
+}
+
+static int
+intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
+{
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	struct drm_device *dev = dig_port->base.base.dev;
+	int size;
+
+	if (IS_BROXTON(dev)) {
+		*source_rates = bxt_rates;
+		size = ARRAY_SIZE(bxt_rates);
+	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
+		*source_rates = skl_rates;
+		size = ARRAY_SIZE(skl_rates);
+	} else {
+		*source_rates = default_rates;
+		size = ARRAY_SIZE(default_rates);
+	}
+
+	/* This depends on the fact that 5.4 is last value in the array */
+	if (!intel_dp_source_supports_hbr2(intel_dp))
+		size--;
+
+	return size;
+}
+
+static int intersect_rates(const int *source_rates, int source_len,
+			   const int *sink_rates, int sink_len,
+			   int *common_rates)
+{
+	int i = 0, j = 0, k = 0;
+
+	while (i < source_len && j < sink_len) {
+		if (source_rates[i] == sink_rates[j]) {
+			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
+				return k;
+			common_rates[k] = source_rates[i];
+			++k;
+			++i;
+			++j;
+		} else if (source_rates[i] < sink_rates[j]) {
+			++i;
+		} else {
+			++j;
+		}
+	}
+	return k;
+}
+
+static int intel_dp_common_rates(struct intel_dp *intel_dp,
+				 int *common_rates)
+{
+	const int *source_rates, *sink_rates;
+	int source_len, sink_len;
+
+	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+	source_len = intel_dp_source_rates(intel_dp, &source_rates);
+
+	return intersect_rates(source_rates, source_len,
+			       sink_rates, sink_len,
+			       common_rates);
+}
+
 static enum drm_mode_status
 intel_dp_mode_valid(struct drm_connector *connector,
 		    struct drm_display_mode *mode)
@@ -1281,19 +1356,6 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
 	intel_dp->aux.transfer = intel_dp_aux_transfer;
 }
 
-static int
-intel_dp_sink_rates(struct intel_dp *intel_dp, const int **sink_rates)
-{
-	if (intel_dp->num_sink_rates) {
-		*sink_rates = intel_dp->sink_rates;
-		return intel_dp->num_sink_rates;
-	}
-
-	*sink_rates = default_rates;
-
-	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
-}
-
 bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
@@ -1310,31 +1372,6 @@ bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
 		return false;
 }
 
-static int
-intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
-{
-	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
-	struct drm_device *dev = dig_port->base.base.dev;
-	int size;
-
-	if (IS_BROXTON(dev)) {
-		*source_rates = bxt_rates;
-		size = ARRAY_SIZE(bxt_rates);
-	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
-		*source_rates = skl_rates;
-		size = ARRAY_SIZE(skl_rates);
-	} else {
-		*source_rates = default_rates;
-		size = ARRAY_SIZE(default_rates);
-	}
-
-	/* This depends on the fact that 5.4 is last value in the array */
-	if (!intel_dp_source_supports_hbr2(intel_dp))
-		size--;
-
-	return size;
-}
-
 static void
 intel_dp_set_clock(struct intel_encoder *encoder,
 		   struct intel_crtc_state *pipe_config)
@@ -1368,43 +1405,6 @@ intel_dp_set_clock(struct intel_encoder *encoder,
 	}
 }
 
-static int intersect_rates(const int *source_rates, int source_len,
-			   const int *sink_rates, int sink_len,
-			   int *common_rates)
-{
-	int i = 0, j = 0, k = 0;
-
-	while (i < source_len && j < sink_len) {
-		if (source_rates[i] == sink_rates[j]) {
-			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
-				return k;
-			common_rates[k] = source_rates[i];
-			++k;
-			++i;
-			++j;
-		} else if (source_rates[i] < sink_rates[j]) {
-			++i;
-		} else {
-			++j;
-		}
-	}
-	return k;
-}
-
-static int intel_dp_common_rates(struct intel_dp *intel_dp,
-				 int *common_rates)
-{
-	const int *source_rates, *sink_rates;
-	int source_len, sink_len;
-
-	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
-	source_len = intel_dp_source_rates(intel_dp, &source_rates);
-
-	return intersect_rates(source_rates, source_len,
-			       sink_rates, sink_len,
-			       common_rates);
-}
-
 static void snprintf_int_array(char *str, size_t len,
 			       const int *array, int nelem)
 {
-- 
1.9.1
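The intersect_rates() helper being moved in this patch is a classic two-pointer intersection over two ascending arrays. A standalone sketch of the same logic, with the WARN_ON bound replaced by a local limit and illustrative DP link-rate values in kHz:

```c
#include <assert.h>

#define MAX_COMMON_RATES 8	/* stand-in for DP_MAX_SUPPORTED_RATES */

/* Intersect two ascending rate arrays; returns the number of common
 * entries written to common_rates. */
static int intersect_rates(const int *source_rates, int source_len,
			   const int *sink_rates, int sink_len,
			   int *common_rates)
{
	int i = 0, j = 0, k = 0;

	while (i < source_len && j < sink_len) {
		if (source_rates[i] == sink_rates[j]) {
			if (k >= MAX_COMMON_RATES)
				return k;
			common_rates[k++] = source_rates[i];
			i++;
			j++;
		} else if (source_rates[i] < sink_rates[j]) {
			i++;
		} else {
			j++;
		}
	}

	return k;
}
```

Because both inputs are sorted ascending, each element is visited at most once, so the intersection runs in O(source_len + sink_len).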

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (2 preceding siblings ...)
  2016-09-16  0:04   ` [PATCH v3 3/6] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
@ 2016-09-16  0:04   ` Manasi Navare
  2016-09-16  7:40     ` Mika Kahola
  2016-09-26 13:45     ` Jani Nikula
  2016-09-16  0:04   ` [PATCH v17 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
                     ` (5 subsequent siblings)
  9 siblings, 2 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:04 UTC (permalink / raw)
  To: intel-gfx

Replace dev with dev_priv and INTEL_INFO with INTEL_GEN

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 61d71fa..8061e32 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -230,13 +230,13 @@ static int
 intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
 {
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
-	struct drm_device *dev = dig_port->base.base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
 	int size;
 
-	if (IS_BROXTON(dev)) {
+	if (IS_BROXTON(dev_priv)) {
 		*source_rates = bxt_rates;
 		size = ARRAY_SIZE(bxt_rates);
-	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
+	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
 		*source_rates = skl_rates;
 		size = ARRAY_SIZE(skl_rates);
 	} else {
@@ -1359,14 +1359,14 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
 bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
-	struct drm_device *dev = dig_port->base.base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
 
 	/* WaDisableHBR2:skl */
-	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
+	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
 		return false;
 
-	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || IS_BROADWELL(dev) ||
-	    (INTEL_INFO(dev)->gen >= 9))
+	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
+	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
 		return true;
 	else
 		return false;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v17 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (3 preceding siblings ...)
  2016-09-16  0:04   ` [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN Manasi Navare
@ 2016-09-16  0:04   ` Manasi Navare
  2016-09-20 22:04     ` [PATCH v18 " Manasi Navare
  2016-09-16  0:04   ` [PATCH v3 6/6] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
                     ` (4 subsequent siblings)
  9 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:04 UTC (permalink / raw)
  To: intel-gfx

To support USB Type-C alternate DP mode, the display driver needs to
know the number of lanes required by the DP panel as well as the
number of lanes that can be supported by the Type-C cable. Sometimes
the Type-C cable may limit the bandwidth even if the panel can support
more lanes. To address these scenarios we need to train the link before
modeset. This upfront link training caches the values of max link rate
and max lane count that get used later during modeset. Upfront link
training does not change any HW state; the link is disabled and the PLL
values are reset to their previous values after upfront link training,
so that the subsequent modeset is not aware of these changes.

This patch is based on prior work done by
Durgadoss R <durgadoss.r@intel.com>
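The sink-rate capping this patch adds to intel_dp_common_rates() walks an ascending sink-rate list down until it fits under the upfront-trained maximum. A standalone sketch of that loop (rate values in kHz, purely illustrative):

```c
#include <assert.h>

/*
 * Trim an ascending sink-rate list so that no entry exceeds the link
 * rate found by upfront link training.  Mirrors the capping loop this
 * patch adds to intel_dp_common_rates().
 */
static int cap_sink_rates(const int *sink_rates, int sink_len,
			  int max_link_rate_upfront)
{
	int len = sink_len - 1;

	while (len > 0 && sink_rates[len] > max_link_rate_upfront)
		len--;

	return len + 1;	/* like the original, never returns less than 1 */
}
```

Note the loop never trims past the first entry, matching the driver loop, which always leaves at least one rate to try.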

Changes since v16:
* Use HAS_DDI macro for enabling this feature (Rodrigo Vivi)
* Fix some unnecessary removals/changes due to rebase (Rodrigo Vivi)

Changes since v15:
* Split this patch into two patches - one with functional
changes to enable upfront and other with moving the existing
functions around so that they can be used for upfront (Jani Nikula)
* Cleaned up the commit message

Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_ddi.c              |  21 ++-
 drivers/gpu/drm/i915/intel_dp.c               | 190 +++++++++++++++++++++++++-
 drivers/gpu/drm/i915/intel_dp_link_training.c |   1 -
 drivers/gpu/drm/i915/intel_drv.h              |  14 +-
 4 files changed, 218 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index 826d9f7..3168bcf 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1676,7 +1676,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	pll->config.crtc_mask = 0;
 
 	/* If Link Training fails, send a uevent to generate a hotplug */
-	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
+	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst,
+				  false))
 		drm_kms_helper_hotplug_event(encoder->base.dev);
 	pll->config = tmp_pll_config;
 }
@@ -2464,7 +2465,7 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
 
 bool
 intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
-		     uint8_t max_lane_count, bool link_mst)
+		     uint8_t max_lane_count, bool link_mst, bool is_upfront)
 {
 	struct intel_connector *connector = intel_dp->attached_connector;
 	struct intel_encoder *encoder = connector->encoder;
@@ -2512,6 +2513,7 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
 			pll->funcs.disable(dev_priv, pll);
 			pll->config = tmp_pll_config;
 		}
+
 		if (ret) {
 			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
 				      link_rate, lane_count);
@@ -2520,6 +2522,21 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
 	}
 	intel_dp_stop_link_train(intel_dp);
 
+	if (is_upfront) {
+		DRM_DEBUG_KMS("Upfront link train %s: link_clock:%d lanes:%d\n",
+			      ret ? "Passed" : "Failed",
+			      link_rate, lane_count);
+		/* Disable port followed by PLL for next retry/clean up */
+		intel_ddi_post_disable(encoder, NULL, NULL);
+		pll->funcs.disable(dev_priv, pll);
+		pll->config = tmp_pll_config;
+		if (ret) {
+			/* Save the upfront values */
+			intel_dp->max_lanes_upfront = lane_count;
+			intel_dp->max_link_rate_upfront = link_rate;
+		}
+	}
+
 	if (!lane_count)
 		DRM_ERROR("Link Training Failed\n");
 
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 8061e32..30b41ad 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -153,12 +153,21 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
-	u8 source_max, sink_max;
+	u8 temp, source_max, sink_max;
 
 	source_max = intel_dig_port->max_lanes;
 	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
 
-	return min(source_max, sink_max);
+	temp = min(source_max, sink_max);
+
+	/*
+	 * Also limit max lanes w.r.t. the max value found
+	 * by upfront link training.
+	 */
+	if (intel_dp->max_lanes_upfront)
+		return min(temp, intel_dp->max_lanes_upfront);
+	else
+		return temp;
 }
 
 /*
@@ -190,6 +199,42 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
 	return (max_link_clock * max_lanes * 8) / 10;
 }
 
+static int intel_dp_upfront_crtc_disable(struct intel_crtc *crtc,
+					 struct drm_modeset_acquire_ctx *ctx,
+					 bool enable)
+{
+	int ret;
+	struct drm_atomic_state *state;
+	struct intel_crtc_state *crtc_state;
+	struct drm_device *dev = crtc->base.dev;
+	enum pipe pipe = crtc->pipe;
+
+	state = drm_atomic_state_alloc(dev);
+	if (!state)
+		return -ENOMEM;
+
+	state->acquire_ctx = ctx;
+
+	crtc_state = intel_atomic_get_crtc_state(state, crtc);
+	if (IS_ERR(crtc_state)) {
+		ret = PTR_ERR(crtc_state);
+		drm_atomic_state_free(state);
+		return ret;
+	}
+
+	DRM_DEBUG_KMS("%sabling crtc %c %s upfront link train\n",
+			enable ? "En" : "Dis",
+			pipe_name(pipe),
+			enable ? "after" : "before");
+
+	crtc_state->base.active = enable;
+	ret = drm_atomic_commit(state);
+	if (ret)
+		drm_atomic_state_free(state);
+
+	return ret;
+}
+
 static int
 intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
 {
@@ -281,6 +326,17 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 	int source_len, sink_len;
 
 	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+
+	/* Cap sink rates w.r.t. the upfront values */
+	if (intel_dp->max_link_rate_upfront) {
+		int len = sink_len - 1;
+
+		while (len > 0 && sink_rates[len] >
+		       intel_dp->max_link_rate_upfront)
+			len--;
+		sink_len = len + 1;
+	}
+
 	source_len = intel_dp_source_rates(intel_dp, &source_rates);
 
 	return intersect_rates(source_rates, source_len,
@@ -288,6 +344,92 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 			       common_rates);
 }
 
+static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
+{
+	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+	struct intel_encoder *intel_encoder = &intel_dig_port->base;
+	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_mode_config *config = &dev->mode_config;
+	struct drm_modeset_acquire_ctx ctx;
+	struct intel_crtc *intel_crtc;
+	struct drm_crtc *crtc = NULL;
+	struct intel_shared_dpll *pll;
+	struct intel_shared_dpll_config tmp_pll_config;
+	bool disable_dpll = false;
+	int ret;
+	bool done = false, has_mst = false;
+	uint8_t max_lanes;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+	int common_len;
+	enum intel_display_power_domain power_domain;
+
+	power_domain = intel_display_port_power_domain(intel_encoder);
+	intel_display_power_get(dev_priv, power_domain);
+
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	max_lanes = intel_dp_max_lane_count(intel_dp);
+	if (WARN_ON(common_len <= 0))
+		return true;
+
+	drm_modeset_acquire_init(&ctx, 0);
+retry:
+	ret = drm_modeset_lock(&config->connection_mutex, &ctx);
+	if (ret)
+		goto exit_fail;
+
+	if (intel_encoder->base.crtc) {
+		crtc = intel_encoder->base.crtc;
+
+		ret = drm_modeset_lock(&crtc->mutex, &ctx);
+		if (ret)
+			goto exit_fail;
+
+		ret = drm_modeset_lock(&crtc->primary->mutex, &ctx);
+		if (ret)
+			goto exit_fail;
+
+		intel_crtc = to_intel_crtc(crtc);
+		pll = intel_crtc->config->shared_dpll;
+		disable_dpll = true;
+		has_mst = intel_crtc_has_type(intel_crtc->config,
+					      INTEL_OUTPUT_DP_MST);
+		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
+		if (ret)
+			goto exit_fail;
+	}
+
+	mutex_lock(&dev_priv->dpll_lock);
+	if (disable_dpll) {
+		/* Clear the PLL config state */
+		tmp_pll_config = pll->config;
+		pll->config.crtc_mask = 0;
+	}
+
+	done = intel_dp->upfront_link_train(intel_dp,
+					    common_rates[common_len-1],
+					    max_lanes,
+					    has_mst,
+					    true);
+	if (disable_dpll)
+		pll->config = tmp_pll_config;
+
+	mutex_unlock(&dev_priv->dpll_lock);
+
+	if (crtc)
+		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, true);
+
+exit_fail:
+	if (ret == -EDEADLK) {
+		drm_modeset_backoff(&ctx);
+		goto retry;
+	}
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+	intel_display_power_put(dev_priv, power_domain);
+	return done;
+}
+
 static enum drm_mode_status
 intel_dp_mode_valid(struct drm_connector *connector,
 		    struct drm_display_mode *mode)
@@ -311,6 +453,19 @@ intel_dp_mode_valid(struct drm_connector *connector,
 		target_clock = fixed_mode->clock;
 	}
 
+	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
+		bool do_upfront_link_train;
+		/* Do not do upfront link training if it is a
+		 * compliance request
+		 */
+		do_upfront_link_train = !intel_dp->upfront_done &&
+			(intel_dp->compliance_test_type !=
+			 DP_TEST_LINK_TRAINING);
+
+		if (do_upfront_link_train)
+			intel_dp->upfront_done = intel_dp_upfront_link_train(intel_dp);
+	}
+
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
@@ -1499,6 +1654,9 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
 	int rates[DP_MAX_SUPPORTED_RATES] = {};
 	int len;
 
+	if (intel_dp->max_link_rate_upfront)
+		return intel_dp->max_link_rate_upfront;
+
 	len = intel_dp_common_rates(intel_dp, rates);
 	if (WARN_ON(len <= 0))
 		return 162000;
@@ -1644,6 +1802,21 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	for (; bpp >= 6*3; bpp -= 2*3) {
 		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
 						   bpp);
+
+		if (!is_edp(intel_dp) && intel_dp->upfront_done) {
+			clock = max_clock;
+			lane_count = intel_dp->max_lanes_upfront;
+			link_clock = intel_dp->max_link_rate_upfront;
+			link_avail = intel_dp_max_data_rate(link_clock,
+							    lane_count);
+			mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+							   bpp);
+			if (mode_rate <= link_avail)
+				goto found;
+			else
+				continue;
+		}
+
 		clock = max_clock;
 		lane_count = max_lane_count;
 		link_clock = common_rates[clock];
@@ -1672,7 +1845,6 @@ found:
 	}
 
 	pipe_config->lane_count = lane_count;
-
 	pipe_config->pipe_bpp = bpp;
 	pipe_config->port_clock = common_rates[clock];
 
@@ -4453,8 +4625,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
 
 out:
 	if ((status != connector_status_connected) &&
-	    (intel_dp->is_mst == false))
+	    (intel_dp->is_mst == false)) {
 		intel_dp_unset_edid(intel_dp);
+		intel_dp->upfront_done = false;
+		intel_dp->max_lanes_upfront = 0;
+		intel_dp->max_link_rate_upfront = 0;
+	}
 
 	intel_display_power_put(to_i915(dev), power_domain);
 	return;
@@ -5698,6 +5874,12 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 	if (type == DRM_MODE_CONNECTOR_eDP)
 		intel_encoder->type = INTEL_OUTPUT_EDP;
 
+	/* Initialize upfront link training vfunc for DP */
+	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
+		if (HAS_DDI(dev_priv))
+			intel_dp->upfront_link_train = intel_ddi_link_train;
+	}
+
 	/* eDP only on port B and/or C on vlv/chv */
 	if (WARN_ON((IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) &&
 		    is_edp(intel_dp) && port != PORT_B && port != PORT_C))
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index f1e08f0..b6f380b 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -304,7 +304,6 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 	intel_dp_set_idle_link_train(intel_dp);
 
 	return intel_dp->channel_eq_status;
-
 }
 
 void intel_dp_stop_link_train(struct intel_dp *intel_dp)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 08cb571..9cf147bd 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -886,6 +886,12 @@ struct intel_dp {
 	enum hdmi_force_audio force_audio;
 	bool limited_color_range;
 	bool color_range_auto;
+
+	/* Upfront link train parameters */
+	int max_link_rate_upfront;
+	uint8_t max_lanes_upfront;
+	bool upfront_done;
+
 	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
 	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
 	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
@@ -943,6 +949,11 @@ struct intel_dp {
 	/* This is called before a link training is starterd */
 	void (*prepare_link_retrain)(struct intel_dp *intel_dp);
 
+	/* For Upfront link training */
+	bool (*upfront_link_train)(struct intel_dp *intel_dp, int clock,
+				   uint8_t lane_count, bool link_mst,
+				   bool is_upfront);
+
 	/* Displayport compliance testing */
 	unsigned long compliance_test_type;
 	unsigned long compliance_test_data;
@@ -1165,7 +1176,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
 void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
 uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
 bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
-			  uint8_t max_lane_count, bool link_mst);
+			  uint8_t max_lane_count, bool link_mst,
+			  bool is_upfront);
 struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
 						  int clock);
 unsigned int intel_fb_align_height(struct drm_device *dev,
-- 
1.9.1
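As an aside on the bandwidth check used by intel_dp_compute_config() above: intel_dp_max_data_rate() computes (link_clock * lanes * 8) / 10 because DP's 8b/10b channel coding carries 8 payload bits in every 10 transmitted bits. A minimal sketch of that arithmetic (link_clock in kHz, as in the driver):

```c
#include <assert.h>

/* Usable payload bandwidth of a DP link: with 8b/10b coding only
 * 8 of every 10 transmitted bits carry data. */
static int max_data_rate(int max_link_clock, int max_lanes)
{
	return (max_link_clock * max_lanes * 8) / 10;
}
```

For example, 4 lanes at HBR (270000 kHz) and 2 lanes at HBR2 (540000 kHz) yield the same payload bandwidth.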

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH v3 6/6] drm/i915/dp/mst: Add support for upfront link training for DP MST
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (4 preceding siblings ...)
  2016-09-16  0:04   ` [PATCH v17 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
@ 2016-09-16  0:04   ` Manasi Navare
  2016-09-16  0:47   ` ✓ Fi.CI.BAT: success for series starting with [v6,1/6] drm/i915: Fallback to lower link rate and lane count during link training Patchwork
                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-16  0:04 UTC (permalink / raw)
  To: intel-gfx

From: Jim Bride <jim.bride@linux.intel.com>

Add upfront link training to intel_dp_mst_mode_valid() so that the
topology constraints are known before the modes to be checked are
validated. The upfront values are then used in mst_compute_config()
instead of the max link rate and lane count values.

Train the link during modeset using intel_ddi_link_train(), which
starts training fast and wide and falls back to a lower link rate/lane
count on each iteration until link training succeeds.
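The fast-and-wide fallback described above can be sketched as the loop below. This is an illustration only: try_link() is a hypothetical stand-in for a real hardware training attempt, and the exact order in which rate and lane count are reduced is simplified here.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical link capability standing in for real training: pretend
 * the cable/sink trains only up to 2 lanes at HBR (270000 kHz). */
static bool try_link(int rate, int lanes)
{
	return rate <= 270000 && lanes <= 2;
}

/* Start at the highest rate and widest link, then fall back until a
 * configuration trains; rates[] is ascending, lanes halve 4 -> 2 -> 1. */
static bool train_with_fallback(const int *rates, int num_rates,
				int max_lanes, int *out_rate, int *out_lanes)
{
	int r, lanes;

	for (r = num_rates - 1; r >= 0; r--) {
		for (lanes = max_lanes; lanes >= 1; lanes >>= 1) {
			if (try_link(rates[r], lanes)) {
				*out_rate = rates[r];
				*out_lanes = lanes;
				return true;
			}
		}
	}

	return false;
}
```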

v3:
* Reset the upfront values but don't unset the EDID for MST. (Manasi)
v2:
* Rebased on new revision of link training patch. (Manasi)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_dp.c     | 15 ++++----
 drivers/gpu/drm/i915/intel_dp_mst.c | 72 ++++++++++++++++++++++++++-----------
 drivers/gpu/drm/i915/intel_drv.h    |  3 ++
 3 files changed, 62 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 30b41ad..d1b247e 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -131,7 +131,7 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
 				      enum pipe pipe);
 static void intel_dp_unset_edid(struct intel_dp *intel_dp);
 
-static int
+int
 intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 {
 	int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
@@ -150,7 +150,7 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 	return max_link_bw;
 }
 
-static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
+u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	u8 temp, source_max, sink_max;
@@ -319,8 +319,7 @@ static int intersect_rates(const int *source_rates, int source_len,
 	return k;
 }
 
-static int intel_dp_common_rates(struct intel_dp *intel_dp,
-				 int *common_rates)
+int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates)
 {
 	const int *source_rates, *sink_rates;
 	int source_len, sink_len;
@@ -344,7 +343,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 			       common_rates);
 }
 
-static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
+bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	struct intel_encoder *intel_encoder = &intel_dig_port->base;
@@ -4624,12 +4623,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
 	}
 
 out:
-	if ((status != connector_status_connected) &&
-	    (intel_dp->is_mst == false)) {
-		intel_dp_unset_edid(intel_dp);
+	if (status != connector_status_connected) {
 		intel_dp->upfront_done = false;
 		intel_dp->max_lanes_upfront = 0;
 		intel_dp->max_link_rate_upfront = 0;
+		if (intel_dp->is_mst == false)
+			intel_dp_unset_edid(intel_dp);
 	}
 
 	intel_display_power_put(to_i915(dev), power_domain);
diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
index 54a9d76..f57c672 100644
--- a/drivers/gpu/drm/i915/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/intel_dp_mst.c
@@ -41,21 +41,30 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
 	int bpp;
 	int lane_count, slots;
 	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
-	int mst_pbn;
+	int mst_pbn, common_len;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
 
 	pipe_config->dp_encoder_is_mst = true;
 	pipe_config->has_pch_encoder = false;
-	bpp = 24;
+
 	/*
-	 * for MST we always configure max link bw - the spec doesn't
-	 * seem to suggest we should do otherwise.
+	 * For MST we always configure for the maximum trainable link bw -
+	 * the spec doesn't seem to suggest we should do otherwise.  The
+	 * calls to intel_dp_max_lane_count() and intel_dp_common_rates()
+	 * both take successful upfront link training into account, and
+	 * return the DisplayPort max supported values in the event that
+	 * upfront link training was not done.
 	 */
-	lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
+	lane_count = intel_dp_max_lane_count(intel_dp);
 
 	pipe_config->lane_count = lane_count;
 
-	pipe_config->pipe_bpp = 24;
-	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
+	pipe_config->pipe_bpp = bpp = 24;
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	pipe_config->port_clock = common_rates[common_len - 1];
+
+	DRM_DEBUG_KMS("DP MST link configured for %d lanes @ %d.\n",
+		      pipe_config->lane_count, pipe_config->port_clock);
 
 	state = pipe_config->base.state;
 
@@ -137,6 +146,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 	enum port port = intel_dig_port->port;
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
+	struct intel_shared_dpll *pll = pipe_config->shared_dpll;
+	struct intel_shared_dpll_config tmp_pll_config;
 	int ret;
 	uint32_t temp;
 	int slots;
@@ -150,21 +161,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
 	DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
 
 	if (intel_dp->active_mst_links == 0) {
-		intel_ddi_clk_select(&intel_dig_port->base,
-				     pipe_config->shared_dpll);
-
-		intel_prepare_dp_ddi_buffers(&intel_dig_port->base);
-		intel_dp_set_link_params(intel_dp,
-					 pipe_config->port_clock,
-					 pipe_config->lane_count,
-					 true);
-
-		intel_ddi_init_dp_buf_reg(&intel_dig_port->base);
 
-		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+		/* Disable the PLL since we need to acquire the PLL
+		 * based on the link rate in the link training sequence
+		 */
+		tmp_pll_config = pll->config;
+		pll->funcs.disable(dev_priv, pll);
+		pll->config.crtc_mask = 0;
+
+		/* If Link Training fails, send a uevent to generate a
+		 * hotplug
+		 */
+		if (!(intel_ddi_link_train(intel_dp, pipe_config->port_clock,
+					   pipe_config->lane_count, true,
+					   false)))
+			drm_kms_helper_hotplug_event(encoder->base.dev);
+		pll->config = tmp_pll_config;
 
-		intel_dp_start_link_train(intel_dp);
-		intel_dp_stop_link_train(intel_dp);
 	}
 
 	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
@@ -336,6 +349,25 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
 			struct drm_display_mode *mode)
 {
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_dp *intel_dp = intel_connector->mst_port;
+
+	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
+		bool do_upfront_link_train;
+
+		do_upfront_link_train = intel_dp->compliance_test_type !=
+			DP_TEST_LINK_TRAINING;
+		if (do_upfront_link_train) {
+			intel_dp->upfront_done =
+				intel_dp_upfront_link_train(intel_dp);
+			if (intel_dp->upfront_done) {
+				DRM_DEBUG_KMS("MST upfront trained at %d lanes @ %d.",
+					      intel_dp->max_lanes_upfront,
+					      intel_dp->max_link_rate_upfront);
+			} else {
+				DRM_DEBUG_KMS("MST upfront link training failed.");
+			}
+		}
+	}
 
 	/* TODO - validate mode against available PBN for link */
 	if (mode->clock < 10000)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 9cf147bd..6d07c2a 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1420,6 +1420,7 @@ void intel_edp_panel_off(struct intel_dp *intel_dp);
 void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
 void intel_dp_mst_suspend(struct drm_device *dev);
 void intel_dp_mst_resume(struct drm_device *dev);
+u8 intel_dp_max_lane_count(struct intel_dp *intel_dp);
 int intel_dp_max_link_rate(struct intel_dp *intel_dp);
 int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
 			     int link_rate);
@@ -1450,6 +1451,8 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
 void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
 			   uint8_t *link_bw, uint8_t *rate_select);
 bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
+int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates);
+bool intel_dp_upfront_link_train(struct intel_dp *intel_dp);
 bool
 intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* ✓ Fi.CI.BAT: success for series starting with [v6,1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (5 preceding siblings ...)
  2016-09-16  0:04   ` [PATCH v3 6/6] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
@ 2016-09-16  0:47   ` Patchwork
  2016-09-16 19:25   ` ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev2) Patchwork
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 56+ messages in thread
From: Patchwork @ 2016-09-16  0:47 UTC (permalink / raw)
  To: Navare, Manasi D; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v6,1/6] drm/i915: Fallback to lower link rate and lane count during link training
URL   : https://patchwork.freedesktop.org/series/12534/
State : success

== Summary ==

Series 12534v1 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/12534/revisions/1/mbox/

Test kms_pipe_crc_basic:
        Subgroup suspend-read-crc-pipe-b:
                dmesg-warn -> PASS       (fi-byt-j1900)

fi-bdw-5557u     total:244  pass:229  dwarn:0   dfail:0   fail:0   skip:15 
fi-byt-j1900     total:244  pass:211  dwarn:1   dfail:0   fail:1   skip:31 
fi-byt-n2820     total:244  pass:208  dwarn:0   dfail:0   fail:1   skip:35 
fi-hsw-4770k     total:244  pass:226  dwarn:0   dfail:0   fail:0   skip:18 
fi-hsw-4770r     total:244  pass:222  dwarn:0   dfail:0   fail:0   skip:22 
fi-ilk-650       total:244  pass:183  dwarn:0   dfail:0   fail:1   skip:60 
fi-ivb-3520m     total:244  pass:219  dwarn:0   dfail:0   fail:0   skip:25 
fi-ivb-3770      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6260u     total:244  pass:230  dwarn:0   dfail:0   fail:0   skip:14 
fi-skl-6700hq    total:244  pass:221  dwarn:0   dfail:0   fail:1   skip:22 
fi-skl-6700k     total:244  pass:219  dwarn:1   dfail:0   fail:0   skip:24 
fi-snb-2520m     total:244  pass:208  dwarn:0   dfail:0   fail:0   skip:36 
fi-snb-2600      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6770hq failed to collect. IGT log at Patchwork_2541/fi-skl-6770hq/igt.log

Results at /archive/results/CI_IGT_test/Patchwork_2541/

a6259b0355d72540204077883a71796ac89e7603 drm-intel-nightly: 2016y-09m-15d-14h-43m-41s UTC integration manifest
d7c15e7 drm/i915/dp/mst: Add support for upfront link training for DP MST
e16692c drm/i915/dp: Enable Upfront link training on DDI platforms
8a987ec drm/i915: Code cleanup to use dev_priv and INTEL_GEN
a47436b drm/i915: Change the placement of some static functions in intel_dp.c
1b8ba32 drm/i915: Remove the link rate and lane count loop in compute config
1a67daa drm/i915: Fallback to lower link rate and lane count during link training

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN
  2016-09-16  0:04   ` [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN Manasi Navare
@ 2016-09-16  7:40     ` Mika Kahola
  2016-09-26 13:45     ` Jani Nikula
  1 sibling, 0 replies; 56+ messages in thread
From: Mika Kahola @ 2016-09-16  7:40 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

Reviewed-by: Mika Kahola <mika.kahola@intel.com>

On Thu, 2016-09-15 at 17:04 -0700, Manasi Navare wrote:
> Replace dev with dev_priv and INTEL_INFO with INTEL_GEN
> 
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c
> b/drivers/gpu/drm/i915/intel_dp.c
> index 61d71fa..8061e32 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -230,13 +230,13 @@ static int
>  intel_dp_source_rates(struct intel_dp *intel_dp, const int
> **source_rates)
>  {
>  	struct intel_digital_port *dig_port =
> dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dig_port-
> >base.base.dev);
>  	int size;
>  
> -	if (IS_BROXTON(dev)) {
> +	if (IS_BROXTON(dev_priv)) {
>  		*source_rates = bxt_rates;
>  		size = ARRAY_SIZE(bxt_rates);
> -	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> +	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
>  		*source_rates = skl_rates;
>  		size = ARRAY_SIZE(skl_rates);
>  	} else {
> @@ -1359,14 +1359,14 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
>  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *dig_port =
> dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
>  
>  	/* WaDisableHBR2:skl */
> -	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
> +	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
>  		return false;
>  
> -	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) ||
> IS_BROADWELL(dev) ||
> -	    (INTEL_INFO(dev)->gen >= 9))
> +	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
> +	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
>  		return true;
>  	else
>  		return false;
-- 
Mika Kahola - Intel OTC



* Re: [PATCH v3 3/6] drm/i915: Change the placement of some static functions in intel_dp.c
  2016-09-16  0:04   ` [PATCH v3 3/6] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
@ 2016-09-16  8:12     ` Mika Kahola
  0 siblings, 0 replies; 56+ messages in thread
From: Mika Kahola @ 2016-09-16  8:12 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

Reviewed-by: Mika Kahola <mika.kahola@intel.com>

On Thu, 2016-09-15 at 17:04 -0700, Manasi Navare wrote:
> These static helper functions are required by the upfront link
> training functions, so they need to be placed at the top of the
> file. It also changes the macro argument dev to dev_priv.
> 
> v3:
> * Add cleanup to other patch (Mika Kahola)
> v2:
> * Dont move around functions declared in intel_drv.h (Rodrigo Vivi)
> 
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 150 ++++++++++++++++++++--------
> ------------
>  1 file changed, 75 insertions(+), 75 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_dp.c
> b/drivers/gpu/drm/i915/intel_dp.c
> index 65b4559..61d71fa 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -213,6 +213,81 @@ intel_dp_downstream_max_dotclock(struct intel_dp
> *intel_dp)
>  	return max_dotclk;
>  }
>  
> +static int
> +intel_dp_sink_rates(struct intel_dp *intel_dp, const int
> **sink_rates)
> +{
> +	if (intel_dp->num_sink_rates) {
> +		*sink_rates = intel_dp->sink_rates;
> +		return intel_dp->num_sink_rates;
> +	}
> +
> +	*sink_rates = default_rates;
> +
> +	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> +}
> +
> +static int
> +intel_dp_source_rates(struct intel_dp *intel_dp, const int
> **source_rates)
> +{
> +	struct intel_digital_port *dig_port =
> dp_to_dig_port(intel_dp);
> +	struct drm_device *dev = dig_port->base.base.dev;
> +	int size;
> +
> +	if (IS_BROXTON(dev)) {
> +		*source_rates = bxt_rates;
> +		size = ARRAY_SIZE(bxt_rates);
> +	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> +		*source_rates = skl_rates;
> +		size = ARRAY_SIZE(skl_rates);
> +	} else {
> +		*source_rates = default_rates;
> +		size = ARRAY_SIZE(default_rates);
> +	}
> +
> +	/* This depends on the fact that 5.4 is last value in the
> array */
> +	if (!intel_dp_source_supports_hbr2(intel_dp))
> +		size--;
> +
> +	return size;
> +}
> +
> +static int intersect_rates(const int *source_rates, int source_len,
> +			   const int *sink_rates, int sink_len,
> +			   int *common_rates)
> +{
> +	int i = 0, j = 0, k = 0;
> +
> +	while (i < source_len && j < sink_len) {
> +		if (source_rates[i] == sink_rates[j]) {
> +			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
> +				return k;
> +			common_rates[k] = source_rates[i];
> +			++k;
> +			++i;
> +			++j;
> +		} else if (source_rates[i] < sink_rates[j]) {
> +			++i;
> +		} else {
> +			++j;
> +		}
> +	}
> +	return k;
> +}
> +
> +static int intel_dp_common_rates(struct intel_dp *intel_dp,
> +				 int *common_rates)
> +{
> +	const int *source_rates, *sink_rates;
> +	int source_len, sink_len;
> +
> +	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> +	source_len = intel_dp_source_rates(intel_dp, &source_rates);
> +
> +	return intersect_rates(source_rates, source_len,
> +			       sink_rates, sink_len,
> +			       common_rates);
> +}
> +
>  static enum drm_mode_status
>  intel_dp_mode_valid(struct drm_connector *connector,
>  		    struct drm_display_mode *mode)
> @@ -1281,19 +1356,6 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
>  	intel_dp->aux.transfer = intel_dp_aux_transfer;
>  }
>  
> -static int
> -intel_dp_sink_rates(struct intel_dp *intel_dp, const int
> **sink_rates)
> -{
> -	if (intel_dp->num_sink_rates) {
> -		*sink_rates = intel_dp->sink_rates;
> -		return intel_dp->num_sink_rates;
> -	}
> -
> -	*sink_rates = default_rates;
> -
> -	return (intel_dp_max_link_bw(intel_dp) >> 3) + 1;
> -}
> -
>  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *dig_port =
> dp_to_dig_port(intel_dp);
> @@ -1310,31 +1372,6 @@ bool intel_dp_source_supports_hbr2(struct
> intel_dp *intel_dp)
>  		return false;
>  }
>  
> -static int
> -intel_dp_source_rates(struct intel_dp *intel_dp, const int
> **source_rates)
> -{
> -	struct intel_digital_port *dig_port =
> dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> -	int size;
> -
> -	if (IS_BROXTON(dev)) {
> -		*source_rates = bxt_rates;
> -		size = ARRAY_SIZE(bxt_rates);
> -	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> -		*source_rates = skl_rates;
> -		size = ARRAY_SIZE(skl_rates);
> -	} else {
> -		*source_rates = default_rates;
> -		size = ARRAY_SIZE(default_rates);
> -	}
> -
> -	/* This depends on the fact that 5.4 is last value in the
> array */
> -	if (!intel_dp_source_supports_hbr2(intel_dp))
> -		size--;
> -
> -	return size;
> -}
> -
>  static void
>  intel_dp_set_clock(struct intel_encoder *encoder,
>  		   struct intel_crtc_state *pipe_config)
> @@ -1368,43 +1405,6 @@ intel_dp_set_clock(struct intel_encoder
> *encoder,
>  	}
>  }
>  
> -static int intersect_rates(const int *source_rates, int source_len,
> -			   const int *sink_rates, int sink_len,
> -			   int *common_rates)
> -{
> -	int i = 0, j = 0, k = 0;
> -
> -	while (i < source_len && j < sink_len) {
> -		if (source_rates[i] == sink_rates[j]) {
> -			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
> -				return k;
> -			common_rates[k] = source_rates[i];
> -			++k;
> -			++i;
> -			++j;
> -		} else if (source_rates[i] < sink_rates[j]) {
> -			++i;
> -		} else {
> -			++j;
> -		}
> -	}
> -	return k;
> -}
> -
> -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> -				 int *common_rates)
> -{
> -	const int *source_rates, *sink_rates;
> -	int source_len, sink_len;
> -
> -	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> -	source_len = intel_dp_source_rates(intel_dp, &source_rates);
> -
> -	return intersect_rates(source_rates, source_len,
> -			       sink_rates, sink_len,
> -			       common_rates);
> -}
> -
>  static void snprintf_int_array(char *str, size_t len,
>  			       const int *array, int nelem)
>  {
-- 
Mika Kahola - Intel OTC



* Re: [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-16  0:03   ` [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
@ 2016-09-16  9:29     ` Mika Kahola
  2016-09-16 18:45     ` [PATCH v7 " Manasi Navare
  1 sibling, 0 replies; 56+ messages in thread
From: Mika Kahola @ 2016-09-16  9:29 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Thu, 2016-09-15 at 17:03 -0700, Manasi Navare wrote:
> According to the DisplayPort Spec, in case of Clock Recovery failure
> the link training sequence should fall back to the lower link rate
> followed by lower lane count until CR succeeds.
> On CR success, the sequence proceeds with Channel EQ.
> In case of Channel EQ failures, it should fall back to
> lower link rate and lane count and start the CR phase again.
> 
> v6:
> * Do not split quoted string across line (Mika Kahola)
> v5:
> * Reset the link rate index to the max link rate index
> before lowering the lane count (Jani Nikula)
> * Use the paradigm for loop in intel_dp_link_rate_index
> v4:
> * Fixed the link rate fallback loop (Manasi Navare)
> v3:
> * Fixed some rebase issues (Mika Kahola)
> v2:
> * Add a helper function to return index of requested link rate
> into common_rates array
> * Changed the link rate fallback loop to make use
> of common_rates array (Mika Kahola)
> * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
> 
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_ddi.c              | 111
> +++++++++++++++++++++++---
>  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
>  drivers/gpu/drm/i915/intel_dp_link_training.c |  12 ++-
>  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
>  4 files changed, 130 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_ddi.c
> b/drivers/gpu/drm/i915/intel_ddi.c
> index 8065a5f..826d9f7 100644
> --- a/drivers/gpu/drm/i915/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/intel_ddi.c
> @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct
> intel_encoder *encoder,
>  	}
>  }
>  
> -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
>  				    int link_rate, uint32_t
> lane_count,
> -				    struct intel_shared_dpll *pll,
> -				    bool link_mst)
> +				    struct intel_shared_dpll *pll)
>  {
>  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>  	enum port port = intel_ddi_get_encoder_port(encoder);
>  
>  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
> -				 link_mst);
> -	if (encoder->type == INTEL_OUTPUT_EDP)
> -		intel_edp_panel_on(intel_dp);
> +				 false);
> +
> +	intel_edp_panel_on(intel_dp);
>  
>  	intel_ddi_clk_select(encoder, pll);
>  	intel_prepare_dp_ddi_buffers(encoder);
> @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct
> intel_encoder *encoder,
>  		intel_dp_stop_link_train(intel_dp);
>  }
>  
> +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> +				    int link_rate, uint32_t
> lane_count,
> +				    struct intel_shared_dpll *pll,
> +				    bool link_mst)
> +{
> +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_shared_dpll_config tmp_pll_config;
> +
> +	/* Disable the PLL and obtain the PLL for Link Training
> +	 * that starts with highest link rate and lane count.
> +	 */
> +	tmp_pll_config = pll->config;
> +	pll->funcs.disable(dev_priv, pll);
> +	pll->config.crtc_mask = 0;
> +
> +	/* If Link Training fails, send a uevent to generate a
> hotplug */
> +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count,
> link_mst))
> +		drm_kms_helper_hotplug_event(encoder->base.dev);
> +	pll->config = tmp_pll_config;
> +}
> +
>  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
>  				      bool has_hdmi_sink,
>  				      struct drm_display_mode
> *adjusted_mode,
> @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct
> intel_encoder *intel_encoder,
>  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
>  	int type = intel_encoder->type;
>  
> -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
> +	if (type == INTEL_OUTPUT_EDP)
> +		intel_ddi_pre_enable_edp(intel_encoder,
> +					crtc->config->port_clock,
> +					crtc->config->lane_count,
> +					crtc->config->shared_dpll);
> +
> +	if (type == INTEL_OUTPUT_DP)
>  		intel_ddi_pre_enable_dp(intel_encoder,
>  					crtc->config->port_clock,
>  					crtc->config->lane_count,
>  					crtc->config->shared_dpll,
>  					intel_crtc_has_type(crtc->config,
>  							    INTEL_OUTPUT_DP_MST));
> -	}
> -	if (type == INTEL_OUTPUT_HDMI) {
> +
> +	if (type == INTEL_OUTPUT_HDMI)
>  		intel_ddi_pre_enable_hdmi(intel_encoder,
>  					  crtc->config->has_hdmi_sink,
>  					  &crtc->config->base.adjusted_mode,
>  					  crtc->config->shared_dpll);
> -	}
> +
>  }
>  
>  static void intel_ddi_post_disable(struct intel_encoder
> *intel_encoder,
> @@ -2435,6 +2462,70 @@ intel_ddi_get_link_dpll(struct intel_dp
> *intel_dp, int clock)
>  	return pll;
>  }
>  
> +bool
> +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> +		     uint8_t max_lane_count, bool link_mst)
> +{
> +	struct intel_connector *connector = intel_dp->attached_connector;
> +	struct intel_encoder *encoder = connector->encoder;
> +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_shared_dpll *pll;
> +	struct intel_shared_dpll_config tmp_pll_config;
> +	int link_rate, max_link_rate_index, link_rate_index;
> +	uint8_t lane_count;
> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> +	bool ret = false;
> +
> +	max_link_rate_index = intel_dp_link_rate_index(intel_dp,
> common_rates,
> +						   max_link_rate);
I didn't spot this last round, but the indentation is a bit off here.

> +	if (max_link_rate_index < 0) {
> +		DRM_ERROR("Invalid Link Rate\n");
> +		return false;
> +	}
Maybe one empty line here

> +	for (lane_count = max_lane_count; lane_count > 0; lane_count
> >>= 1) {
> +		for (link_rate_index = max_link_rate_index;
> +		     link_rate_index >= 0; link_rate_index--) {
> +			link_rate = common_rates[link_rate_index];
> +			pll = intel_ddi_get_link_dpll(intel_dp,
> link_rate);
> +			if (pll == NULL) {
> +				DRM_ERROR("Could not find DPLL for
> link training.\n");
> +				return false;
> +			}
> +			tmp_pll_config = pll->config;
> +			pll->funcs.enable(dev_priv, pll);
> +
> +			intel_dp_set_link_params(intel_dp,
> link_rate,
> +						 lane_count,
> link_mst);
> +
> +			intel_ddi_clk_select(encoder, pll);
> +			intel_prepare_dp_ddi_buffers(encoder);
> +			intel_ddi_init_dp_buf_reg(encoder);
> +			intel_dp_sink_dpms(intel_dp,
> DRM_MODE_DPMS_ON);
> +			ret = intel_dp_start_link_train(intel_dp);
> +			if (ret)
> +				break;
> +
> +			/* Disable port followed by PLL for next
> +			 *retry/clean up
> +			 */
> +			intel_ddi_post_disable(encoder, NULL, NULL);
> +			pll->funcs.disable(dev_priv, pll);
> +			pll->config = tmp_pll_config;
> +		}
> +		if (ret) {
> +			DRM_DEBUG_KMS("Link Training successful at
> link rate: %d lane: %d\n",
> +				      link_rate, lane_count);
> +			break;
> +		}
> +	}
Maybe one empty line here too.

> +	intel_dp_stop_link_train(intel_dp);
> +
> +	if (!lane_count)
> +		DRM_ERROR("Link Training Failed\n");
> +
> +	return ret;
> +}
> +
>  void intel_ddi_init(struct drm_device *dev, enum port port)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(dev);
> diff --git a/drivers/gpu/drm/i915/intel_dp.c
> b/drivers/gpu/drm/i915/intel_dp.c
> index 69cee9b..d81c67cb 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp
> *intel_dp)
>  	return rates[len - 1];
>  }
>  
> +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int
> *common_rates,
> +			     int link_rate)
> +{
> +	int common_len;
> +	int index;
> +
> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> +	for (index = 0; index < common_len; index++) {
> +		if (link_rate == common_rates[common_len - index -
> 1])
> +			return common_len - index - 1;
> +	}
> +
> +	return -1;
> +}
> +
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
>  {
>  	return rate_to_index(rate, intel_dp->sink_rates);
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c
> b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index c438b02..f1e08f0 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -313,9 +313,15 @@ void intel_dp_stop_link_train(struct intel_dp
> *intel_dp)
>  				DP_TRAINING_PATTERN_DISABLE);
>  }
>  
> -void
> +bool
>  intel_dp_start_link_train(struct intel_dp *intel_dp)
>  {
> -	intel_dp_link_training_clock_recovery(intel_dp);
> -	intel_dp_link_training_channel_equalization(intel_dp);
> +	bool ret;
> +
> +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
> +		ret =
> intel_dp_link_training_channel_equalization(intel_dp);
> +		if (ret)
> +			return true;
> +	}
And maybe one empty line here too
> +	return false;
>  }
> diff --git a/drivers/gpu/drm/i915/intel_drv.h
> b/drivers/gpu/drm/i915/intel_drv.h
> index 8fd16ad..08cb571 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder
> *encoder,
>  			 struct intel_crtc_state *pipe_config);
>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool
> state);
>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
> +bool intel_ddi_link_train(struct intel_dp *intel_dp, int
> max_link_rate,
> +			  uint8_t max_lane_count, bool link_mst);
>  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp
> *intel_dp,
>  						  int clock);
>  unsigned int intel_fb_align_height(struct drm_device *dev,
> @@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct
> intel_digital_port *intel_dig_port,
>  void intel_dp_set_link_params(struct intel_dp *intel_dp,
>  			      int link_rate, uint8_t lane_count,
>  			      bool link_mst);
> -void intel_dp_start_link_train(struct intel_dp *intel_dp);
> +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
>  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
>  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
>  void intel_dp_encoder_reset(struct drm_encoder *encoder);
> @@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp
> *intel_dp, struct drm_connector *co
>  void intel_dp_mst_suspend(struct drm_device *dev);
>  void intel_dp_mst_resume(struct drm_device *dev);
>  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int
> *common_rates,
> +			     int link_rate);
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
>  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
>  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
-- 
Mika Kahola - Intel OTC



* [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-16  0:03   ` [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
  2016-09-16  9:29     ` Mika Kahola
@ 2016-09-16 18:45     ` Manasi Navare
  2016-09-26 13:39       ` Jani Nikula
  1 sibling, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-16 18:45 UTC (permalink / raw)
  To: intel-gfx

According to the DisplayPort Spec, in case of Clock Recovery failure
the link training sequence should fall back to the lower link rate
followed by lower lane count until CR succeeds.
On CR success, the sequence proceeds with Channel EQ.
In case of Channel EQ failures, it should fall back to
lower link rate and lane count and start the CR phase again.

v7:
* Address readability concerns (Mika Kahola)
v6:
* Do not split quoted string across line (Mika Kahola)
v5:
* Reset the link rate index to the max link rate index
before lowering the lane count (Jani Nikula)
* Use the paradigm for loop in intel_dp_link_rate_index
v4:
* Fixed the link rate fallback loop (Manasi Navare)
v3:
* Fixed some rebase issues (Mika Kahola)
v2:
* Add a helper function to return index of requested link rate
into common_rates array
* Changed the link rate fallback loop to make use
of common_rates array (Mika Kahola)
* Changed INTEL_INFO to INTEL_GEN (David Weinehall)

Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_ddi.c              | 113 +++++++++++++++++++++++---
 drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
 drivers/gpu/drm/i915/intel_dp_link_training.c |  13 ++-
 drivers/gpu/drm/i915/intel_drv.h              |   6 +-
 4 files changed, 133 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index 8065a5f..093038c 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
 	}
 }
 
-static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
 				    int link_rate, uint32_t lane_count,
-				    struct intel_shared_dpll *pll,
-				    bool link_mst)
+				    struct intel_shared_dpll *pll)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	enum port port = intel_ddi_get_encoder_port(encoder);
 
 	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
-				 link_mst);
-	if (encoder->type == INTEL_OUTPUT_EDP)
-		intel_edp_panel_on(intel_dp);
+				 false);
+
+	intel_edp_panel_on(intel_dp);
 
 	intel_ddi_clk_select(encoder, pll);
 	intel_prepare_dp_ddi_buffers(encoder);
@@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 		intel_dp_stop_link_train(intel_dp);
 }
 
+static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+				    int link_rate, uint32_t lane_count,
+				    struct intel_shared_dpll *pll,
+				    bool link_mst)
+{
+	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_shared_dpll_config tmp_pll_config;
+
+	/* Disable the PLL and obtain the PLL for Link Training
+	 * that starts with highest link rate and lane count.
+	 */
+	tmp_pll_config = pll->config;
+	pll->funcs.disable(dev_priv, pll);
+	pll->config.crtc_mask = 0;
+
+	/* If Link Training fails, send a uevent to generate a hotplug */
+	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
+		drm_kms_helper_hotplug_event(encoder->base.dev);
+	pll->config = tmp_pll_config;
+}
+
 static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
 				      bool has_hdmi_sink,
 				      struct drm_display_mode *adjusted_mode,
@@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
 	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
 	int type = intel_encoder->type;
 
-	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
+	if (type == INTEL_OUTPUT_EDP)
+		intel_ddi_pre_enable_edp(intel_encoder,
+					crtc->config->port_clock,
+					crtc->config->lane_count,
+					crtc->config->shared_dpll);
+
+	if (type == INTEL_OUTPUT_DP)
 		intel_ddi_pre_enable_dp(intel_encoder,
 					crtc->config->port_clock,
 					crtc->config->lane_count,
 					crtc->config->shared_dpll,
 					intel_crtc_has_type(crtc->config,
 							    INTEL_OUTPUT_DP_MST));
-	}
-	if (type == INTEL_OUTPUT_HDMI) {
+
+	if (type == INTEL_OUTPUT_HDMI)
 		intel_ddi_pre_enable_hdmi(intel_encoder,
 					  crtc->config->has_hdmi_sink,
 					  &crtc->config->base.adjusted_mode,
 					  crtc->config->shared_dpll);
-	}
+
 }
 
 static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
@@ -2435,6 +2462,72 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
 	return pll;
 }
 
+bool
+intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
+		     uint8_t max_lane_count, bool link_mst)
+{
+	struct intel_connector *connector = intel_dp->attached_connector;
+	struct intel_encoder *encoder = connector->encoder;
+	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct intel_shared_dpll *pll;
+	struct intel_shared_dpll_config tmp_pll_config;
+	int link_rate, max_link_rate_index, link_rate_index;
+	uint8_t lane_count;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+	bool ret = false;
+
+	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
+						       max_link_rate);
+	if (max_link_rate_index < 0) {
+		DRM_ERROR("Invalid Link Rate\n");
+		return false;
+	}
+
+	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
+		for (link_rate_index = max_link_rate_index;
+		     link_rate_index >= 0; link_rate_index--) {
+			link_rate = common_rates[link_rate_index];
+			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
+			if (pll == NULL) {
+				DRM_ERROR("Could not find DPLL for link training.\n");
+				return false;
+			}
+			tmp_pll_config = pll->config;
+			pll->funcs.enable(dev_priv, pll);
+
+			intel_dp_set_link_params(intel_dp, link_rate,
+						 lane_count, link_mst);
+
+			intel_ddi_clk_select(encoder, pll);
+			intel_prepare_dp_ddi_buffers(encoder);
+			intel_ddi_init_dp_buf_reg(encoder);
+			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+			ret = intel_dp_start_link_train(intel_dp);
+			if (ret)
+				break;
+
+			/* Disable port followed by PLL for next
+			 *retry/clean up
+			 */
+			intel_ddi_post_disable(encoder, NULL, NULL);
+			pll->funcs.disable(dev_priv, pll);
+			pll->config = tmp_pll_config;
+		}
+		if (ret) {
+			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
+				      link_rate, lane_count);
+			break;
+		}
+	}
+
+	intel_dp_stop_link_train(intel_dp);
+
+	if (!lane_count)
+		DRM_ERROR("Link Training Failed\n");
+
+	return ret;
+}
+
 void intel_ddi_init(struct drm_device *dev, enum port port)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 69cee9b..d81c67cb 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
 	return rates[len - 1];
 }
 
+int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
+			     int link_rate)
+{
+	int common_len;
+	int index;
+
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	for (index = 0; index < common_len; index++) {
+		if (link_rate == common_rates[common_len - index - 1])
+			return common_len - index - 1;
+	}
+
+	return -1;
+}
+
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
 {
 	return rate_to_index(rate, intel_dp->sink_rates);
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index c438b02..6eb5eb6 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -313,9 +313,16 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
 				DP_TRAINING_PATTERN_DISABLE);
 }
 
-void
+bool
 intel_dp_start_link_train(struct intel_dp *intel_dp)
 {
-	intel_dp_link_training_clock_recovery(intel_dp);
-	intel_dp_link_training_channel_equalization(intel_dp);
+	bool ret;
+
+	if (intel_dp_link_training_clock_recovery(intel_dp)) {
+		ret = intel_dp_link_training_channel_equalization(intel_dp);
+		if (ret)
+			return true;
+	}
+
+	return false;
 }
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 8fd16ad..08cb571 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
 			 struct intel_crtc_state *pipe_config);
 void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
 uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
+bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
+			  uint8_t max_lane_count, bool link_mst);
 struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
 						  int clock);
 unsigned int intel_fb_align_height(struct drm_device *dev,
@@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 void intel_dp_set_link_params(struct intel_dp *intel_dp,
 			      int link_rate, uint8_t lane_count,
 			      bool link_mst);
-void intel_dp_start_link_train(struct intel_dp *intel_dp);
+bool intel_dp_start_link_train(struct intel_dp *intel_dp);
 void intel_dp_stop_link_train(struct intel_dp *intel_dp);
 void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
 void intel_dp_encoder_reset(struct drm_encoder *encoder);
@@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
 void intel_dp_mst_suspend(struct drm_device *dev);
 void intel_dp_mst_resume(struct drm_device *dev);
 int intel_dp_max_link_rate(struct intel_dp *intel_dp);
+int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
+			     int link_rate);
 int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
 void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
 void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
-- 
1.9.1



* ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev2)
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (6 preceding siblings ...)
  2016-09-16  0:47   ` ✓ Fi.CI.BAT: success for series starting with [v6,1/6] drm/i915: Fallback to lower link rate and lane count during link training Patchwork
@ 2016-09-16 19:25   ` Patchwork
  2016-09-20  8:45   ` [PATCH 0/6] Remaining patches for upfront link training on DDI platforms Jani Nikula
  2016-09-20 22:49   ` ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev3) Patchwork
  9 siblings, 0 replies; 56+ messages in thread
From: Patchwork @ 2016-09-16 19:25 UTC (permalink / raw)
  To: Navare, Manasi D; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev2)
URL   : https://patchwork.freedesktop.org/series/12534/
State : success

== Summary ==

Series 12534v2 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/12534/revisions/2/mbox/

Test kms_pipe_crc_basic:
        Subgroup suspend-read-crc-pipe-a:
                skip       -> PASS       (fi-hsw-4770r)
        Subgroup suspend-read-crc-pipe-c:
                incomplete -> PASS       (fi-hsw-4770k)

fi-bdw-5557u     total:244  pass:229  dwarn:0   dfail:0   fail:0   skip:15 
fi-bsw-n3050     total:244  pass:202  dwarn:0   dfail:0   fail:0   skip:42 
fi-byt-n2820     total:244  pass:208  dwarn:0   dfail:0   fail:1   skip:35 
fi-hsw-4770k     total:244  pass:226  dwarn:0   dfail:0   fail:0   skip:18 
fi-hsw-4770r     total:244  pass:222  dwarn:0   dfail:0   fail:0   skip:22 
fi-ilk-650       total:244  pass:183  dwarn:0   dfail:0   fail:1   skip:60 
fi-ivb-3520m     total:244  pass:219  dwarn:0   dfail:0   fail:0   skip:25 
fi-ivb-3770      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6260u     total:244  pass:230  dwarn:0   dfail:0   fail:0   skip:14 
fi-skl-6700hq    total:244  pass:221  dwarn:0   dfail:0   fail:1   skip:22 
fi-skl-6700k     total:244  pass:219  dwarn:1   dfail:0   fail:0   skip:24 
fi-snb-2520m     total:244  pass:208  dwarn:0   dfail:0   fail:0   skip:36 
fi-snb-2600      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6770hq failed to collect. IGT log at Patchwork_2548/fi-skl-6770hq/igt.log

Results at /archive/results/CI_IGT_test/Patchwork_2548/

e001a39d3a5cf1630ec4e83815794ec7ad507ef6 drm-intel-nightly: 2016y-09m-16d-11h-18m-48s UTC integration manifest
e5c3d22 drm/i915/dp/mst: Add support for upfront link training for DP MST
89c0f06 drm/i915/dp: Enable Upfront link training on DDI platforms
22f5d24 drm/i915: Code cleanup to use dev_priv and INTEL_GEN
32cfde7 drm/i915: Change the placement of some static functions in intel_dp.c
bea603b drm/i915: Remove the link rate and lane count loop in compute config
2f48ded drm/i915: Fallback to lower link rate and lane count during link training

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST
  2016-09-15 19:25     ` Manasi Navare
@ 2016-09-19 17:03       ` Jim Bride
  2016-09-19 17:22         ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Jim Bride @ 2016-09-19 17:03 UTC (permalink / raw)
  To: Manasi Navare; +Cc: intel-gfx, Pandiyan, Dhinakaran

On Thu, Sep 15, 2016 at 12:25:51PM -0700, Manasi Navare wrote:
> On Thu, Sep 15, 2016 at 10:48:17AM -0700, Pandiyan, Dhinakaran wrote:
> > On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> > > From: Jim Bride <jim.bride@linux.intel.com>
> > > 
> > > Add upfront link training to intel_dp_mst_mode_valid() so that we know
> > > topology constraints before we validate the legality of modes to be
> > > checked.
> > > 
> > 
> > The patch seems to do a lot more than what is described
> > here. I guess it would be better to split this into multiple patches, or
> > at least provide an adequate description here.
> > 
> 
I think the only other thing it's doing is making some functions
non-static so they can be used for MST upfront enabling.
> But I think that can be in the same patch since it is done in order
> to enable upfront for MST.
> Jim, any thoughts?

This is exactly the case.

Jim

> 
> > > v3:
> > > * Reset the upfront values but don't unset the EDID for MST. (Manasi)
> > > v2:
> > > * Rebased on new revision of link training patch. (Manasi)
> > > 
> > > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > > Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/intel_dp.c     | 15 ++++----
> > >  drivers/gpu/drm/i915/intel_dp_mst.c | 74 +++++++++++++++++++++++++++----------
> > >  drivers/gpu/drm/i915/intel_drv.h    |  3 ++
> > >  3 files changed, 64 insertions(+), 28 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > > index 9042d28..635830e 100644
> > > --- a/drivers/gpu/drm/i915/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > > @@ -131,7 +131,7 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
> > >  				      enum pipe pipe);
> > >  static void intel_dp_unset_edid(struct intel_dp *intel_dp);
> > >  
> > > -static int
> > > +int
> > >  intel_dp_max_link_bw(struct intel_dp  *intel_dp)
> > >  {
> > >  	int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
> > > @@ -150,7 +150,7 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
> > >  	return max_link_bw;
> > >  }
> > >  
> > > -static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> > > +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> > >  {
> > >  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> > >  	u8 temp, source_max, sink_max;
> > > @@ -296,8 +296,7 @@ static int intersect_rates(const int *source_rates, int source_len,
> > >  	return k;
> > >  }
> > >  
> > > -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > > -				 int *common_rates)
> > > +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates)
> > >  {
> > >  	const int *source_rates, *sink_rates;
> > >  	int source_len, sink_len;
> > > @@ -321,7 +320,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > >  			       common_rates);
> > >  }
> > >  
> > > -static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> > > +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> > >  {
> > >  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> > >  	struct intel_encoder *intel_encoder = &intel_dig_port->base;
> > > @@ -4545,12 +4544,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
> > >  	}
> > >  
> > >  out:
> > > -	if ((status != connector_status_connected) &&
> > > -	    (intel_dp->is_mst == false)) {
> > > -		intel_dp_unset_edid(intel_dp);
> > > +	if (status != connector_status_connected) {
> > >  		intel_dp->upfront_done = false;
> > >  		intel_dp->max_lanes_upfront = 0;
> > >  		intel_dp->max_link_rate_upfront = 0;
> > > +		if (intel_dp->is_mst == false)
> > > +			intel_dp_unset_edid(intel_dp);
> > >  	}
> > >  
> > >  	intel_display_power_put(to_i915(dev), power_domain);
> > > diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
> > > index 54a9d76..98d45a4 100644
> > > --- a/drivers/gpu/drm/i915/intel_dp_mst.c
> > > +++ b/drivers/gpu/drm/i915/intel_dp_mst.c
> > > @@ -41,21 +41,30 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
> > >  	int bpp;
> > >  	int lane_count, slots;
> > >  	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> > > -	int mst_pbn;
> > > +	int mst_pbn, common_len;
> > > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> > >  
> > >  	pipe_config->dp_encoder_is_mst = true;
> > >  	pipe_config->has_pch_encoder = false;
> > > -	bpp = 24;
> > > +
> > >  	/*
> > > -	 * for MST we always configure max link bw - the spec doesn't
> > > -	 * seem to suggest we should do otherwise.
> > > +	 * For MST we always configure for the maximum trainable link bw -
> > > +	 * the spec doesn't seem to suggest we should do otherwise.  The
> > > +	 * calls to intel_dp_max_lane_count() and intel_dp_common_rates()
> > > +	 * both take successful upfront link training into account, and
> > > +	 * return the DisplayPort max supported values in the event that
> > > +	 * upfront link training was not done.
> > >  	 */
> > > -	lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
> > > +	lane_count = intel_dp_max_lane_count(intel_dp);
> > >  
> > >  	pipe_config->lane_count = lane_count;
> > >  
> > > -	pipe_config->pipe_bpp = 24;
> > > -	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
> > > +	pipe_config->pipe_bpp = bpp = 24;
> > > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> > > +	pipe_config->port_clock = common_rates[common_len - 1];
> > > +
> > > +	DRM_DEBUG_KMS("DP MST link configured for %d lanes @ %d.\n",
> > > +		      pipe_config->lane_count, pipe_config->port_clock);
> > >  
> > >  	state = pipe_config->base.state;
> > >  
> > > @@ -137,6 +146,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
> > >  	enum port port = intel_dig_port->port;
> > >  	struct intel_connector *connector =
> > >  		to_intel_connector(conn_state->connector);
> > > +	struct intel_shared_dpll *pll = pipe_config->shared_dpll;
> > > +	struct intel_shared_dpll_config tmp_pll_config;
> > >  	int ret;
> > >  	uint32_t temp;
> > >  	int slots;
> > > @@ -150,21 +161,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
> > >  	DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
> > >  
> > >  	if (intel_dp->active_mst_links == 0) {
> > > -		intel_ddi_clk_select(&intel_dig_port->base,
> > > -				     pipe_config->shared_dpll);
> > > -
> > > -		intel_prepare_dp_ddi_buffers(&intel_dig_port->base);
> > > -		intel_dp_set_link_params(intel_dp,
> > > -					 pipe_config->port_clock,
> > > -					 pipe_config->lane_count,
> > > -					 true);
> > > -
> > > -		intel_ddi_init_dp_buf_reg(&intel_dig_port->base);
> > >  
> > > -		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> > > +		/* Disable the PLL since we need to acquire the PLL
> > > +		 * based on the link rate in the link training sequence
> > > +		 */
> > > +		tmp_pll_config = pll->config;
> > > +		pll->funcs.disable(dev_priv, pll);
> > > +		pll->config.crtc_mask = 0;
> > > +
> > > > +		/* If Link Training fails, send a uevent to generate a
> > > > +		 * hotplug
> > > > +		 */
> > > +		if (!(intel_ddi_link_train(intel_dp, pipe_config->port_clock,
> > > +					   pipe_config->lane_count, true,
> > > +					   false)))
> > > +			drm_kms_helper_hotplug_event(encoder->base.dev);
> > > +		pll->config = tmp_pll_config;
> > >  
> > > -		intel_dp_start_link_train(intel_dp);
> > > -		intel_dp_stop_link_train(intel_dp);
> > >  	}
> > >  
> > >  	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
> > > @@ -336,6 +349,27 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
> > >  			struct drm_display_mode *mode)
> > >  {
> > >  	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
> > > +	struct intel_connector *intel_connector = to_intel_connector(connector);
> > > +	struct intel_dp *intel_dp = intel_connector->mst_port;
> > > +
> > > +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
> > > +		bool do_upfront_link_train;
> > > +
> > > +		do_upfront_link_train = intel_dp->compliance_test_type !=
> > > +			DP_TEST_LINK_TRAINING;
> > > +		if (do_upfront_link_train) {
> > > +			intel_dp->upfront_done =
> > > +				intel_dp_upfront_link_train(intel_dp);
> > > +			if (intel_dp->upfront_done) {
> > > +				DRM_DEBUG_KMS("MST upfront trained at "
> > > +					      "%d lanes @ %d.",
> > > +					      intel_dp->max_lanes_upfront,
> > > +					      intel_dp->max_link_rate_upfront);
> > > +			} else
> > > +				DRM_DEBUG_KMS("MST upfront link training "
> > > +					      "failed.");
> > > +		}
> > > +	}
> > >  
> > >  	/* TODO - validate mode against available PBN for link */
> > >  	if (mode->clock < 10000)
> > > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > > index fc2f1bc..b4bc002 100644
> > > --- a/drivers/gpu/drm/i915/intel_drv.h
> > > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > > @@ -1416,6 +1416,7 @@ void intel_edp_panel_off(struct intel_dp *intel_dp);
> > >  void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
> > >  void intel_dp_mst_suspend(struct drm_device *dev);
> > >  void intel_dp_mst_resume(struct drm_device *dev);
> > > +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp);
> > >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> > >  int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> > >  			     int link_rate);
> > > @@ -1446,6 +1447,8 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
> > >  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
> > >  			   uint8_t *link_bw, uint8_t *rate_select);
> > >  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
> > > +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates);
> > > +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp);
> > >  bool
> > >  intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
> > >  
> > 


* Re: [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST
  2016-09-19 17:03       ` Jim Bride
@ 2016-09-19 17:22         ` Manasi Navare
  0 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-19 17:22 UTC (permalink / raw)
  To: Jim Bride; +Cc: intel-gfx, Pandiyan, Dhinakaran

On Mon, Sep 19, 2016 at 10:03:29AM -0700, Jim Bride wrote:
> On Thu, Sep 15, 2016 at 12:25:51PM -0700, Manasi Navare wrote:
> > On Thu, Sep 15, 2016 at 10:48:17AM -0700, Pandiyan, Dhinakaran wrote:
> > > On Tue, 2016-09-13 at 18:08 -0700, Manasi Navare wrote:
> > > > From: Jim Bride <jim.bride@linux.intel.com>
> > > > 
> > > > Add upfront link training to intel_dp_mst_mode_valid() so that we know
> > > > topology constraints before we validate the legality of modes to be
> > > > checked.
> > > > 
> > > 
> > > The patch seems to do a lot more than what is described
> > > here. I guess it would be better to split this into multiple patches, or
> > > at least provide an adequate description here.
> > > 
> > 
> > I think the only other thing it's doing is making some functions
> > non-static so they can be used for MST upfront enabling.
> > But I think that can be in the same patch since it is done in order
> > to enable upfront for MST.
> > Jim, any thoughts?
> 
> This is exactly the case.
> 
> Jim

I have already resubmitted a newer version of the patch with a description
that is more coherent with what the patch actually does.

Manasi
> 
> > 
> > > > v3:
> > > > * Reset the upfront values but don't unset the EDID for MST. (Manasi)
> > > > v2:
> > > > * Rebased on new revision of link training patch. (Manasi)
> > > > 
> > > > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > > > Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/intel_dp.c     | 15 ++++----
> > > >  drivers/gpu/drm/i915/intel_dp_mst.c | 74 +++++++++++++++++++++++++++----------
> > > >  drivers/gpu/drm/i915/intel_drv.h    |  3 ++
> > > >  3 files changed, 64 insertions(+), 28 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > > > index 9042d28..635830e 100644
> > > > --- a/drivers/gpu/drm/i915/intel_dp.c
> > > > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > > > @@ -131,7 +131,7 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
> > > >  				      enum pipe pipe);
> > > >  static void intel_dp_unset_edid(struct intel_dp *intel_dp);
> > > >  
> > > > -static int
> > > > +int
> > > >  intel_dp_max_link_bw(struct intel_dp  *intel_dp)
> > > >  {
> > > >  	int max_link_bw = intel_dp->dpcd[DP_MAX_LINK_RATE];
> > > > @@ -150,7 +150,7 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
> > > >  	return max_link_bw;
> > > >  }
> > > >  
> > > > -static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> > > > +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
> > > >  {
> > > >  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> > > >  	u8 temp, source_max, sink_max;
> > > > @@ -296,8 +296,7 @@ static int intersect_rates(const int *source_rates, int source_len,
> > > >  	return k;
> > > >  }
> > > >  
> > > > -static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > > > -				 int *common_rates)
> > > > +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates)
> > > >  {
> > > >  	const int *source_rates, *sink_rates;
> > > >  	int source_len, sink_len;
> > > > @@ -321,7 +320,7 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
> > > >  			       common_rates);
> > > >  }
> > > >  
> > > > -static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> > > > +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> > > >  {
> > > >  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> > > >  	struct intel_encoder *intel_encoder = &intel_dig_port->base;
> > > > @@ -4545,12 +4544,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
> > > >  	}
> > > >  
> > > >  out:
> > > > -	if ((status != connector_status_connected) &&
> > > > -	    (intel_dp->is_mst == false)) {
> > > > -		intel_dp_unset_edid(intel_dp);
> > > > +	if (status != connector_status_connected) {
> > > >  		intel_dp->upfront_done = false;
> > > >  		intel_dp->max_lanes_upfront = 0;
> > > >  		intel_dp->max_link_rate_upfront = 0;
> > > > +		if (intel_dp->is_mst == false)
> > > > +			intel_dp_unset_edid(intel_dp);
> > > >  	}
> > > >  
> > > >  	intel_display_power_put(to_i915(dev), power_domain);
> > > > diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
> > > > index 54a9d76..98d45a4 100644
> > > > --- a/drivers/gpu/drm/i915/intel_dp_mst.c
> > > > +++ b/drivers/gpu/drm/i915/intel_dp_mst.c
> > > > @@ -41,21 +41,30 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
> > > >  	int bpp;
> > > >  	int lane_count, slots;
> > > >  	const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
> > > > -	int mst_pbn;
> > > > +	int mst_pbn, common_len;
> > > > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> > > >  
> > > >  	pipe_config->dp_encoder_is_mst = true;
> > > >  	pipe_config->has_pch_encoder = false;
> > > > -	bpp = 24;
> > > > +
> > > >  	/*
> > > > -	 * for MST we always configure max link bw - the spec doesn't
> > > > -	 * seem to suggest we should do otherwise.
> > > > +	 * For MST we always configure for the maximum trainable link bw -
> > > > +	 * the spec doesn't seem to suggest we should do otherwise.  The
> > > > +	 * calls to intel_dp_max_lane_count() and intel_dp_common_rates()
> > > > +	 * both take successful upfront link training into account, and
> > > > +	 * return the DisplayPort max supported values in the event that
> > > > +	 * upfront link training was not done.
> > > >  	 */
> > > > -	lane_count = drm_dp_max_lane_count(intel_dp->dpcd);
> > > > +	lane_count = intel_dp_max_lane_count(intel_dp);
> > > >  
> > > >  	pipe_config->lane_count = lane_count;
> > > >  
> > > > -	pipe_config->pipe_bpp = 24;
> > > > -	pipe_config->port_clock = intel_dp_max_link_rate(intel_dp);
> > > > +	pipe_config->pipe_bpp = bpp = 24;
> > > > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> > > > +	pipe_config->port_clock = common_rates[common_len - 1];
> > > > +
> > > > +	DRM_DEBUG_KMS("DP MST link configured for %d lanes @ %d.\n",
> > > > +		      pipe_config->lane_count, pipe_config->port_clock);
> > > >  
> > > >  	state = pipe_config->base.state;
> > > >  
> > > > @@ -137,6 +146,8 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
> > > >  	enum port port = intel_dig_port->port;
> > > >  	struct intel_connector *connector =
> > > >  		to_intel_connector(conn_state->connector);
> > > > +	struct intel_shared_dpll *pll = pipe_config->shared_dpll;
> > > > +	struct intel_shared_dpll_config tmp_pll_config;
> > > >  	int ret;
> > > >  	uint32_t temp;
> > > >  	int slots;
> > > > @@ -150,21 +161,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
> > > >  	DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
> > > >  
> > > >  	if (intel_dp->active_mst_links == 0) {
> > > > -		intel_ddi_clk_select(&intel_dig_port->base,
> > > > -				     pipe_config->shared_dpll);
> > > > -
> > > > -		intel_prepare_dp_ddi_buffers(&intel_dig_port->base);
> > > > -		intel_dp_set_link_params(intel_dp,
> > > > -					 pipe_config->port_clock,
> > > > -					 pipe_config->lane_count,
> > > > -					 true);
> > > > -
> > > > -		intel_ddi_init_dp_buf_reg(&intel_dig_port->base);
> > > >  
> > > > -		intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> > > > +		/* Disable the PLL since we need to acquire the PLL
> > > > +		 * based on the link rate in the link training sequence
> > > > +		 */
> > > > +		tmp_pll_config = pll->config;
> > > > +		pll->funcs.disable(dev_priv, pll);
> > > > +		pll->config.crtc_mask = 0;
> > > > +
> > > > +		/* If Link Training fails, send a uevent to generate a
> > > > +		 * hotplug
> > > > +		 */
> > > > +		if (!(intel_ddi_link_train(intel_dp, pipe_config->port_clock,
> > > > +					   pipe_config->lane_count, true,
> > > > +					   false)))
> > > > +			drm_kms_helper_hotplug_event(encoder->base.dev);
> > > > +		pll->config = tmp_pll_config;
> > > >  
> > > > -		intel_dp_start_link_train(intel_dp);
> > > > -		intel_dp_stop_link_train(intel_dp);
> > > >  	}
> > > >  
> > > >  	ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
> > > > @@ -336,6 +349,27 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
> > > >  			struct drm_display_mode *mode)
> > > >  {
> > > >  	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
> > > > +	struct intel_connector *intel_connector = to_intel_connector(connector);
> > > > +	struct intel_dp *intel_dp = intel_connector->mst_port;
> > > > +
> > > > +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
> > > > +		bool do_upfront_link_train;
> > > > +
> > > > +		do_upfront_link_train = intel_dp->compliance_test_type !=
> > > > +			DP_TEST_LINK_TRAINING;
> > > > +		if (do_upfront_link_train) {
> > > > +			intel_dp->upfront_done =
> > > > +				intel_dp_upfront_link_train(intel_dp);
> > > > +			if (intel_dp->upfront_done) {
> > > > +				DRM_DEBUG_KMS("MST upfront trained at "
> > > > +					      "%d lanes @ %d.",
> > > > +					      intel_dp->max_lanes_upfront,
> > > > +					      intel_dp->max_link_rate_upfront);
> > > > +			} else
> > > > +				DRM_DEBUG_KMS("MST upfront link training "
> > > > +					      "failed.");
> > > > +		}
> > > > +	}
> > > >  
> > > >  	/* TODO - validate mode against available PBN for link */
> > > >  	if (mode->clock < 10000)
> > > > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > > > index fc2f1bc..b4bc002 100644
> > > > --- a/drivers/gpu/drm/i915/intel_drv.h
> > > > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > > > @@ -1416,6 +1416,7 @@ void intel_edp_panel_off(struct intel_dp *intel_dp);
> > > >  void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector);
> > > >  void intel_dp_mst_suspend(struct drm_device *dev);
> > > >  void intel_dp_mst_resume(struct drm_device *dev);
> > > > +u8 intel_dp_max_lane_count(struct intel_dp *intel_dp);
> > > >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> > > >  int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> > > >  			     int link_rate);
> > > > @@ -1446,6 +1447,8 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
> > > >  void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
> > > >  			   uint8_t *link_bw, uint8_t *rate_select);
> > > >  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
> > > > +int intel_dp_common_rates(struct intel_dp *intel_dp, int *common_rates);
> > > > +bool intel_dp_upfront_link_train(struct intel_dp *intel_dp);
> > > >  bool
> > > >  intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
> > > >  
> > > 


* Re: [PATCH 0/6] Remaining patches for upfront link training on DDI platforms
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (7 preceding siblings ...)
  2016-09-16 19:25   ` ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev2) Patchwork
@ 2016-09-20  8:45   ` Jani Nikula
  2016-09-20 22:49   ` ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev3) Patchwork
  9 siblings, 0 replies; 56+ messages in thread
From: Jani Nikula @ 2016-09-20  8:45 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> This patch series includes some of the remaining patches to enable
> upfront link training on DDI platforms for DP SST and MST.
> They are based on some of the patches submitted earlier by
> Ander and Durgadoss.

When you post new versions of an entire series, please post them in a
fresh thread, without --in-reply-to.

Thanks,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center


* [PATCH v18 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms
  2016-09-16  0:04   ` [PATCH v17 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
@ 2016-09-20 22:04     ` Manasi Navare
  2016-09-27 13:59       ` Jani Nikula
  2016-09-29 12:15       ` Jani Nikula
  0 siblings, 2 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-20 22:04 UTC (permalink / raw)
  To: intel-gfx

To support USB type-C alternate DP mode, the display driver needs to
know the number of lanes required by the DP panel as well as the
number of lanes that can be supported by the type-C cable. Sometimes
the type-C cable may limit the bandwidth even if the panel can support
more lanes. To address these scenarios we need to train the link before
modeset. This upfront link training caches the values of the maximum
link rate and lane count, which are used later during modeset. Upfront
link training does not change any HW state; the link is disabled and
the PLL values are reset to their previous values after upfront link
training, so that the subsequent modeset is not aware of these changes.

This patch is based on prior work done by
R,Durgadoss <durgadoss.r@intel.com>

Changes since v17:
* Rebased on the latest nightly
Changes since v16:
* Use HAS_DDI macro for enabling this feature (Rodrigo Vivi)
* Fix some unnecessary removals/changes due to rebase (Rodrigo Vivi)

Changes since v15:
* Split this patch into two patches - one with the functional
changes to enable upfront link training, and the other moving the
existing functions around so that they can be used for upfront (Jani Nikula)
* Cleaned up the commit message

Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
---
 drivers/gpu/drm/i915/intel_ddi.c              |  21 ++-
 drivers/gpu/drm/i915/intel_dp.c               | 190 +++++++++++++++++++++++++-
 drivers/gpu/drm/i915/intel_dp_link_training.c |   1 -
 drivers/gpu/drm/i915/intel_drv.h              |  14 +-
 4 files changed, 218 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index 093038c..8e52507 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1676,7 +1676,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	pll->config.crtc_mask = 0;
 
 	/* If Link Training fails, send a uevent to generate a hotplug */
-	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
+	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst,
+				  false))
 		drm_kms_helper_hotplug_event(encoder->base.dev);
 	pll->config = tmp_pll_config;
 }
@@ -2464,7 +2465,7 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
 
 bool
 intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
-		     uint8_t max_lane_count, bool link_mst)
+		     uint8_t max_lane_count, bool link_mst, bool is_upfront)
 {
 	struct intel_connector *connector = intel_dp->attached_connector;
 	struct intel_encoder *encoder = connector->encoder;
@@ -2513,6 +2514,7 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
 			pll->funcs.disable(dev_priv, pll);
 			pll->config = tmp_pll_config;
 		}
+
 		if (ret) {
 			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
 				      link_rate, lane_count);
@@ -2522,6 +2524,21 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
 
 	intel_dp_stop_link_train(intel_dp);
 
+	if (is_upfront) {
+		DRM_DEBUG_KMS("Upfront link train %s: link_clock:%d lanes:%d\n",
+			      ret ? "Passed" : "Failed",
+			      link_rate, lane_count);
+		/* Disable port followed by PLL for next retry/clean up */
+		intel_ddi_post_disable(encoder, NULL, NULL);
+		pll->funcs.disable(dev_priv, pll);
+		pll->config = tmp_pll_config;
+		if (ret) {
+			/* Save the upfront values */
+			intel_dp->max_lanes_upfront = lane_count;
+			intel_dp->max_link_rate_upfront = link_rate;
+		}
+	}
+
 	if (!lane_count)
 		DRM_ERROR("Link Training Failed\n");
 
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 8d9a8ab..a058d5d 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -153,12 +153,21 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
 static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 {
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
-	u8 source_max, sink_max;
+	u8 temp, source_max, sink_max;
 
 	source_max = intel_dig_port->max_lanes;
 	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
 
-	return min(source_max, sink_max);
+	temp = min(source_max, sink_max);
+
+	/*
+	 * Limit max lanes w.r.t. the max value found
+	 * using upfront link training as well.
+	 */
+	if (intel_dp->max_lanes_upfront)
+		return min(temp, intel_dp->max_lanes_upfront);
+	else
+		return temp;
 }
 
 /*
@@ -190,6 +199,42 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
 	return (max_link_clock * max_lanes * 8) / 10;
 }
 
+static int intel_dp_upfront_crtc_disable(struct intel_crtc *crtc,
+					 struct drm_modeset_acquire_ctx *ctx,
+					 bool enable)
+{
+	int ret;
+	struct drm_atomic_state *state;
+	struct intel_crtc_state *crtc_state;
+	struct drm_device *dev = crtc->base.dev;
+	enum pipe pipe = crtc->pipe;
+
+	state = drm_atomic_state_alloc(dev);
+	if (!state)
+		return -ENOMEM;
+
+	state->acquire_ctx = ctx;
+
+	crtc_state = intel_atomic_get_crtc_state(state, crtc);
+	if (IS_ERR(crtc_state)) {
+		ret = PTR_ERR(crtc_state);
+		drm_atomic_state_free(state);
+		return ret;
+	}
+
+	DRM_DEBUG_KMS("%sabling crtc %c %s upfront link train\n",
+			enable ? "En" : "Dis",
+			pipe_name(pipe),
+			enable ? "after" : "before");
+
+	crtc_state->base.active = enable;
+	ret = drm_atomic_commit(state);
+	if (ret)
+		drm_atomic_state_free(state);
+
+	return ret;
+}
+
 static int
 intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
 {
@@ -281,6 +326,17 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 	int source_len, sink_len;
 
 	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
+
+	/* Cap sink rates w.r.t upfront values */
+	if (intel_dp->max_link_rate_upfront) {
+		int len = sink_len - 1;
+
+		while (len > 0 && sink_rates[len] >
+		       intel_dp->max_link_rate_upfront)
+			len--;
+		sink_len = len + 1;
+	}
+
 	source_len = intel_dp_source_rates(intel_dp, &source_rates);
 
 	return intersect_rates(source_rates, source_len,
@@ -288,6 +344,92 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
 			       common_rates);
 }
 
+static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
+{
+	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
+	struct intel_encoder *intel_encoder = &intel_dig_port->base;
+	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_mode_config *config = &dev->mode_config;
+	struct drm_modeset_acquire_ctx ctx;
+	struct intel_crtc *intel_crtc;
+	struct drm_crtc *crtc = NULL;
+	struct intel_shared_dpll *pll;
+	struct intel_shared_dpll_config tmp_pll_config;
+	bool disable_dpll = false;
+	int ret;
+	bool done = false, has_mst = false;
+	uint8_t max_lanes;
+	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
+	int common_len;
+	enum intel_display_power_domain power_domain;
+
+	power_domain = intel_display_port_power_domain(intel_encoder);
+	intel_display_power_get(dev_priv, power_domain);
+
+	common_len = intel_dp_common_rates(intel_dp, common_rates);
+	max_lanes = intel_dp_max_lane_count(intel_dp);
+	if (WARN_ON(common_len <= 0)) {
+		intel_display_power_put(dev_priv, power_domain);
+		return true;
+	}
+
+	drm_modeset_acquire_init(&ctx, 0);
+retry:
+	ret = drm_modeset_lock(&config->connection_mutex, &ctx);
+	if (ret)
+		goto exit_fail;
+
+	if (intel_encoder->base.crtc) {
+		crtc = intel_encoder->base.crtc;
+
+		ret = drm_modeset_lock(&crtc->mutex, &ctx);
+		if (ret)
+			goto exit_fail;
+
+		ret = drm_modeset_lock(&crtc->primary->mutex, &ctx);
+		if (ret)
+			goto exit_fail;
+
+		intel_crtc = to_intel_crtc(crtc);
+		pll = intel_crtc->config->shared_dpll;
+		disable_dpll = true;
+		has_mst = intel_crtc_has_type(intel_crtc->config,
+					      INTEL_OUTPUT_DP_MST);
+		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
+		if (ret)
+			goto exit_fail;
+	}
+
+	mutex_lock(&dev_priv->dpll_lock);
+	if (disable_dpll) {
+		/* Clear the PLL config state */
+		tmp_pll_config = pll->config;
+		pll->config.crtc_mask = 0;
+	}
+
+	done = intel_dp->upfront_link_train(intel_dp,
+					    common_rates[common_len-1],
+					    max_lanes,
+					    has_mst,
+					    true);
+	if (disable_dpll)
+		pll->config = tmp_pll_config;
+
+	mutex_unlock(&dev_priv->dpll_lock);
+
+	if (crtc)
+		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, true);
+
+exit_fail:
+	if (ret == -EDEADLK) {
+		drm_modeset_backoff(&ctx);
+		goto retry;
+	}
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+	intel_display_power_put(dev_priv, power_domain);
+	return done;
+}
+
 static enum drm_mode_status
 intel_dp_mode_valid(struct drm_connector *connector,
 		    struct drm_display_mode *mode)
@@ -311,6 +453,19 @@ intel_dp_mode_valid(struct drm_connector *connector,
 		target_clock = fixed_mode->clock;
 	}
 
+	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
+		bool do_upfront_link_train;
+		/* Do not do upfront link train, if it is a compliance
+		 * request
+		 */
+		do_upfront_link_train = !intel_dp->upfront_done &&
+			(intel_dp->compliance_test_type !=
+			 DP_TEST_LINK_TRAINING);
+
+		if (do_upfront_link_train)
+			intel_dp->upfront_done = intel_dp_upfront_link_train(intel_dp);
+	}
+
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 
@@ -1499,6 +1654,9 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
 	int rates[DP_MAX_SUPPORTED_RATES] = {};
 	int len;
 
+	if (intel_dp->max_link_rate_upfront)
+		return intel_dp->max_link_rate_upfront;
+
 	len = intel_dp_common_rates(intel_dp, rates);
 	if (WARN_ON(len <= 0))
 		return 162000;
@@ -1644,6 +1802,21 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	for (; bpp >= 6*3; bpp -= 2*3) {
 		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
 						   bpp);
+
+		if (!is_edp(intel_dp) && intel_dp->upfront_done) {
+			clock = max_clock;
+			lane_count = intel_dp->max_lanes_upfront;
+			link_clock = intel_dp->max_link_rate_upfront;
+			link_avail = intel_dp_max_data_rate(link_clock,
+							    lane_count);
+			mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+							   bpp);
+			if (mode_rate <= link_avail)
+				goto found;
+			else
+				continue;
+		}
+
 		clock = max_clock;
 		lane_count = max_lane_count;
 		link_clock = common_rates[clock];
@@ -1672,7 +1845,6 @@ found:
 	}
 
 	pipe_config->lane_count = lane_count;
-
 	pipe_config->pipe_bpp = bpp;
 	pipe_config->port_clock = common_rates[clock];
 
@@ -4453,8 +4625,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
 
 out:
 	if ((status != connector_status_connected) &&
-	    (intel_dp->is_mst == false))
+	    (intel_dp->is_mst == false)) {
 		intel_dp_unset_edid(intel_dp);
+		intel_dp->upfront_done = false;
+		intel_dp->max_lanes_upfront = 0;
+		intel_dp->max_link_rate_upfront = 0;
+	}
 
 	intel_display_power_put(to_i915(dev), power_domain);
 	return;
@@ -5698,6 +5874,12 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 	if (type == DRM_MODE_CONNECTOR_eDP)
 		intel_encoder->type = INTEL_OUTPUT_EDP;
 
+	/* Initialize upfront link training vfunc for DP */
+	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
+		if (HAS_DDI(dev_priv))
+			intel_dp->upfront_link_train = intel_ddi_link_train;
+	}
+
 	/* eDP only on port B and/or C on vlv/chv */
 	if (WARN_ON((IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) &&
 		    is_edp(intel_dp) && port != PORT_B && port != PORT_C))
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index 6eb5eb6..782a919 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -304,7 +304,6 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 	intel_dp_set_idle_link_train(intel_dp);
 
 	return intel_dp->channel_eq_status;
-
 }
 
 void intel_dp_stop_link_train(struct intel_dp *intel_dp)
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 0aeb317..fdfc0b6 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -887,6 +887,12 @@ struct intel_dp {
 	enum hdmi_force_audio force_audio;
 	bool limited_color_range;
 	bool color_range_auto;
+
+	/* Upfront link train parameters */
+	int max_link_rate_upfront;
+	uint8_t max_lanes_upfront;
+	bool upfront_done;
+
 	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
 	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
 	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
@@ -944,6 +950,11 @@ struct intel_dp {
 	/* This is called before link training is started */
 	void (*prepare_link_retrain)(struct intel_dp *intel_dp);
 
+	/* For Upfront link training */
+	bool (*upfront_link_train)(struct intel_dp *intel_dp, int clock,
+				   uint8_t lane_count, bool link_mst,
+				   bool is_upfront);
+
 	/* Displayport compliance testing */
 	unsigned long compliance_test_type;
 	unsigned long compliance_test_data;
@@ -1166,7 +1177,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
 void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
 uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
 bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
-			  uint8_t max_lane_count, bool link_mst);
+			  uint8_t max_lane_count, bool link_mst,
+			  bool is_upfront);
 struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
 						  int clock);
 unsigned int intel_fb_align_height(struct drm_device *dev,
-- 
1.9.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev3)
  2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
                     ` (8 preceding siblings ...)
  2016-09-20  8:45   ` [PATCH 0/6] Remaining patches for upfront link training on DDI platforms Jani Nikula
@ 2016-09-20 22:49   ` Patchwork
  9 siblings, 0 replies; 56+ messages in thread
From: Patchwork @ 2016-09-20 22:49 UTC (permalink / raw)
  To: Navare, Manasi D; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev3)
URL   : https://patchwork.freedesktop.org/series/12534/
State : success

== Summary ==

Series 12534v3 Series without cover letter
https://patchwork.freedesktop.org/api/1.0/series/12534/revisions/3/mbox/

Test kms_psr_sink_crc:
        Subgroup psr_basic:
                dmesg-warn -> PASS       (fi-skl-6700hq)

fi-bdw-5557u     total:244  pass:229  dwarn:0   dfail:0   fail:0   skip:15 
fi-bsw-n3050     total:244  pass:202  dwarn:0   dfail:0   fail:0   skip:42 
fi-byt-n2820     total:244  pass:208  dwarn:0   dfail:0   fail:1   skip:35 
fi-hsw-4770k     total:244  pass:222  dwarn:0   dfail:0   fail:0   skip:22 
fi-hsw-4770r     total:244  pass:222  dwarn:0   dfail:0   fail:0   skip:22 
fi-ilk-650       total:244  pass:182  dwarn:0   dfail:0   fail:2   skip:60 
fi-ivb-3520m     total:244  pass:219  dwarn:0   dfail:0   fail:0   skip:25 
fi-ivb-3770      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6260u     total:244  pass:230  dwarn:0   dfail:0   fail:0   skip:14 
fi-skl-6700hq    total:244  pass:222  dwarn:0   dfail:0   fail:0   skip:22 
fi-skl-6700k     total:244  pass:219  dwarn:1   dfail:0   fail:0   skip:24 
fi-snb-2520m     total:244  pass:208  dwarn:0   dfail:0   fail:0   skip:36 
fi-snb-2600      total:244  pass:207  dwarn:0   dfail:0   fail:0   skip:37 
fi-skl-6770hq failed to collect. IGT log at Patchwork_2563/fi-skl-6770hq/igt.log

Results at /archive/results/CI_IGT_test/Patchwork_2563/

4ca90e7c3b6e429e033b93fc56fc156da8f222ef drm-intel-nightly: 2016y-09m-20d-12h-43m-32s UTC integration manifest
fbf93bd drm/i915/dp/mst: Add support for upfront link training for DP MST
c35524e drm/i915/dp: Enable Upfront link training on DDI platforms
3e061c1 drm/i915: Code cleanup to use dev_priv and INTEL_GEN
ad24e7c drm/i915: Change the placement of some static functions in intel_dp.c
789b7b3 drm/i915: Remove the link rate and lane count loop in compute config
61d14b7 drm/i915: Fallback to lower link rate and lane count during link training



* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-16 18:45     ` [PATCH v7 " Manasi Navare
@ 2016-09-26 13:39       ` Jani Nikula
  2016-09-27 15:25         ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-26 13:39 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> According to the DisplayPort Spec, in case of Clock Recovery failure
> the link training sequence should fall back to the lower link rate
> followed by lower lane count until CR succeeds.
> On CR success, the sequence proceeds with Channel EQ.
> In case of Channel EQ failures, it should fall back to
> lower link rate and lane count and start the CR phase again.

This change makes the link training start at the max lane count and max
link rate. This is not ideal, as it wastes the link. And it is not a
spec requirement. "The Link Policy Maker of the upstream device may
choose any link count and link rate as long as they do not exceed the
capabilities of the DP receiver."

Our current code starts at the minimum required bandwidth for the mode,
therefore we can't fall back to lower link rate and lane count without
reducing the mode.

AFAICT this patch here makes it possible for the link bandwidth to drop
below what is required for the mode. This is unacceptable.
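(For concreteness, the constraint described above — never pick link
parameters whose payload bandwidth is below what the mode needs — can be
sketched as a standalone model. The helpers below mirror the math of the
driver's intel_dp_link_required() and intel_dp_max_data_rate() (8b/10b
coding, 80% efficiency), but this is an illustrative sketch, not the
i915 implementation.)

```c
#include <assert.h>
#include <stdbool.h>

/* Required link data rate in kHz for a pixel clock and bpp,
 * rounded up; 10 bits per symbol on the wire. */
static int link_required_khz(int pixel_clock_khz, int bpp)
{
	return (pixel_clock_khz * bpp + 9) / 10;
}

/* Payload bandwidth in kHz: 8b/10b coding leaves 80% of raw rate. */
static int max_data_rate_khz(int link_rate_khz, int lanes)
{
	return link_rate_khz * lanes * 8 / 10;
}

/* Walk rates/lanes upward and return the cheapest combination that
 * still carries the mode; fail rather than drop below mode_rate. */
static bool pick_link_params(const int *rates, int n_rates, int max_lanes,
			     int mode_rate_khz, int *rate, int *lanes)
{
	for (int r = 0; r < n_rates; r++)
		for (int l = 1; l <= max_lanes; l <<= 1)
			if (mode_rate_khz <= max_data_rate_khz(rates[r], l)) {
				*rate = rates[r];
				*lanes = l;
				return true;
			}
	return false;
}
```

With common rates {162000, 270000, 540000} and 4 lanes, a 1080p60 mode
at 24 bpp lands on RBR x4 rather than the maximum parameters.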

BR,
Jani.


>
> v7:
> * Address readability concerns (Mika Kahola)
> v6:
> * Do not split quoted string across line (Mika Kahola)
> v5:
> * Reset the link rate index to the max link rate index
> before lowering the lane count (Jani Nikula)
> * Use the paradigm for loop in intel_dp_link_rate_index
> v4:
> * Fixed the link rate fallback loop (Manasi Navare)
> v3:
> * Fixed some rebase issues (Mika Kahola)
> v2:
> * Add a helper function to return index of requested link rate
> into common_rates array
> * Changed the link rate fallback loop to make use
> of common_rates array (Mika Kahola)
> * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
>
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_ddi.c              | 113 +++++++++++++++++++++++---
>  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
>  drivers/gpu/drm/i915/intel_dp_link_training.c |  13 ++-
>  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
>  4 files changed, 133 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
> index 8065a5f..093038c 100644
> --- a/drivers/gpu/drm/i915/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/intel_ddi.c
> @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
>  	}
>  }
>  
> -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
>  				    int link_rate, uint32_t lane_count,
> -				    struct intel_shared_dpll *pll,
> -				    bool link_mst)
> +				    struct intel_shared_dpll *pll)
>  {
>  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>  	enum port port = intel_ddi_get_encoder_port(encoder);
>  
>  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
> -				 link_mst);
> -	if (encoder->type == INTEL_OUTPUT_EDP)
> -		intel_edp_panel_on(intel_dp);
> +				 false);
> +
> +	intel_edp_panel_on(intel_dp);
>  
>  	intel_ddi_clk_select(encoder, pll);
>  	intel_prepare_dp_ddi_buffers(encoder);
> @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>  		intel_dp_stop_link_train(intel_dp);
>  }
>  
> +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> +				    int link_rate, uint32_t lane_count,
> +				    struct intel_shared_dpll *pll,
> +				    bool link_mst)
> +{
> +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_shared_dpll_config tmp_pll_config;
> +
> +	/* Disable the PLL and obtain the PLL for Link Training
> +	 * that starts with highest link rate and lane count.
> +	 */
> +	tmp_pll_config = pll->config;
> +	pll->funcs.disable(dev_priv, pll);
> +	pll->config.crtc_mask = 0;
> +
> +	/* If Link Training fails, send a uevent to generate a hotplug */
> +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
> +		drm_kms_helper_hotplug_event(encoder->base.dev);
> +	pll->config = tmp_pll_config;
> +}
> +
>  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
>  				      bool has_hdmi_sink,
>  				      struct drm_display_mode *adjusted_mode,
> @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
>  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
>  	int type = intel_encoder->type;
>  
> -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
> +	if (type == INTEL_OUTPUT_EDP)
> +		intel_ddi_pre_enable_edp(intel_encoder,
> +					crtc->config->port_clock,
> +					crtc->config->lane_count,
> +					crtc->config->shared_dpll);
> +
> +	if (type == INTEL_OUTPUT_DP)
>  		intel_ddi_pre_enable_dp(intel_encoder,
>  					crtc->config->port_clock,
>  					crtc->config->lane_count,
>  					crtc->config->shared_dpll,
>  					intel_crtc_has_type(crtc->config,
>  							    INTEL_OUTPUT_DP_MST));
> -	}
> -	if (type == INTEL_OUTPUT_HDMI) {
> +
> +	if (type == INTEL_OUTPUT_HDMI)
>  		intel_ddi_pre_enable_hdmi(intel_encoder,
>  					  crtc->config->has_hdmi_sink,
>  					  &crtc->config->base.adjusted_mode,
>  					  crtc->config->shared_dpll);
> -	}
> +
>  }
>  
>  static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
> @@ -2435,6 +2462,72 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
>  	return pll;
>  }
>  
> +bool
> +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> +		     uint8_t max_lane_count, bool link_mst)
> +{
> +	struct intel_connector *connector = intel_dp->attached_connector;
> +	struct intel_encoder *encoder = connector->encoder;
> +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> +	struct intel_shared_dpll *pll;
> +	struct intel_shared_dpll_config tmp_pll_config;
> +	int link_rate, max_link_rate_index, link_rate_index;
> +	uint8_t lane_count;
> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> +	bool ret = false;
> +
> +	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
> +						       max_link_rate);
> +	if (max_link_rate_index < 0) {
> +		DRM_ERROR("Invalid Link Rate\n");
> +		return false;
> +	}
> +
> +	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
> +		for (link_rate_index = max_link_rate_index;
> +		     link_rate_index >= 0; link_rate_index--) {
> +			link_rate = common_rates[link_rate_index];
> +			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
> +			if (pll == NULL) {
> +				DRM_ERROR("Could not find DPLL for link training.\n");
> +				return false;
> +			}
> +			tmp_pll_config = pll->config;
> +			pll->funcs.enable(dev_priv, pll);
> +
> +			intel_dp_set_link_params(intel_dp, link_rate,
> +						 lane_count, link_mst);
> +
> +			intel_ddi_clk_select(encoder, pll);
> +			intel_prepare_dp_ddi_buffers(encoder);
> +			intel_ddi_init_dp_buf_reg(encoder);
> +			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> +			ret = intel_dp_start_link_train(intel_dp);
> +			if (ret)
> +				break;
> +
> +			/* Disable port followed by PLL for next
> +			 * retry/clean up
> +			 */
> +			intel_ddi_post_disable(encoder, NULL, NULL);
> +			pll->funcs.disable(dev_priv, pll);
> +			pll->config = tmp_pll_config;
> +		}
> +		if (ret) {
> +			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
> +				      link_rate, lane_count);
> +			break;
> +		}
> +	}
> +
> +	intel_dp_stop_link_train(intel_dp);
> +
> +	if (!lane_count)
> +		DRM_ERROR("Link Training Failed\n");
> +
> +	return ret;
> +}
> +
>  void intel_ddi_init(struct drm_device *dev, enum port port)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(dev);
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 69cee9b..d81c67cb 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
>  	return rates[len - 1];
>  }
>  
> +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> +			     int link_rate)
> +{
> +	int common_len;
> +	int index;
> +
> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> +	for (index = 0; index < common_len; index++) {
> +		if (link_rate == common_rates[common_len - index - 1])
> +			return common_len - index - 1;
> +	}
> +
> +	return -1;
> +}
> +
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
>  {
>  	return rate_to_index(rate, intel_dp->sink_rates);
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index c438b02..6eb5eb6 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -313,9 +313,16 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
>  				DP_TRAINING_PATTERN_DISABLE);
>  }
>  
> -void
> +bool
>  intel_dp_start_link_train(struct intel_dp *intel_dp)
>  {
> -	intel_dp_link_training_clock_recovery(intel_dp);
> -	intel_dp_link_training_channel_equalization(intel_dp);
> +	bool ret;
> +
> +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
> +		ret = intel_dp_link_training_channel_equalization(intel_dp);
> +		if (ret)
> +			return true;
> +	}
> +
> +	return false;
>  }
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 8fd16ad..08cb571 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>  			 struct intel_crtc_state *pipe_config);
>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
> +bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> +			  uint8_t max_lane_count, bool link_mst);
>  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
>  						  int clock);
>  unsigned int intel_fb_align_height(struct drm_device *dev,
> @@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>  void intel_dp_set_link_params(struct intel_dp *intel_dp,
>  			      int link_rate, uint8_t lane_count,
>  			      bool link_mst);
> -void intel_dp_start_link_train(struct intel_dp *intel_dp);
> +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
>  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
>  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
>  void intel_dp_encoder_reset(struct drm_encoder *encoder);
> @@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
>  void intel_dp_mst_suspend(struct drm_device *dev);
>  void intel_dp_mst_resume(struct drm_device *dev);
>  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> +			     int link_rate);
>  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
>  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
>  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);

-- 
Jani Nikula, Intel Open Source Technology Center


* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-16  0:04   ` [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
@ 2016-09-26 13:41     ` Jani Nikula
  2016-09-27 13:39       ` Jani Nikula
  2016-09-27 21:55       ` Manasi Navare
  0 siblings, 2 replies; 56+ messages in thread
From: Jani Nikula @ 2016-09-26 13:41 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> While configuring the pipe during modeset, it should use
> max clock and max lane count and reduce the bpp until
> the requested mode rate is less than or equal to
> available link BW.
> This is required to pass DP Compliance.

As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
link policy maker can freely choose the link parameters as long as the
sink supports them.
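(For reference, the compute-config change under discussion amounts to
the following: hold the link at its maximum rate/lane count and walk
bpp down from 30 to 18 in steps of 6 — i.e. 10/8/6 bpc — until the mode
fits. This is a hypothetical standalone helper, not the i915 code; the
rate math matches intel_dp_link_required().)

```c
#include <assert.h>

/* Toy model of the patch's loop: fixed link bandwidth, descending bpp. */
static int pick_bpp(int pixel_clock_khz, int link_avail_khz, int max_bpp)
{
	for (int bpp = max_bpp; bpp >= 6 * 3; bpp -= 2 * 3) {
		int mode_rate = (pixel_clock_khz * bpp + 9) / 10;

		if (mode_rate <= link_avail_khz)
			return bpp;
	}
	return -1; /* mode does not fit even at 6 bpc */
}
```

Note the trade-off Jani points out: because the link stays at maximum,
any mode that fits at all fits at the highest bpp the link allows,
at the cost of always burning the full link bandwidth.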

BR,
Jani.


>
> v3:
> * Add Debug print if requested mode cannot be supported
> during modeset (Dhinakaran Pandiyan)
> v2:
> * Removed the loop since we use max values of clock
> and lane count (Dhinakaran Pandiyan)
>
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
>  1 file changed, 8 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index d81c67cb..65b4559 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  	for (; bpp >= 6*3; bpp -= 2*3) {
>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>  						   bpp);
> +		clock = max_clock;
> +		lane_count = max_lane_count;
> +		link_clock = common_rates[clock];
> +		link_avail = intel_dp_max_data_rate(link_clock,
> +						    lane_count);
>  
> -		for (clock = min_clock; clock <= max_clock; clock++) {
> -			for (lane_count = min_lane_count;
> -				lane_count <= max_lane_count;
> -				lane_count <<= 1) {
> -
> -				link_clock = common_rates[clock];
> -				link_avail = intel_dp_max_data_rate(link_clock,
> -								    lane_count);
> -
> -				if (mode_rate <= link_avail) {
> -					goto found;
> -				}
> -			}
> -		}
> +		if (mode_rate <= link_avail)
> +			goto found;
>  	}
>  
> +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
>  	return false;
>  
>  found:

-- 
Jani Nikula, Intel Open Source Technology Center


* Re: [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN
  2016-09-16  0:04   ` [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN Manasi Navare
  2016-09-16  7:40     ` Mika Kahola
@ 2016-09-26 13:45     ` Jani Nikula
  2016-09-28  0:03       ` Manasi Navare
  1 sibling, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-26 13:45 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> Replace dev with dev_priv and INTEL_INFO with INTEL_GEN

Patches like this could easily be sent separately, or at the very least as
the first patches in the series. Then we could have merged this
already. Now it conflicts, please rebase.

BR,
Jani.

>
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_dp.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 61d71fa..8061e32 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -230,13 +230,13 @@ static int
>  intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
>  {
>  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
>  	int size;
>  
> -	if (IS_BROXTON(dev)) {
> +	if (IS_BROXTON(dev_priv)) {
>  		*source_rates = bxt_rates;
>  		size = ARRAY_SIZE(bxt_rates);
> -	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> +	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
>  		*source_rates = skl_rates;
>  		size = ARRAY_SIZE(skl_rates);
>  	} else {
> @@ -1359,14 +1359,14 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
>  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> -	struct drm_device *dev = dig_port->base.base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
>  
>  	/* WaDisableHBR2:skl */
> -	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
> +	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
>  		return false;
>  
> -	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || IS_BROADWELL(dev) ||
> -	    (INTEL_INFO(dev)->gen >= 9))
> +	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
> +	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
>  		return true;
>  	else
>  		return false;

-- 
Jani Nikula, Intel Open Source Technology Center


* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-26 13:41     ` Jani Nikula
@ 2016-09-27 13:39       ` Jani Nikula
  2016-09-27 22:13         ` Manasi Navare
  2016-09-27 21:55       ` Manasi Navare
  1 sibling, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-27 13:39 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Mon, 26 Sep 2016, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> While configuring the pipe during modeset, it should use
>> max clock and max lane count and reduce the bpp until
>> the requested mode rate is less than or equal to
>> available link BW.
>> This is required to pass DP Compliance.
>
> As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
> link policy maker can freely choose the link parameters as long as the
> sink supports them.

Also double checked the DP link CTS spec. AFAICT none of the tests
expect the source to use the max clock or max lane count
directly. (Automated test request is another matter, and we should look
at it.)

I think patches 1-2 are based on an incorrect interpretation of the spec
and tests.

BR,
Jani.


>
> BR,
> Jani.
>
>
>>
>> v3:
>> * Add Debug print if requested mode cannot be supported
>> during modeset (Dhinakaran Pandiyan)
>> v2:
>> * Removed the loop since we use max values of clock
>> and lane count (Dhinakaran Pandiyan)
>>
>> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> ---
>>  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
>>  1 file changed, 8 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> index d81c67cb..65b4559 100644
>> --- a/drivers/gpu/drm/i915/intel_dp.c
>> +++ b/drivers/gpu/drm/i915/intel_dp.c
>> @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>>  	for (; bpp >= 6*3; bpp -= 2*3) {
>>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>>  						   bpp);
>> +		clock = max_clock;
>> +		lane_count = max_lane_count;
>> +		link_clock = common_rates[clock];
>> +		link_avail = intel_dp_max_data_rate(link_clock,
>> +						    lane_count);
>>  
>> -		for (clock = min_clock; clock <= max_clock; clock++) {
>> -			for (lane_count = min_lane_count;
>> -				lane_count <= max_lane_count;
>> -				lane_count <<= 1) {
>> -
>> -				link_clock = common_rates[clock];
>> -				link_avail = intel_dp_max_data_rate(link_clock,
>> -								    lane_count);
>> -
>> -				if (mode_rate <= link_avail) {
>> -					goto found;
>> -				}
>> -			}
>> -		}
>> +		if (mode_rate <= link_avail)
>> +			goto found;
>>  	}
>>  
>> +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
>>  	return false;
>>  
>>  found:

-- 
Jani Nikula, Intel Open Source Technology Center


* Re: [PATCH v18 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms
  2016-09-20 22:04     ` [PATCH v18 " Manasi Navare
@ 2016-09-27 13:59       ` Jani Nikula
  2016-09-29 12:15       ` Jani Nikula
  1 sibling, 0 replies; 56+ messages in thread
From: Jani Nikula @ 2016-09-27 13:59 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Wed, 21 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> To support USB Type-C alternate DP mode, the display driver needs to
> know the number of lanes required by the DP panel as well as the number
> of lanes that can be supported by the Type-C cable. Sometimes the
> Type-C cable may limit the bandwidth even if the panel can support
> more lanes. To address these scenarios we need to train the link before
> modeset. This upfront link training caches the values of max link rate
> and max lane count that get used later during modeset. Upfront link
> training does not change any HW state; the link is disabled and the PLL
> values are restored afterwards, so the subsequent modeset is not
> aware of these changes.

I think we should call timeout on this patch, and focus on the DP
compliance parts first. Frankly, I think this patch is really scary. If
I got a bisect result for a regression on this, I would have absolutely
no clue what exactly caused it.

Please correct me if you think I'm wrong, but I don't think upfront link
training is strictly required for DP compliance. (Conversely, if you
think this is required for DP compliance, the rationale is absolutely
required in the commit message in more than just a few words.)
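(For what it's worth, the mode pruning the commit message describes
reduces to a small check: once upfront training has cached a maximum
rate/lane count, mode_valid rejects modes exceeding that link's payload
bandwidth. The struct and field names below are hypothetical stand-ins
echoing the patch's intel_dp members; this is a sketch of the behavior,
not the actual code.)

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical caps cache modeled on the patch's intel_dp fields. */
struct dp_caps {
	bool upfront_done;
	int max_link_rate_upfront;	/* kHz */
	int max_lanes_upfront;
	int max_sink_rate;		/* kHz, from DPCD */
	int max_sink_lanes;
};

/* Would this mode fit the link? Upfront results, when present,
 * override the sink's advertised maximums. */
static bool mode_ok(const struct dp_caps *dp, int pixel_clock_khz, int bpp)
{
	int rate = dp->upfront_done ? dp->max_link_rate_upfront
				    : dp->max_sink_rate;
	int lanes = dp->upfront_done ? dp->max_lanes_upfront
				     : dp->max_sink_lanes;
	int mode_rate = (pixel_clock_khz * bpp + 9) / 10;
	int link_avail = rate * lanes * 8 / 10;	/* 8b/10b payload */

	return mode_rate <= link_avail;
}
```

E.g. a cable limited to 2 lanes of HBR prunes a 4k mode that the sink's
DPCD alone would have accepted.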

BR,
Jani.


>
> This patch is based on prior work done by
> R,Durgadoss <durgadoss.r@intel.com>
>
> Changes since v17:
> * Rebased on the latest nightly
> Changes since v16:
> * Use HAS_DDI macro for enabling this feature (Rodrigo Vivi)
> * Fix some unnecessary removals/changes due to rebase (Rodrigo Vivi)
>
> Changes since v15:
> * Split this patch into two patches - one with functional
> changes to enable upfront and other with moving the existing
> functions around so that they can be used for upfront (Jani Nikula)
> * Cleaned up the commit message
>
> Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
> Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_ddi.c              |  21 ++-
>  drivers/gpu/drm/i915/intel_dp.c               | 190 +++++++++++++++++++++++++-
>  drivers/gpu/drm/i915/intel_dp_link_training.c |   1 -
>  drivers/gpu/drm/i915/intel_drv.h              |  14 +-
>  4 files changed, 218 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
> index 093038c..8e52507 100644
> --- a/drivers/gpu/drm/i915/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/intel_ddi.c
> @@ -1676,7 +1676,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>  	pll->config.crtc_mask = 0;
>  
>  	/* If Link Training fails, send a uevent to generate a hotplug */
> -	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
> +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst,
> +				  false))
>  		drm_kms_helper_hotplug_event(encoder->base.dev);
>  	pll->config = tmp_pll_config;
>  }
> @@ -2464,7 +2465,7 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
>  
>  bool
>  intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> -		     uint8_t max_lane_count, bool link_mst)
> +		     uint8_t max_lane_count, bool link_mst, bool is_upfront)
>  {
>  	struct intel_connector *connector = intel_dp->attached_connector;
>  	struct intel_encoder *encoder = connector->encoder;
> @@ -2513,6 +2514,7 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>  			pll->funcs.disable(dev_priv, pll);
>  			pll->config = tmp_pll_config;
>  		}
> +
>  		if (ret) {
>  			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
>  				      link_rate, lane_count);
> @@ -2522,6 +2524,21 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>  
>  	intel_dp_stop_link_train(intel_dp);
>  
> +	if (is_upfront) {
> +		DRM_DEBUG_KMS("Upfront link train %s: link_clock:%d lanes:%d\n",
> +			      ret ? "Passed" : "Failed",
> +			      link_rate, lane_count);
> +		/* Disable port followed by PLL for next retry/clean up */
> +		intel_ddi_post_disable(encoder, NULL, NULL);
> +		pll->funcs.disable(dev_priv, pll);
> +		pll->config = tmp_pll_config;
> +		if (ret) {
> +			/* Save the upfront values */
> +			intel_dp->max_lanes_upfront = lane_count;
> +			intel_dp->max_link_rate_upfront = link_rate;
> +		}
> +	}
> +
>  	if (!lane_count)
>  		DRM_ERROR("Link Training Failed\n");
>  
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 8d9a8ab..a058d5d 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -153,12 +153,21 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
>  static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> -	u8 source_max, sink_max;
> +	u8 temp, source_max, sink_max;
>  
>  	source_max = intel_dig_port->max_lanes;
>  	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
>  
> -	return min(source_max, sink_max);
> +	temp = min(source_max, sink_max);
> +
> +	/*
> +	 * Limit max lanes w.r.t to the max value found
> +	 * using Upfront link training also.
> +	 */
> +	if (intel_dp->max_lanes_upfront)
> +		return min(temp, intel_dp->max_lanes_upfront);
> +	else
> +		return temp;
>  }
>  
>  /*
> @@ -190,6 +199,42 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
>  	return (max_link_clock * max_lanes * 8) / 10;
>  }
>  
> +static int intel_dp_upfront_crtc_disable(struct intel_crtc *crtc,
> +					 struct drm_modeset_acquire_ctx *ctx,
> +					 bool enable)
> +{
> +	int ret;
> +	struct drm_atomic_state *state;
> +	struct intel_crtc_state *crtc_state;
> +	struct drm_device *dev = crtc->base.dev;
> +	enum pipe pipe = crtc->pipe;
> +
> +	state = drm_atomic_state_alloc(dev);
> +	if (!state)
> +		return -ENOMEM;
> +
> +	state->acquire_ctx = ctx;
> +
> +	crtc_state = intel_atomic_get_crtc_state(state, crtc);
> +	if (IS_ERR(crtc_state)) {
> +		ret = PTR_ERR(crtc_state);
> +		drm_atomic_state_free(state);
> +		return ret;
> +	}
> +
> +	DRM_DEBUG_KMS("%sabling crtc %c %s upfront link train\n",
> +			enable ? "En" : "Dis",
> +			pipe_name(pipe),
> +			enable ? "after" : "before");
> +
> +	crtc_state->base.active = enable;
> +	ret = drm_atomic_commit(state);
> +	if (ret)
> +		drm_atomic_state_free(state);
> +
> +	return ret;
> +}
> +
>  static int
>  intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
>  {
> @@ -281,6 +326,17 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>  	int source_len, sink_len;
>  
>  	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> +
> +	/* Cap sink rates w.r.t upfront values */
> +	if (intel_dp->max_link_rate_upfront) {
> +		int len = sink_len - 1;
> +
> +		while (len > 0 && sink_rates[len] >
> +		       intel_dp->max_link_rate_upfront)
> +			len--;
> +		sink_len = len + 1;
> +	}
> +
>  	source_len = intel_dp_source_rates(intel_dp, &source_rates);
>  
>  	return intersect_rates(source_rates, source_len,
> @@ -288,6 +344,92 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>  			       common_rates);
>  }
>  
> +static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> +{
> +	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> +	struct intel_encoder *intel_encoder = &intel_dig_port->base;
> +	struct drm_device *dev = intel_encoder->base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dev);
> +	struct drm_mode_config *config = &dev->mode_config;
> +	struct drm_modeset_acquire_ctx ctx;
> +	struct intel_crtc *intel_crtc;
> +	struct drm_crtc *crtc = NULL;
> +	struct intel_shared_dpll *pll;
> +	struct intel_shared_dpll_config tmp_pll_config;
> +	bool disable_dpll = false;
> +	int ret;
> +	bool done = false, has_mst = false;
> +	uint8_t max_lanes;
> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> +	int common_len;
> +	enum intel_display_power_domain power_domain;
> +
> +	power_domain = intel_display_port_power_domain(intel_encoder);
> +	intel_display_power_get(dev_priv, power_domain);
> +
> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> +	max_lanes = intel_dp_max_lane_count(intel_dp);
> +	if (WARN_ON(common_len <= 0))
> +		return true;
> +
> +	drm_modeset_acquire_init(&ctx, 0);
> +retry:
> +	ret = drm_modeset_lock(&config->connection_mutex, &ctx);
> +	if (ret)
> +		goto exit_fail;
> +
> +	if (intel_encoder->base.crtc) {
> +		crtc = intel_encoder->base.crtc;
> +
> +		ret = drm_modeset_lock(&crtc->mutex, &ctx);
> +		if (ret)
> +			goto exit_fail;
> +
> +		ret = drm_modeset_lock(&crtc->primary->mutex, &ctx);
> +		if (ret)
> +			goto exit_fail;
> +
> +		intel_crtc = to_intel_crtc(crtc);
> +		pll = intel_crtc->config->shared_dpll;
> +		disable_dpll = true;
> +		has_mst = intel_crtc_has_type(intel_crtc->config,
> +					      INTEL_OUTPUT_DP_MST);
> +		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
> +		if (ret)
> +			goto exit_fail;
> +	}
> +
> +	mutex_lock(&dev_priv->dpll_lock);
> +	if (disable_dpll) {
> +		/* Clear the PLL config state */
> +		tmp_pll_config = pll->config;
> +		pll->config.crtc_mask = 0;
> +	}
> +
> +	done = intel_dp->upfront_link_train(intel_dp,
> +					    common_rates[common_len-1],
> +					    max_lanes,
> +					    has_mst,
> +					    true);
> +	if (disable_dpll)
> +		pll->config = tmp_pll_config;
> +
> +	mutex_unlock(&dev_priv->dpll_lock);
> +
> +	if (crtc)
> +		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, true);
> +
> +exit_fail:
> +	if (ret == -EDEADLK) {
> +		drm_modeset_backoff(&ctx);
> +		goto retry;
> +	}
> +	drm_modeset_drop_locks(&ctx);
> +	drm_modeset_acquire_fini(&ctx);
> +	intel_display_power_put(dev_priv, power_domain);
> +	return done;
> +}
> +
>  static enum drm_mode_status
>  intel_dp_mode_valid(struct drm_connector *connector,
>  		    struct drm_display_mode *mode)
> @@ -311,6 +453,19 @@ intel_dp_mode_valid(struct drm_connector *connector,
>  		target_clock = fixed_mode->clock;
>  	}
>  
> +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
> +		bool do_upfront_link_train;
> +		/* Do not do upfront link train, if it is a compliance
> +		 * request
> +		 */
> +		do_upfront_link_train = !intel_dp->upfront_done &&
> +			(intel_dp->compliance_test_type !=
> +			 DP_TEST_LINK_TRAINING);
> +
> +		if (do_upfront_link_train)
> +			intel_dp->upfront_done = intel_dp_upfront_link_train(intel_dp);
> +	}
> +
>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>  	max_lanes = intel_dp_max_lane_count(intel_dp);
>  
> @@ -1499,6 +1654,9 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
>  	int rates[DP_MAX_SUPPORTED_RATES] = {};
>  	int len;
>  
> +	if (intel_dp->max_link_rate_upfront)
> +		return intel_dp->max_link_rate_upfront;
> +
>  	len = intel_dp_common_rates(intel_dp, rates);
>  	if (WARN_ON(len <= 0))
>  		return 162000;
> @@ -1644,6 +1802,21 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  	for (; bpp >= 6*3; bpp -= 2*3) {
>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>  						   bpp);
> +
> +		if (!is_edp(intel_dp) && intel_dp->upfront_done) {
> +			clock = max_clock;
> +			lane_count = intel_dp->max_lanes_upfront;
> +			link_clock = intel_dp->max_link_rate_upfront;
> +			link_avail = intel_dp_max_data_rate(link_clock,
> +							    lane_count);
> +			mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> +							   bpp);
> +			if (mode_rate <= link_avail)
> +				goto found;
> +			else
> +				continue;
> +		}
> +
>  		clock = max_clock;
>  		lane_count = max_lane_count;
>  		link_clock = common_rates[clock];
> @@ -1672,7 +1845,6 @@ found:
>  	}
>  
>  	pipe_config->lane_count = lane_count;
> -
>  	pipe_config->pipe_bpp = bpp;
>  	pipe_config->port_clock = common_rates[clock];
>  
> @@ -4453,8 +4625,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
>  
>  out:
>  	if ((status != connector_status_connected) &&
> -	    (intel_dp->is_mst == false))
> +	    (intel_dp->is_mst == false)) {
>  		intel_dp_unset_edid(intel_dp);
> +		intel_dp->upfront_done = false;
> +		intel_dp->max_lanes_upfront = 0;
> +		intel_dp->max_link_rate_upfront = 0;
> +	}
>  
>  	intel_display_power_put(to_i915(dev), power_domain);
>  	return;
> @@ -5698,6 +5874,12 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>  	if (type == DRM_MODE_CONNECTOR_eDP)
>  		intel_encoder->type = INTEL_OUTPUT_EDP;
>  
> +	/* Initialize upfront link training vfunc for DP */
> +	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
> +		if (HAS_DDI(dev_priv))
> +			intel_dp->upfront_link_train = intel_ddi_link_train;
> +	}
> +
>  	/* eDP only on port B and/or C on vlv/chv */
>  	if (WARN_ON((IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) &&
>  		    is_edp(intel_dp) && port != PORT_B && port != PORT_C))
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index 6eb5eb6..782a919 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -304,7 +304,6 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
>  	intel_dp_set_idle_link_train(intel_dp);
>  
>  	return intel_dp->channel_eq_status;
> -
>  }
>  
>  void intel_dp_stop_link_train(struct intel_dp *intel_dp)
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 0aeb317..fdfc0b6 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -887,6 +887,12 @@ struct intel_dp {
>  	enum hdmi_force_audio force_audio;
>  	bool limited_color_range;
>  	bool color_range_auto;
> +
> +	/* Upfront link train parameters */
> +	int max_link_rate_upfront;
> +	uint8_t max_lanes_upfront;
> +	bool upfront_done;
> +
>  	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
>  	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
>  	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
> @@ -944,6 +950,11 @@ struct intel_dp {
>  	/* This is called before a link training is starterd */
>  	void (*prepare_link_retrain)(struct intel_dp *intel_dp);
>  
> +	/* For Upfront link training */
> +	bool (*upfront_link_train)(struct intel_dp *intel_dp, int clock,
> +				   uint8_t lane_count, bool link_mst,
> +				   bool is_upfront);
> +
>  	/* Displayport compliance testing */
>  	unsigned long compliance_test_type;
>  	unsigned long compliance_test_data;
> @@ -1166,7 +1177,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
>  bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> -			  uint8_t max_lane_count, bool link_mst);
> +			  uint8_t max_lane_count, bool link_mst,
> +			  bool is_upfront);
>  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
>  						  int clock);
>  unsigned int intel_fb_align_height(struct drm_device *dev,

-- 
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-26 13:39       ` Jani Nikula
@ 2016-09-27 15:25         ` Manasi Navare
  2016-09-27 17:07           ` Jani Nikula
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-27 15:25 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > According to the DisplayPort spec, in case of clock recovery (CR)
> > failure the link training sequence should fall back to a lower link
> > rate, followed by a lower lane count, until CR succeeds.
> > On CR success, the sequence proceeds with channel EQ.
> > In case of channel EQ failures, it should fall back to a lower link
> > rate and lane count and start the CR phase again.
> 
> This change makes the link training start at the max lane count and max
> link rate. This is not ideal, as it wastes the link. And it is not a
> spec requirement. "The Link Policy Maker of the upstream device may
> choose any link count and link rate as long as they do not exceed the
> capabilities of the DP receiver."
> 
> Our current code starts at the minimum required bandwidth for the mode,
> therefore we can't fall back to lower link rate and lane count without
> reducing the mode.
> 
> AFAICT this patch here makes it possible for the link bandwidth to drop
> below what is required for the mode. This is unacceptable.
> 
> BR,
> Jani.
> 
>

Thanks Jani for your review comments.
Yes, in this change we start at the max link rate and lane count. This change
was made according to the design document discussions we had before starting
this DP redesign project. The main reason for starting at the max link rate
and max lane count was to ensure proper behavior for DP MST. In case of DP
MST, we want to train the link at the maximum supported link rate/lane count,
based on an early/upfront link training result, so that we don't fail when we
try to connect a higher resolution monitor as a second monitor. This is a
trade-off between wasting link bandwidth or drawing higher power vs. needing
to retrain for every monitor that requests a higher BW in case of DP MST.
 
Actually, this is also the reason for enabling upfront link training in the
following patch, where we train the link well ahead of the modeset sequence
to find the link rate and lane count values at which the link can be
successfully trained. Link training through modeset will then always start
at the upfront values (the maximum supported values of lane count and link
rate based on upfront link training).

As per the CTS, test 4.3.1.4 requires that you fall back to a lower link
rate after trying to train at the maximum link rate advertised through the
DPCD registers.
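
For reference, the try-order the fallback loops imply (mirroring the nested loops in the intel_ddi_link_train() hunk quoted earlier in the thread) can be sketched as plain C. The rate table and helper are illustrative only, not driver code:

```c
/* Link rates in kHz, ascending, mirroring the driver's common_rates[]
 * (values here are just the standard DP link rates). */
static const int rates[] = { 162000, 270000, 540000 };

/* Record the try-order the fallback implies: for each lane count,
 * starting at the max and halving, walk the link rates from the
 * highest usable index down to the lowest.  Returns the number of
 * (rate, lane count) combinations written out. */
static int fallback_order(int max_rate_index, int max_lanes,
			  int out_rates[], int out_lanes[])
{
	int n = 0;
	int lanes, i;

	for (lanes = max_lanes; lanes > 0; lanes >>= 1)
		for (i = max_rate_index; i >= 0; i--) {
			out_rates[n] = rates[i];
			out_lanes[n] = lanes;
			n++;
		}
	return n;
}
```

Note the rate index resets to the maximum each time the lane count halves, which is exactly the v5 change Jani asked for ("Reset the link rate index to the max link rate index before lowering the lane count").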

This will not drop the link BW below what is required for the mode, because
the requested modes are pruned/validated in intel_dp_mode_valid based on the
upfront link training results in the following patch, and those values are
used here as the starting values of link rate and lane count.
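
The pruning rests on the small capping loop the upfront patch adds to intel_dp_common_rates(). Extracted as a standalone sketch (names simplified, not the driver code itself):

```c
/* Cap a sorted (ascending) sink rate list at the link rate that
 * upfront training actually achieved; mirrors the while-loop added
 * to intel_dp_common_rates() in the quoted hunk.  Returns the new
 * usable length of the list. */
static int cap_sink_rates(const int *sink_rates, int sink_len,
			  int upfront_rate)
{
	int len = sink_len - 1;

	if (!upfront_rate)	/* no upfront result cached: keep all */
		return sink_len;
	while (len > 0 && sink_rates[len] > upfront_rate)
		len--;
	return len + 1;
}
```

As in the quoted hunk, the loop never goes below one entry, so at least the lowest common rate always survives.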

I almost feel that the upfront link training patch and this patch should be
combined, so that instead of starting from the max link rate and lane count
it is clear that we are starting from the upfront values.

Regards,
Manasi 
> >
> > v7:
> > * Address readability concerns (Mika Kahola)
> > v6:
> > * Do not split quoted string across line (Mika Kahola)
> > v5:
> > * Reset the link rate index to the max link rate index
> > before lowering the lane count (Jani Nikula)
> > * Use the paradigm for loop in intel_dp_link_rate_index
> > v4:
> > * Fixed the link rate fallback loop (Manasi Navare)
> > v3:
> > * Fixed some rebase issues (Mika Kahola)
> > v2:
> > * Add a helper function to return index of requested link rate
> > into common_rates array
> > * Changed the link rate fallback loop to make use
> > of common_rates array (Mika Kahola)
> > * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
> >
> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_ddi.c              | 113 +++++++++++++++++++++++---
> >  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
> >  drivers/gpu/drm/i915/intel_dp_link_training.c |  13 ++-
> >  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
> >  4 files changed, 133 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
> > index 8065a5f..093038c 100644
> > --- a/drivers/gpu/drm/i915/intel_ddi.c
> > +++ b/drivers/gpu/drm/i915/intel_ddi.c
> > @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
> >  	}
> >  }
> >  
> > -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> > +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
> >  				    int link_rate, uint32_t lane_count,
> > -				    struct intel_shared_dpll *pll,
> > -				    bool link_mst)
> > +				    struct intel_shared_dpll *pll)
> >  {
> >  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> >  	enum port port = intel_ddi_get_encoder_port(encoder);
> >  
> >  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
> > -				 link_mst);
> > -	if (encoder->type == INTEL_OUTPUT_EDP)
> > -		intel_edp_panel_on(intel_dp);
> > +				 false);
> > +
> > +	intel_edp_panel_on(intel_dp);
> >  
> >  	intel_ddi_clk_select(encoder, pll);
> >  	intel_prepare_dp_ddi_buffers(encoder);
> > @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> >  		intel_dp_stop_link_train(intel_dp);
> >  }
> >  
> > +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> > +				    int link_rate, uint32_t lane_count,
> > +				    struct intel_shared_dpll *pll,
> > +				    bool link_mst)
> > +{
> > +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > +	struct intel_shared_dpll_config tmp_pll_config;
> > +
> > +	/* Disable the PLL and obtain the PLL for Link Training
> > +	 * that starts with highest link rate and lane count.
> > +	 */
> > +	tmp_pll_config = pll->config;
> > +	pll->funcs.disable(dev_priv, pll);
> > +	pll->config.crtc_mask = 0;
> > +
> > +	/* If Link Training fails, send a uevent to generate a hotplug */
> > +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
> > +		drm_kms_helper_hotplug_event(encoder->base.dev);
> > +	pll->config = tmp_pll_config;
> > +}
> > +
> >  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
> >  				      bool has_hdmi_sink,
> >  				      struct drm_display_mode *adjusted_mode,
> > @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
> >  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
> >  	int type = intel_encoder->type;
> >  
> > -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
> > +	if (type == INTEL_OUTPUT_EDP)
> > +		intel_ddi_pre_enable_edp(intel_encoder,
> > +					crtc->config->port_clock,
> > +					crtc->config->lane_count,
> > +					crtc->config->shared_dpll);
> > +
> > +	if (type == INTEL_OUTPUT_DP)
> >  		intel_ddi_pre_enable_dp(intel_encoder,
> >  					crtc->config->port_clock,
> >  					crtc->config->lane_count,
> >  					crtc->config->shared_dpll,
> >  					intel_crtc_has_type(crtc->config,
> >  							    INTEL_OUTPUT_DP_MST));
> > -	}
> > -	if (type == INTEL_OUTPUT_HDMI) {
> > +
> > +	if (type == INTEL_OUTPUT_HDMI)
> >  		intel_ddi_pre_enable_hdmi(intel_encoder,
> >  					  crtc->config->has_hdmi_sink,
> >  					  &crtc->config->base.adjusted_mode,
> >  					  crtc->config->shared_dpll);
> > -	}
> > +
> >  }
> >  
> >  static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
> > @@ -2435,6 +2462,72 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
> >  	return pll;
> >  }
> >  
> > +bool
> > +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> > +		     uint8_t max_lane_count, bool link_mst)
> > +{
> > +	struct intel_connector *connector = intel_dp->attached_connector;
> > +	struct intel_encoder *encoder = connector->encoder;
> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> > +	struct intel_shared_dpll *pll;
> > +	struct intel_shared_dpll_config tmp_pll_config;
> > +	int link_rate, max_link_rate_index, link_rate_index;
> > +	uint8_t lane_count;
> > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> > +	bool ret = false;
> > +
> > +	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
> > +						       max_link_rate);
> > +	if (max_link_rate_index < 0) {
> > +		DRM_ERROR("Invalid Link Rate\n");
> > +		return false;
> > +	}
> > +
> > +	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
> > +		for (link_rate_index = max_link_rate_index;
> > +		     link_rate_index >= 0; link_rate_index--) {
> > +			link_rate = common_rates[link_rate_index];
> > +			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
> > +			if (pll == NULL) {
> > +				DRM_ERROR("Could not find DPLL for link training.\n");
> > +				return false;
> > +			}
> > +			tmp_pll_config = pll->config;
> > +			pll->funcs.enable(dev_priv, pll);
> > +
> > +			intel_dp_set_link_params(intel_dp, link_rate,
> > +						 lane_count, link_mst);
> > +
> > +			intel_ddi_clk_select(encoder, pll);
> > +			intel_prepare_dp_ddi_buffers(encoder);
> > +			intel_ddi_init_dp_buf_reg(encoder);
> > +			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> > +			ret = intel_dp_start_link_train(intel_dp);
> > +			if (ret)
> > +				break;
> > +
> > +			/* Disable port followed by PLL for next
> > +			 *retry/clean up
> > +			 */
> > +			intel_ddi_post_disable(encoder, NULL, NULL);
> > +			pll->funcs.disable(dev_priv, pll);
> > +			pll->config = tmp_pll_config;
> > +		}
> > +		if (ret) {
> > +			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
> > +				      link_rate, lane_count);
> > +			break;
> > +		}
> > +	}
> > +
> > +	intel_dp_stop_link_train(intel_dp);
> > +
> > +	if (!lane_count)
> > +		DRM_ERROR("Link Training Failed\n");
> > +
> > +	return ret;
> > +}
> > +
> >  void intel_ddi_init(struct drm_device *dev, enum port port)
> >  {
> >  	struct drm_i915_private *dev_priv = to_i915(dev);
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index 69cee9b..d81c67cb 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
> >  	return rates[len - 1];
> >  }
> >  
> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> > +			     int link_rate)
> > +{
> > +	int common_len;
> > +	int index;
> > +
> > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> > +	for (index = 0; index < common_len; index++) {
> > +		if (link_rate == common_rates[common_len - index - 1])
> > +			return common_len - index - 1;
> > +	}
> > +
> > +	return -1;
> > +}
> > +
> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
> >  {
> >  	return rate_to_index(rate, intel_dp->sink_rates);
> > diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> > index c438b02..6eb5eb6 100644
> > --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> > +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> > @@ -313,9 +313,16 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
> >  				DP_TRAINING_PATTERN_DISABLE);
> >  }
> >  
> > -void
> > +bool
> >  intel_dp_start_link_train(struct intel_dp *intel_dp)
> >  {
> > -	intel_dp_link_training_clock_recovery(intel_dp);
> > -	intel_dp_link_training_channel_equalization(intel_dp);
> > +	bool ret;
> > +
> > +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
> > +		ret = intel_dp_link_training_channel_equalization(intel_dp);
> > +		if (ret)
> > +			return true;
> > +	}
> > +
> > +	return false;
> >  }
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index 8fd16ad..08cb571 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
> >  			 struct intel_crtc_state *pipe_config);
> >  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
> >  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
> > +bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> > +			  uint8_t max_lane_count, bool link_mst);
> >  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
> >  						  int clock);
> >  unsigned int intel_fb_align_height(struct drm_device *dev,
> > @@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
> >  void intel_dp_set_link_params(struct intel_dp *intel_dp,
> >  			      int link_rate, uint8_t lane_count,
> >  			      bool link_mst);
> > -void intel_dp_start_link_train(struct intel_dp *intel_dp);
> > +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
> >  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
> >  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
> >  void intel_dp_encoder_reset(struct drm_encoder *encoder);
> > @@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
> >  void intel_dp_mst_suspend(struct drm_device *dev);
> >  void intel_dp_mst_resume(struct drm_device *dev);
> >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> > +			     int link_rate);
> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
> >  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
> >  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-27 15:25         ` Manasi Navare
@ 2016-09-27 17:07           ` Jani Nikula
  2016-09-29  6:41             ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-27 17:07 UTC (permalink / raw)
  To: Manasi Navare; +Cc: intel-gfx

On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
>> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > According to the DisplayPort spec, in case of clock recovery (CR)
>> > failure the link training sequence should fall back to a lower link
>> > rate, followed by a lower lane count, until CR succeeds.
>> > On CR success, the sequence proceeds with channel EQ.
>> > In case of channel EQ failures, it should fall back to a lower link
>> > rate and lane count and start the CR phase again.
>> 
>> This change makes the link training start at the max lane count and max
>> link rate. This is not ideal, as it wastes the link. And it is not a
>> spec requirement. "The Link Policy Maker of the upstream device may
>> choose any link count and link rate as long as they do not exceed the
>> capabilities of the DP receiver."
>> 
>> Our current code starts at the minimum required bandwidth for the mode,
>> therefore we can't fall back to lower link rate and lane count without
>> reducing the mode.
>> 
>> AFAICT this patch here makes it possible for the link bandwidth to drop
>> below what is required for the mode. This is unacceptable.
>> 
>> BR,
>> Jani.
>> 
>>
>
> Thanks Jani for your review comments.
> Yes, in this change we start at the max link rate and lane count. This
> change was made according to the design document discussions we had
> before starting this DP redesign project. The main reason for starting
> at the max link rate and max lane count was to ensure proper behavior
> for DP MST. In case of DP MST, we want to train the link at the
> maximum supported link rate/lane count, based on an early/upfront
> link training result, so that we don't fail when we try to connect a
> higher resolution monitor as a second monitor. This is a trade-off
> between wasting link bandwidth or drawing higher power vs. needing to
> retrain for every monitor that requests a higher BW in case of DP MST.

We already train at max bandwidth for DP MST, which seems to be the
sensible thing to do.

> Actually, this is also the reason for enabling upfront link training in
> the following patch, where we train the link well ahead of the modeset
> sequence to find the link rate and lane count values at which the link
> can be successfully trained. Link training through modeset will then
> always start at the upfront values (the maximum supported values of
> lane count and link rate based on upfront link training).

I don't see a need to do this for DP SST.

> As per the CTS, test 4.3.1.4 requires that you fall back to a lower
> link rate after trying to train at the maximum link rate advertised
> through the DPCD registers.

That test does not require the source DUT to default to maximum lane
count or link rate of the sink. The source may freely choose the lane
count and link rate as long as they don't exceed sink capabilities.

For the purposes of the test, the test setup can request specific
parameters to be used, but that does not mean using maximum by
*default*.

We currently lack the feature to reduce lane count and link rate. The
key thing to understand here is that starting at max and reducing down to
the sufficient parameters for the mode (which is where we start now) offers
no real benefit for any use case. What we're lacking is a feature to
reduce the link parameters *below* what's required by the mode the
userspace wants. This can only be achieved through cooperation with
userspace.

> This will not drop the link BW to a number below what is required for
> the mode because the requested modes are pruned or validated in
> intel_dp_mode_valid based on the upfront link training results in the
> following patch. And these values are used here as the starting values
> of link rate and lane count.

Each patch must be a worthwhile change on its own. By my reading of this
patch, we can go under the required bandwidth. You can't justify that by
saying the follow-up patch fixes it.

> I almost feel that the upfront link training patch and this patch should
> be combined, so that instead of starting from the max link rate and lane
> count it is clear that we are starting from the upfront values.

I am still reading and gathering more feedback on the upfront link
training patch. I will get back to you. But the impression I'm currently
getting is that we can't do this. The upfront link training patch was
originally written for USB type C. But if DP compliance has priority,
the order of business should be getting compliance without upfront link
training. I am also still not convinced upfront link training is
required for compliance.

To be continued...

BR,
Jani.



>
> Regards,
> Manasi 
>> >
>> > v7:
>> > * Address readability concerns (Mika Kahola)
>> > v6:
>> > * Do not split quoted string across line (Mika Kahola)
>> > v5:
>> > * Reset the link rate index to the max link rate index
>> > before lowering the lane count (Jani Nikula)
>> > * Use the paradigm for loop in intel_dp_link_rate_index
>> > v4:
>> > * Fixed the link rate fallback loop (Manasi Navare)
>> > v3:
>> > * Fixed some rebase issues (Mika Kahola)
>> > v2:
>> > * Add a helper function to return index of requested link rate
>> > into common_rates array
>> > * Changed the link rate fallback loop to make use
>> > of common_rates array (Mika Kahola)
>> > * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
>> >
>> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> > ---
>> >  drivers/gpu/drm/i915/intel_ddi.c              | 113 +++++++++++++++++++++++---
>> >  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
>> >  drivers/gpu/drm/i915/intel_dp_link_training.c |  13 ++-
>> >  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
>> >  4 files changed, 133 insertions(+), 14 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
>> > index 8065a5f..093038c 100644
>> > --- a/drivers/gpu/drm/i915/intel_ddi.c
>> > +++ b/drivers/gpu/drm/i915/intel_ddi.c
>> > @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
>> >  	}
>> >  }
>> >  
>> > -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>> > +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
>> >  				    int link_rate, uint32_t lane_count,
>> > -				    struct intel_shared_dpll *pll,
>> > -				    bool link_mst)
>> > +				    struct intel_shared_dpll *pll)
>> >  {
>> >  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>> >  	enum port port = intel_ddi_get_encoder_port(encoder);
>> >  
>> >  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
>> > -				 link_mst);
>> > -	if (encoder->type == INTEL_OUTPUT_EDP)
>> > -		intel_edp_panel_on(intel_dp);
>> > +				 false);
>> > +
>> > +	intel_edp_panel_on(intel_dp);
>> >  
>> >  	intel_ddi_clk_select(encoder, pll);
>> >  	intel_prepare_dp_ddi_buffers(encoder);
>> > @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>> >  		intel_dp_stop_link_train(intel_dp);
>> >  }
>> >  
>> > +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>> > +				    int link_rate, uint32_t lane_count,
>> > +				    struct intel_shared_dpll *pll,
>> > +				    bool link_mst)
>> > +{
>> > +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>> > +	struct intel_shared_dpll_config tmp_pll_config;
>> > +
>> > +	/* Disable the PLL and obtain the PLL for Link Training
>> > +	 * that starts with highest link rate and lane count.
>> > +	 */
>> > +	tmp_pll_config = pll->config;
>> > +	pll->funcs.disable(dev_priv, pll);
>> > +	pll->config.crtc_mask = 0;
>> > +
>> > +	/* If Link Training fails, send a uevent to generate a hotplug */
>> > +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
>> > +		drm_kms_helper_hotplug_event(encoder->base.dev);
>> > +	pll->config = tmp_pll_config;
>> > +}
>> > +
>> >  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
>> >  				      bool has_hdmi_sink,
>> >  				      struct drm_display_mode *adjusted_mode,
>> > @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
>> >  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
>> >  	int type = intel_encoder->type;
>> >  
>> > -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
>> > +	if (type == INTEL_OUTPUT_EDP)
>> > +		intel_ddi_pre_enable_edp(intel_encoder,
>> > +					crtc->config->port_clock,
>> > +					crtc->config->lane_count,
>> > +					crtc->config->shared_dpll);
>> > +
>> > +	if (type == INTEL_OUTPUT_DP)
>> >  		intel_ddi_pre_enable_dp(intel_encoder,
>> >  					crtc->config->port_clock,
>> >  					crtc->config->lane_count,
>> >  					crtc->config->shared_dpll,
>> >  					intel_crtc_has_type(crtc->config,
>> >  							    INTEL_OUTPUT_DP_MST));
>> > -	}
>> > -	if (type == INTEL_OUTPUT_HDMI) {
>> > +
>> > +	if (type == INTEL_OUTPUT_HDMI)
>> >  		intel_ddi_pre_enable_hdmi(intel_encoder,
>> >  					  crtc->config->has_hdmi_sink,
>> >  					  &crtc->config->base.adjusted_mode,
>> >  					  crtc->config->shared_dpll);
>> > -	}
>> > +
>> >  }
>> >  
>> >  static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
>> > @@ -2435,6 +2462,72 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
>> >  	return pll;
>> >  }
>> >  
>> > +bool
>> > +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>> > +		     uint8_t max_lane_count, bool link_mst)
>> > +{
>> > +	struct intel_connector *connector = intel_dp->attached_connector;
>> > +	struct intel_encoder *encoder = connector->encoder;
>> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>> > +	struct intel_shared_dpll *pll;
>> > +	struct intel_shared_dpll_config tmp_pll_config;
>> > +	int link_rate, max_link_rate_index, link_rate_index;
>> > +	uint8_t lane_count;
>> > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
>> > +	bool ret = false;
>> > +
>> > +	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
>> > +						       max_link_rate);
>> > +	if (max_link_rate_index < 0) {
>> > +		DRM_ERROR("Invalid Link Rate\n");
>> > +		return false;
>> > +	}
>> > +
>> > +	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
>> > +		for (link_rate_index = max_link_rate_index;
>> > +		     link_rate_index >= 0; link_rate_index--) {
>> > +			link_rate = common_rates[link_rate_index];
>> > +			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
>> > +			if (pll == NULL) {
>> > +				DRM_ERROR("Could not find DPLL for link training.\n");
>> > +				return false;
>> > +			}
>> > +			tmp_pll_config = pll->config;
>> > +			pll->funcs.enable(dev_priv, pll);
>> > +
>> > +			intel_dp_set_link_params(intel_dp, link_rate,
>> > +						 lane_count, link_mst);
>> > +
>> > +			intel_ddi_clk_select(encoder, pll);
>> > +			intel_prepare_dp_ddi_buffers(encoder);
>> > +			intel_ddi_init_dp_buf_reg(encoder);
>> > +			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
>> > +			ret = intel_dp_start_link_train(intel_dp);
>> > +			if (ret)
>> > +				break;
>> > +
>> > +			/* Disable port followed by PLL for next
>> > +			 *retry/clean up
>> > +			 */
>> > +			intel_ddi_post_disable(encoder, NULL, NULL);
>> > +			pll->funcs.disable(dev_priv, pll);
>> > +			pll->config = tmp_pll_config;
>> > +		}
>> > +		if (ret) {
>> > +			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
>> > +				      link_rate, lane_count);
>> > +			break;
>> > +		}
>> > +	}
>> > +
>> > +	intel_dp_stop_link_train(intel_dp);
>> > +
>> > +	if (!lane_count)
>> > +		DRM_ERROR("Link Training Failed\n");
>> > +
>> > +	return ret;
>> > +}
>> > +
>> >  void intel_ddi_init(struct drm_device *dev, enum port port)
>> >  {
>> >  	struct drm_i915_private *dev_priv = to_i915(dev);
>> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> > index 69cee9b..d81c67cb 100644
>> > --- a/drivers/gpu/drm/i915/intel_dp.c
>> > +++ b/drivers/gpu/drm/i915/intel_dp.c
>> > @@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
>> >  	return rates[len - 1];
>> >  }
>> >  
>> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
>> > +			     int link_rate)
>> > +{
>> > +	int common_len;
>> > +	int index;
>> > +
>> > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
>> > +	for (index = 0; index < common_len; index++) {
>> > +		if (link_rate == common_rates[common_len - index - 1])
>> > +			return common_len - index - 1;
>> > +	}
>> > +
>> > +	return -1;
>> > +}
>> > +
>> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
>> >  {
>> >  	return rate_to_index(rate, intel_dp->sink_rates);
>> > diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> > index c438b02..6eb5eb6 100644
>> > --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
>> > +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> > @@ -313,9 +313,16 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
>> >  				DP_TRAINING_PATTERN_DISABLE);
>> >  }
>> >  
>> > -void
>> > +bool
>> >  intel_dp_start_link_train(struct intel_dp *intel_dp)
>> >  {
>> > -	intel_dp_link_training_clock_recovery(intel_dp);
>> > -	intel_dp_link_training_channel_equalization(intel_dp);
>> > +	bool ret;
>> > +
>> > +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
>> > +		ret = intel_dp_link_training_channel_equalization(intel_dp);
>> > +		if (ret)
>> > +			return true;
>> > +	}
>> > +
>> > +	return false;
>> >  }
>> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> > index 8fd16ad..08cb571 100644
>> > --- a/drivers/gpu/drm/i915/intel_drv.h
>> > +++ b/drivers/gpu/drm/i915/intel_drv.h
>> > @@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>> >  			 struct intel_crtc_state *pipe_config);
>> >  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>> >  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
>> > +bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>> > +			  uint8_t max_lane_count, bool link_mst);
>> >  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
>> >  						  int clock);
>> >  unsigned int intel_fb_align_height(struct drm_device *dev,
>> > @@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>> >  void intel_dp_set_link_params(struct intel_dp *intel_dp,
>> >  			      int link_rate, uint8_t lane_count,
>> >  			      bool link_mst);
>> > -void intel_dp_start_link_train(struct intel_dp *intel_dp);
>> > +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
>> >  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
>> >  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
>> >  void intel_dp_encoder_reset(struct drm_encoder *encoder);
>> > @@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
>> >  void intel_dp_mst_suspend(struct drm_device *dev);
>> >  void intel_dp_mst_resume(struct drm_device *dev);
>> >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
>> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
>> > +			     int link_rate);
>> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
>> >  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
>> >  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
>> 
>> -- 
>> Jani Nikula, Intel Open Source Technology Center

-- 
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-26 13:41     ` Jani Nikula
  2016-09-27 13:39       ` Jani Nikula
@ 2016-09-27 21:55       ` Manasi Navare
  2016-09-28  7:38         ` Jani Nikula
  1 sibling, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-27 21:55 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Mon, Sep 26, 2016 at 04:41:27PM +0300, Jani Nikula wrote:
> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > While configuring the pipe during modeset, it should use
> > max clock and max lane count and reduce the bpp until
> > the requested mode rate is less than or equal to
> > available link BW.
> > This is required to pass DP Compliance.
> 
> As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
> link policy maker can freely choose the link parameters as long as the
> sink supports them.
> 
> BR,
> Jani.
> 
>

Thanks for your review feedback.
This change was driven by the Video Pattern generation tests in the CTS
spec. E.g. in test 4.3.3.1, the test requests 640x480 at a max link rate
of 2.7 Gbps and 4 lanes. The test passes if the source sets the link rate
to 2.7 Gbps and the lane count to 4. But in the existing implementation,
this video mode request triggers a modeset, and the compute_config
function starts with the lowest link rate and lane count, training the
link at 1.62 Gbps and 4 lanes. That does not match the expected values
(link rate = 2.7 Gbps, lane count = 4), so the test fails.

Regards
Manasi
 
> >
> > v3:
> > * Add Debug print if requested mode cannot be supported
> > during modeset (Dhinakaran Pandiyan)
> > v2:
> > * Removed the loop since we use max values of clock
> > and lane count (Dhinakaran Pandiyan)
> >
> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
> >  1 file changed, 8 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index d81c67cb..65b4559 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> >  	for (; bpp >= 6*3; bpp -= 2*3) {
> >  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> >  						   bpp);
> > +		clock = max_clock;
> > +		lane_count = max_lane_count;
> > +		link_clock = common_rates[clock];
> > +		link_avail = intel_dp_max_data_rate(link_clock,
> > +						    lane_count);
> >  
> > -		for (clock = min_clock; clock <= max_clock; clock++) {
> > -			for (lane_count = min_lane_count;
> > -				lane_count <= max_lane_count;
> > -				lane_count <<= 1) {
> > -
> > -				link_clock = common_rates[clock];
> > -				link_avail = intel_dp_max_data_rate(link_clock,
> > -								    lane_count);
> > -
> > -				if (mode_rate <= link_avail) {
> > -					goto found;
> > -				}
> > -			}
> > -		}
> > +		if (mode_rate <= link_avail)
> > +			goto found;
> >  	}
> >  
> > +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
> >  	return false;
> >  
> >  found:
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-27 13:39       ` Jani Nikula
@ 2016-09-27 22:13         ` Manasi Navare
  2016-09-28  7:14           ` Jani Nikula
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-27 22:13 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Tue, Sep 27, 2016 at 04:39:38PM +0300, Jani Nikula wrote:
> On Mon, 26 Sep 2016, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> > On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> While configuring the pipe during modeset, it should use
> >> max clock and max lane count and reduce the bpp until
> >> the requested mode rate is less than or equal to
> >> available link BW.
> >> This is required to pass DP Compliance.
> >
> > As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
> > link policy maker can freely choose the link parameters as long as the
> > sink supports them.
> 
> Also double checked the DP link CTS spec. AFAICT none of the tests
> expect the source to use the max clock or max lane count
> directly. (Automated test request is another matter, and we should look
> at it.)
> 
> I think patches 1-2 are based on an incorrect interpretation of the spec
> and tests.
> 
> BR,
> Jani.
>

I have the patches for handling the automated test request from DPR for
compliance testing as mentioned in the CTS spec. But they have dependencies
on these patches (1-6) so I will submit them after these get merged.

Regards
Manasi
 
> 
> >
> > BR,
> > Jani.
> >
> >
> >>
> >> v3:
> >> * Add Debug print if requested mode cannot be supported
> >> during modeset (Dhinakaran Pandiyan)
> >> v2:
> >> * Removed the loop since we use max values of clock
> >> and lane count (Dhinakaran Pandiyan)
> >>
> >> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> >> ---
> >>  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
> >>  1 file changed, 8 insertions(+), 14 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> >> index d81c67cb..65b4559 100644
> >> --- a/drivers/gpu/drm/i915/intel_dp.c
> >> +++ b/drivers/gpu/drm/i915/intel_dp.c
> >> @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> >>  	for (; bpp >= 6*3; bpp -= 2*3) {
> >>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> >>  						   bpp);
> >> +		clock = max_clock;
> >> +		lane_count = max_lane_count;
> >> +		link_clock = common_rates[clock];
> >> +		link_avail = intel_dp_max_data_rate(link_clock,
> >> +						    lane_count);
> >>  
> >> -		for (clock = min_clock; clock <= max_clock; clock++) {
> >> -			for (lane_count = min_lane_count;
> >> -				lane_count <= max_lane_count;
> >> -				lane_count <<= 1) {
> >> -
> >> -				link_clock = common_rates[clock];
> >> -				link_avail = intel_dp_max_data_rate(link_clock,
> >> -								    lane_count);
> >> -
> >> -				if (mode_rate <= link_avail) {
> >> -					goto found;
> >> -				}
> >> -			}
> >> -		}
> >> +		if (mode_rate <= link_avail)
> >> +			goto found;
> >>  	}
> >>  
> >> +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
> >>  	return false;
> >>  
> >>  found:
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN
  2016-09-26 13:45     ` Jani Nikula
@ 2016-09-28  0:03       ` Manasi Navare
  0 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-28  0:03 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Mon, Sep 26, 2016 at 04:45:05PM +0300, Jani Nikula wrote:
> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > Replace dev with dev_priv and INTEL_INFO with INTEL_GEN
> 
> Patches like this could easily be sent separately, or at the very least as
> the first patches in the series. Then we could have merged this
> already. Now it conflicts, please rebase.
> 
> BR,
> Jani.
>

Thanks for your feedback. Yes I have sent it as a separate patch.

Regards
Manasi 
> >
> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_dp.c | 14 +++++++-------
> >  1 file changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> > index 61d71fa..8061e32 100644
> > --- a/drivers/gpu/drm/i915/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> > @@ -230,13 +230,13 @@ static int
> >  intel_dp_source_rates(struct intel_dp *intel_dp, const int **source_rates)
> >  {
> >  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> > -	struct drm_device *dev = dig_port->base.base.dev;
> > +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
> >  	int size;
> >  
> > -	if (IS_BROXTON(dev)) {
> > +	if (IS_BROXTON(dev_priv)) {
> >  		*source_rates = bxt_rates;
> >  		size = ARRAY_SIZE(bxt_rates);
> > -	} else if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
> > +	} else if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
> >  		*source_rates = skl_rates;
> >  		size = ARRAY_SIZE(skl_rates);
> >  	} else {
> > @@ -1359,14 +1359,14 @@ intel_dp_aux_init(struct intel_dp *intel_dp)
> >  bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
> >  {
> >  	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
> > -	struct drm_device *dev = dig_port->base.base.dev;
> > +	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
> >  
> >  	/* WaDisableHBR2:skl */
> > -	if (IS_SKL_REVID(dev, 0, SKL_REVID_B0))
> > +	if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0))
> >  		return false;
> >  
> > -	if ((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || IS_BROADWELL(dev) ||
> > -	    (INTEL_INFO(dev)->gen >= 9))
> > +	if ((IS_HASWELL(dev_priv) && !IS_HSW_ULX(dev_priv)) ||
> > +	    IS_BROADWELL(dev_priv) || (INTEL_GEN(dev_priv) >= 9))
> >  		return true;
> >  	else
> >  		return false;
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-27 22:13         ` Manasi Navare
@ 2016-09-28  7:14           ` Jani Nikula
  2016-09-28 22:30             ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-28  7:14 UTC (permalink / raw)
  To: Manasi Navare; +Cc: intel-gfx

On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> On Tue, Sep 27, 2016 at 04:39:38PM +0300, Jani Nikula wrote:
>> On Mon, 26 Sep 2016, Jani Nikula <jani.nikula@linux.intel.com> wrote:
>> > On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> >> While configuring the pipe during modeset, it should use
>> >> max clock and max lane count and reduce the bpp until
>> >> the requested mode rate is less than or equal to
>> >> available link BW.
>> >> This is required to pass DP Compliance.
>> >
>> > As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
>> > link policy maker can freely choose the link parameters as long as the
>> > sink supports them.
>> 
>> Also double checked the DP link CTS spec. AFAICT none of the tests
>> expect the source to use the max clock or max lane count
>> directly. (Automated test request is another matter, and we should look
>> at it.)
>> 
>> I think patches 1-2 are based on an incorrect interpretation of the spec
>> and tests.
>> 
>> BR,
>> Jani.
>>
>
> I have the patches for handling the automated test request from DPR for
> compliance testing as mentioned in the CTS spec. But they have dependencies
> on these patches (1-6) so I will submit them after these get merged.

We need to re-evaluate the ordering of the patches.

BR,
Jani.

>
> Regards
> Manasi
>  
>> 
>> >
>> > BR,
>> > Jani.
>> >
>> >
>> >>
>> >> v3:
>> >> * Add Debug print if requested mode cannot be supported
>> >> during modeset (Dhinakaran Pandiyan)
>> >> v2:
>> >> * Removed the loop since we use max values of clock
>> >> and lane count (Dhinakaran Pandiyan)
>> >>
>> >> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> >> ---
>> >>  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
>> >>  1 file changed, 8 insertions(+), 14 deletions(-)
>> >>
>> >> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> >> index d81c67cb..65b4559 100644
>> >> --- a/drivers/gpu/drm/i915/intel_dp.c
>> >> +++ b/drivers/gpu/drm/i915/intel_dp.c
>> >> @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>> >>  	for (; bpp >= 6*3; bpp -= 2*3) {
>> >>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>> >>  						   bpp);
>> >> +		clock = max_clock;
>> >> +		lane_count = max_lane_count;
>> >> +		link_clock = common_rates[clock];
>> >> +		link_avail = intel_dp_max_data_rate(link_clock,
>> >> +						    lane_count);
>> >>  
>> >> -		for (clock = min_clock; clock <= max_clock; clock++) {
>> >> -			for (lane_count = min_lane_count;
>> >> -				lane_count <= max_lane_count;
>> >> -				lane_count <<= 1) {
>> >> -
>> >> -				link_clock = common_rates[clock];
>> >> -				link_avail = intel_dp_max_data_rate(link_clock,
>> >> -								    lane_count);
>> >> -
>> >> -				if (mode_rate <= link_avail) {
>> >> -					goto found;
>> >> -				}
>> >> -			}
>> >> -		}
>> >> +		if (mode_rate <= link_avail)
>> >> +			goto found;
>> >>  	}
>> >>  
>> >> +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
>> >>  	return false;
>> >>  
>> >>  found:
>> 
>> -- 
>> Jani Nikula, Intel Open Source Technology Center

-- 
Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-27 21:55       ` Manasi Navare
@ 2016-09-28  7:38         ` Jani Nikula
  2016-09-28 16:45           ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-28  7:38 UTC (permalink / raw)
  To: Manasi Navare; +Cc: intel-gfx

On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> On Mon, Sep 26, 2016 at 04:41:27PM +0300, Jani Nikula wrote:
>> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > While configuring the pipe during modeset, it should use
>> > max clock and max lane count and reduce the bpp until
>> > the requested mode rate is less than or equal to
>> > available link BW.
>> > This is required to pass DP Compliance.
>> 
>> As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
>> link policy maker can freely choose the link parameters as long as the
>> sink supports them.
>> 
>> BR,
>> Jani.
>> 
>>
>
> Thanks for your review feedback.
> This change was driven by Video Pattern generation tests in CTS spec. Eg: In
> test 4.3.3.1, the test requests 640x480 @ max link rate of 2.7Gbps and 4 lanes.
> The test will pass if it sets the link rate to 2.7 and lane count = 4.
> But in the existing implementation, this video mode request triggers a modeset
> but the compute_config function starts with the lowest link rate and lane count and
> trains the link at 1.62 and 4 lanes which does not match the expected values of link
> rate = 2.7 and lane count = 4 and the test fails. 

Again, the test does not require us to use the maximum parameters by
default. It allows us to use optimal parameters by default, and use the
sink issued automated test request to change the link parameters to what
the test wants.

Look at the table in CTS 4.3.3.1. There's a test for 640x480 with 1.62
Gbps and 1 lane. And then there's a test for 640x480 with 2.7 Gbps and 4
lanes. What you're suggesting is to use excessive bandwidth for the mode
by default just because the test has been designed to be lax and allow
certain parameters at a minimum, instead of requiring optimal
parameters.

I do not think this is a change we want to make for DP SST, and it is
not a DP spec or compliance requirement.

BR,
Jani.


>
> Regards
> Manasi
>  
>> >
>> > v3:
>> > * Add Debug print if requested mode cannot be supported
>> > during modeset (Dhinakaran Pandiyan)
>> > v2:
>> > * Removed the loop since we use max values of clock
>> > and lane count (Dhinakaran Pandiyan)
>> >
>> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> > ---
>> >  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
>> >  1 file changed, 8 insertions(+), 14 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> > index d81c67cb..65b4559 100644
>> > --- a/drivers/gpu/drm/i915/intel_dp.c
>> > +++ b/drivers/gpu/drm/i915/intel_dp.c
>> > @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>> >  	for (; bpp >= 6*3; bpp -= 2*3) {
>> >  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>> >  						   bpp);
>> > +		clock = max_clock;
>> > +		lane_count = max_lane_count;
>> > +		link_clock = common_rates[clock];
>> > +		link_avail = intel_dp_max_data_rate(link_clock,
>> > +						    lane_count);
>> >  
>> > -		for (clock = min_clock; clock <= max_clock; clock++) {
>> > -			for (lane_count = min_lane_count;
>> > -				lane_count <= max_lane_count;
>> > -				lane_count <<= 1) {
>> > -
>> > -				link_clock = common_rates[clock];
>> > -				link_avail = intel_dp_max_data_rate(link_clock,
>> > -								    lane_count);
>> > -
>> > -				if (mode_rate <= link_avail) {
>> > -					goto found;
>> > -				}
>> > -			}
>> > -		}
>> > +		if (mode_rate <= link_avail)
>> > +			goto found;
>> >  	}
>> >  
>> > +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
>> >  	return false;
>> >  
>> >  found:
>> 
>> -- 
>> Jani Nikula, Intel Open Source Technology Center

-- 
Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-28  7:38         ` Jani Nikula
@ 2016-09-28 16:45           ` Manasi Navare
  2016-09-29 14:52             ` Jani Nikula
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-28 16:45 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Wed, Sep 28, 2016 at 10:38:37AM +0300, Jani Nikula wrote:
> On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > On Mon, Sep 26, 2016 at 04:41:27PM +0300, Jani Nikula wrote:
> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> > While configuring the pipe during modeset, it should use
> >> > max clock and max lane count and reduce the bpp until
> >> > the requested mode rate is less than or equal to
> >> > available link BW.
> >> > This is required to pass DP Compliance.
> >> 
> >> As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
> >> link policy maker can freely choose the link parameters as long as the
> >> sink supports them.
> >> 
> >> BR,
> >> Jani.
> >> 
> >>
> >
> > Thanks for your review feedback.
> > This change was driven by Video Pattern generation tests in CTS spec. Eg: In
> > test 4.3.3.1, the test requests 640x480 @ max link rate of 2.7Gbps and 4 lanes.
> > The test will pass if it sets the link rate to 2.7 and lane count = 4.
> > But in the existing implementation, this video mode request triggers a modeset
> > but the compute_config function starts with the lowest link rate and lane count and
> > trains the link at 1.62 and 4 lanes which does not match the expected values of link
> > rate = 2.7 and lane count = 4 and the test fails. 
> 
> Again, the test does not require us to use the maximum parameters by
> default. It allows us to use optimal parameters by default, and use the
> sink issued automated test request to change the link parameters to what
> the test wants.
> 
> Look at the table in CTS 4.3.3.1. There's a test for 640x480 with 1.62
> Gbps and 1 lane. And then there's a test for 640x480 with 2.7 Gbps and 4
> lanes. What you're suggesting is to use excessive bandwidth for the mode
> by default just because the test has been designed to be lax and allow
> certain parameters at a minimum, instead of requiring optimal
> parameters.
> 
> I do not think this is a change we want to make for DP SST, and it is
> not a DP spec or compliance requirement.
> 
> BR,
> Jani.
> 
>

So if we let the driver choose the optimal link rate and lane count, then for
640x480 it will choose 1.62 and 4 lanes. The automated test request will then
ask for the maximum link rate, let's say 5.4, and 4 lanes.
At that point we will have to reset the PLLs and the clocks to train the link at
the 5.4 link rate and 4 lane count before proceeding to handle the video pattern
request. Are you recommending doing the entire PLL setup and retraining of the
link here at the target link rate, which will be the max link rate?

What about test 4.3.1.4, which expects the link rate to fall back to a lower
link rate due to forced failures in the CR/Channel EQ phases? For those cases
we do need upfront link training, starting the link training at the upfront
values and falling back to lower values. What do you think?
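
For illustration, the fallback order being discussed (drop link rate first, then
lane count, per the CR/EQ failure rules) could be sketched roughly as below. This
is a hedged sketch in plain C with a simulated training result; none of these
names are the real i915 functions:

```c
#include <assert.h>
#include <stdbool.h>

/* Common link rates in kHz, lowest to highest, as in common_rates[]. */
static const int rates[] = { 162000, 270000, 540000 };

/* Stand-in for intel_dp_start_link_train(): pretend this sink/cable
 * combination only trains at <= 2.7 Gbps with <= 2 lanes. */
static bool fake_train(int rate, int lanes)
{
	return rate <= 270000 && lanes <= 2;
}

/* On failure, drop the link rate first; once the lowest rate has also
 * failed, halve the lane count and start again from the top rate. */
static bool train_with_fallback(int max_rate_idx, int max_lanes,
				int *out_rate, int *out_lanes)
{
	for (int lanes = max_lanes; lanes > 0; lanes >>= 1) {
		for (int i = max_rate_idx; i >= 0; i--) {
			if (fake_train(rates[i], lanes)) {
				*out_rate = rates[i];
				*out_lanes = lanes;
				return true;
			}
		}
	}
	return false;
}
```

With the simulated sink above, starting at 5.4 Gbps x 4 lanes the loop settles
at 2.7 Gbps x 2 lanes, mirroring the CTS-driven fallback sequence under debate.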

Regards
Manasi 
> >
> > Regards
> > Manasi
> >  
> >> >
> >> > v3:
> >> > * Add Debug print if requested mode cannot be supported
> >> > during modeset (Dhinakaran Pandiyan)
> >> > v2:
> >> > * Removed the loop since we use max values of clock
> >> > and lane count (Dhinakaran Pandiyan)
> >> >
> >> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> >> > ---
> >> >  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
> >> >  1 file changed, 8 insertions(+), 14 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> >> > index d81c67cb..65b4559 100644
> >> > --- a/drivers/gpu/drm/i915/intel_dp.c
> >> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> >> > @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> >> >  	for (; bpp >= 6*3; bpp -= 2*3) {
> >> >  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> >> >  						   bpp);
> >> > +		clock = max_clock;
> >> > +		lane_count = max_lane_count;
> >> > +		link_clock = common_rates[clock];
> >> > +		link_avail = intel_dp_max_data_rate(link_clock,
> >> > +						    lane_count);
> >> >  
> >> > -		for (clock = min_clock; clock <= max_clock; clock++) {
> >> > -			for (lane_count = min_lane_count;
> >> > -				lane_count <= max_lane_count;
> >> > -				lane_count <<= 1) {
> >> > -
> >> > -				link_clock = common_rates[clock];
> >> > -				link_avail = intel_dp_max_data_rate(link_clock,
> >> > -								    lane_count);
> >> > -
> >> > -				if (mode_rate <= link_avail) {
> >> > -					goto found;
> >> > -				}
> >> > -			}
> >> > -		}
> >> > +		if (mode_rate <= link_avail)
> >> > +			goto found;
> >> >  	}
> >> >  
> >> > +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
> >> >  	return false;
> >> >  
> >> >  found:
> >> 
> >> -- 
> >> Jani Nikula, Intel Open Source Technology Center
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-28  7:14           ` Jani Nikula
@ 2016-09-28 22:30             ` Manasi Navare
  0 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-09-28 22:30 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Wed, Sep 28, 2016 at 10:14:45AM +0300, Jani Nikula wrote:
> On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > On Tue, Sep 27, 2016 at 04:39:38PM +0300, Jani Nikula wrote:
> >> On Mon, 26 Sep 2016, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> >> > On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> >> While configuring the pipe during modeset, it should use
> >> >> max clock and max lane count and reduce the bpp until
> >> >> the requested mode rate is less than or equal to
> >> >> available link BW.
> >> >> This is required to pass DP Compliance.
> >> >
> >> > As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
> >> > link policy maker can freely choose the link parameters as long as the
> >> > sink supports them.
> >> 
> >> Also double checked the DP link CTS spec. AFAICT none of the tests
> >> expect the source to use the max clock or max lane count
> >> directly. (Automated test request is another matter, and we should look
> >> at it.)
> >> 
> >> I think patches 1-2 are based on an incorrect interpretation of the spec
> >> and tests.
> >> 
> >> BR,
> >> Jani.
> >>
> >
> > I have the patches for handling the automated test request from DPR for
> > compliance testing as mentioned in the CTS spec. But they have dependencies
> > on these patches (1-6) so I will submit them after these get merged.
> 
> We need to re-evaluate the ordering of the patches.
> 
> BR,
> Jani.
>

Thanks for your feedback.
I am reevaluating the need for this patch based on your recommendation.
I am currently testing it keeping the optimal values of the link rate
here and only training at the max link rate when we receive the test request
from the DPR-120.
In that case we would need to configure the PLLs based on the requested target
link rate, train the link at that rate, and after the test is done restore
the original state of the PLLs. Does that sound like a good approach; is this
what you were suggesting?
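
A rough sketch of that save/retrain/restore flow follows. The types and helpers
here are hypothetical stand-ins, not the i915 API; the real code would program
actual DPLL registers and could fail training:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the shared DPLL state. */
struct fake_pll {
	int link_rate;   /* kHz */
	int lane_count;
};

/* Pretend to reprogram the PLL/clocks and train the link at the given
 * parameters; always succeeds in this sketch. */
static bool retrain(struct fake_pll *pll, int rate, int lanes)
{
	pll->link_rate = rate;
	pll->lane_count = lanes;
	return true;
}

static void handle_test_request(struct fake_pll *pll,
				int test_rate, int test_lanes)
{
	struct fake_pll saved = *pll;	/* save the current PLL config */

	if (retrain(pll, test_rate, test_lanes)) {
		/* ... run the requested video pattern test here ... */
	}

	/* Restore the original PLL state once the test is done. */
	retrain(pll, saved.link_rate, saved.lane_count);
}
```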

Regards
Manasi


> >
> > Regards
> > Manasi
> >  
> >> 
> >> >
> >> > BR,
> >> > Jani.
> >> >
> >> >
> >> >>
> >> >> v3:
> >> >> * Add Debug print if requested mode cannot be supported
> >> >> during modeset (Dhinakaran Pandiyan)
> >> >> v2:
> >> >> * Removed the loop since we use max values of clock
> >> >> and lane count (Dhinakaran Pandiyan)
> >> >>
> >> >> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> >> >> ---
> >> >>  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
> >> >>  1 file changed, 8 insertions(+), 14 deletions(-)
> >> >>
> >> >> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> >> >> index d81c67cb..65b4559 100644
> >> >> --- a/drivers/gpu/drm/i915/intel_dp.c
> >> >> +++ b/drivers/gpu/drm/i915/intel_dp.c
> >> >> @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
> >> >>  	for (; bpp >= 6*3; bpp -= 2*3) {
> >> >>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> >> >>  						   bpp);
> >> >> +		clock = max_clock;
> >> >> +		lane_count = max_lane_count;
> >> >> +		link_clock = common_rates[clock];
> >> >> +		link_avail = intel_dp_max_data_rate(link_clock,
> >> >> +						    lane_count);
> >> >>  
> >> >> -		for (clock = min_clock; clock <= max_clock; clock++) {
> >> >> -			for (lane_count = min_lane_count;
> >> >> -				lane_count <= max_lane_count;
> >> >> -				lane_count <<= 1) {
> >> >> -
> >> >> -				link_clock = common_rates[clock];
> >> >> -				link_avail = intel_dp_max_data_rate(link_clock,
> >> >> -								    lane_count);
> >> >> -
> >> >> -				if (mode_rate <= link_avail) {
> >> >> -					goto found;
> >> >> -				}
> >> >> -			}
> >> >> -		}
> >> >> +		if (mode_rate <= link_avail)
> >> >> +			goto found;
> >> >>  	}
> >> >>  
> >> >> +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
> >> >>  	return false;
> >> >>  
> >> >>  found:
> >> 
> >> -- 
> >> Jani Nikula, Intel Open Source Technology Center
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-27 17:07           ` Jani Nikula
@ 2016-09-29  6:41             ` Manasi Navare
  2016-09-29 11:26               ` Jani Nikula
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-29  6:41 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
> >> > the link training sequence should fall back to the lower link rate
> >> > followed by lower lane count until CR succeeds.
> >> > On CR success, the sequence proceeds with Channel EQ.
> >> > In case of Channel EQ failures, it should fallback to
> >> > lower link rate and lane count and start the CR phase again.
> >> 
> >> This change makes the link training start at the max lane count and max
> >> link rate. This is not ideal, as it wastes the link. And it is not a
> >> spec requirement. "The Link Policy Maker of the upstream device may
> >> choose any link count and link rate as long as they do not exceed the
> >> capabilities of the DP receiver."
> >> 
> >> Our current code starts at the minimum required bandwidth for the mode,
> >> therefore we can't fall back to lower link rate and lane count without
> >> reducing the mode.
> >> 
> >> AFAICT this patch here makes it possible for the link bandwidth to drop
> >> below what is required for the mode. This is unacceptable.
> >> 
> >> BR,
> >> Jani.
> >> 
> >>
> >
> > Thanks Jani for your review comments.
> > Yes, in this change we start at the max link rate and lane count. This
> > change was made according to the design document discussions we had
> > before starting this DP Redesign project. The main reason for starting
> > at the max link rate and max lane count was to ensure proper
> > behavior of DP MST. In case of DP MST, we want to train the link at
> > the maximum supported link rate/lane count based on an early/upfront
> > link training result so that we don't fail when we try to connect a
> > higher resolution monitor as a second monitor. This is a trade-off
> > between wasting link bandwidth/higher power vs. needing to retrain for
> > every monitor that requests a higher BW in case of DP MST.
> 
> We already train at max bandwidth for DP MST, which seems to be the
> sensible thing to do.
> 
> > Actually this is also the reason for enabling upfront link training in
> > the following patch where we train the link much ahead in the modeset
> > sequence to understand the link rate and lane count values at which
> > the link can be successfully trained and then the link training
> > through modeset will always start at the upfront values (maximum
> > supported values of lane count and link rate based on upfront link
> > training).
> 
> I don't see a need to do this for DP SST.
> 
> > As per the CTS, all the test 4.3.1.4 requires that you fall back to
> > the lower link rate after trying to train at the maximum link rate
> > advertised through the DPCD registers.
> 
> That test does not require the source DUT to default to maximum lane
> count or link rate of the sink. The source may freely choose the lane
> count and link rate as long as they don't exceed sink capabilities.
> 
> For the purposes of the test, the test setup can request specific
> parameters to be used, but that does not mean using maximum by
> *default*.
> 
> We currently lack the feature to reduce lane count and link rate. The
> key to understand here is that starting at max and reducing down to the
> sufficient parameters for the mode (which is where we start now) offers
> no real benefit for any use case. What we're lacking is a feature to
> reduce the link parameters *below* what's required by the mode the
> userspace wants. This can only be achieved through cooperation with
> userspace.
> 

We can train at the optimal link rate required for the requested mode as
done in the existing implementation and retrain whenever the link training
test request is sent.
For the test 4.3.1.4 in CTS, it does force a failure in CR and expects the
driver to fall back to an even lower link rate. We do not implement this in the
current driver and so this test fails. Could you elaborate on how this can
be achieved with the cooperation of userspace?
Should we send a uevent to userspace asking it to retry at a lower resolution
after retraining at the lower link rate?
This is pretty much the place where the majority of the compliance tests are failing.
How can we pass compliance with regards to this feature?
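
The bandwidth arithmetic behind "fall back only as far as the mode allows" can
be sketched as below. The units and the 8b/10b factor follow what
intel_dp_link_required()/intel_dp_max_data_rate() compute conceptually, but the
exact i915 rounding is not reproduced here:

```c
#include <assert.h>
#include <stdbool.h>

/* Data rate the mode needs: pixel clock (kHz) times bits per pixel,
 * expressed in bytes (divide by 8). */
static int mode_rate(int pixel_clock_khz, int bpp)
{
	return pixel_clock_khz * bpp / 8;
}

/* Data rate the link provides: link symbol rate (kHz) times lanes,
 * with the 8b/10b encoding overhead (8 data bits per 10 link bits). */
static int link_data_rate(int link_rate_khz, int lane_count)
{
	return link_rate_khz * lane_count * 8 / 10;
}

static bool link_supports_mode(int link_rate_khz, int lane_count,
			       int pixel_clock_khz, int bpp)
{
	return mode_rate(pixel_clock_khz, bpp) <=
	       link_data_rate(link_rate_khz, lane_count);
}
```

For example, 640x480@60 (25175 kHz pixel clock at 24 bpp) needs 75525 units
while 1.62 Gbps x 1 lane provides 129600, which illustrates Jani's point that
the optimal parameters for that mode sit well below 2.7 Gbps x 4 lanes.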

Regards
Manasi 
> > This will not drop the link BW to a number below what is required for
> > the mode because the requested modes are pruned or validated in
> > intel_dp_mode_valid based on the upfront link training results in the
> > following patch. And these values are used here as the starting values
> > of link rate and lane count.
> 
> Each patch must be a worthwhile change on its own. By my reading of this
> patch, we can go under the required bandwidth. You can't justify that by
> saying the follow-up patch fixes it.
> 
> > I almost feel that the upfront link training patch and this patch should be 
> > combined so that instead of starting from the max link rate and lane count it
> > is clear that we are starting from the upfront values.
> 
> I am still reading and gathering more feedback on the upfront link
> training patch. I will get back to you. But the impression I'm currently
> getting is that we can't do this. The upfront link training patch was
> originally written for USB type C. But if DP compliance has priority,
> the order of business should be getting compliance without upfront link
> training. I am also still not convinced upfront link training is
> required for compliance.
> 
> To be continued...
> 
> BR,
> Jani.
> 
> 
> 
> >
> > Regards,
> > Manasi 
> >> >
> >> > v7:
> >> > * Address readability concerns (Mika Kahola)
> >> > v6:
> >> > * Do not split quoted string across line (Mika Kahola)
> >> > v5:
> >> > * Reset the link rate index to the max link rate index
> >> > before lowering the lane count (Jani Nikula)
> >> > * Use the paradigm for loop in intel_dp_link_rate_index
> >> > v4:
> >> > * Fixed the link rate fallback loop (Manasi Navare)
> >> > v3:
> >> > * Fixed some rebase issues (Mika Kahola)
> >> > v2:
> >> > * Add a helper function to return index of requested link rate
> >> > into common_rates array
> >> > * Changed the link rate fallback loop to make use
> >> > of common_rates array (Mika Kahola)
> >> > * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
> >> >
> >> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> >> > ---
> >> >  drivers/gpu/drm/i915/intel_ddi.c              | 113 +++++++++++++++++++++++---
> >> >  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
> >> >  drivers/gpu/drm/i915/intel_dp_link_training.c |  13 ++-
> >> >  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
> >> >  4 files changed, 133 insertions(+), 14 deletions(-)
> >> >
> >> > diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
> >> > index 8065a5f..093038c 100644
> >> > --- a/drivers/gpu/drm/i915/intel_ddi.c
> >> > +++ b/drivers/gpu/drm/i915/intel_ddi.c
> >> > @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
> >> >  	}
> >> >  }
> >> >  
> >> > -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> >> > +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
> >> >  				    int link_rate, uint32_t lane_count,
> >> > -				    struct intel_shared_dpll *pll,
> >> > -				    bool link_mst)
> >> > +				    struct intel_shared_dpll *pll)
> >> >  {
> >> >  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> >> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> >> >  	enum port port = intel_ddi_get_encoder_port(encoder);
> >> >  
> >> >  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
> >> > -				 link_mst);
> >> > -	if (encoder->type == INTEL_OUTPUT_EDP)
> >> > -		intel_edp_panel_on(intel_dp);
> >> > +				 false);
> >> > +
> >> > +	intel_edp_panel_on(intel_dp);
> >> >  
> >> >  	intel_ddi_clk_select(encoder, pll);
> >> >  	intel_prepare_dp_ddi_buffers(encoder);
> >> > @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> >> >  		intel_dp_stop_link_train(intel_dp);
> >> >  }
> >> >  
> >> > +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
> >> > +				    int link_rate, uint32_t lane_count,
> >> > +				    struct intel_shared_dpll *pll,
> >> > +				    bool link_mst)
> >> > +{
> >> > +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
> >> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> >> > +	struct intel_shared_dpll_config tmp_pll_config;
> >> > +
> >> > +	/* Disable the PLL and obtain the PLL for Link Training
> >> > +	 * that starts with highest link rate and lane count.
> >> > +	 */
> >> > +	tmp_pll_config = pll->config;
> >> > +	pll->funcs.disable(dev_priv, pll);
> >> > +	pll->config.crtc_mask = 0;
> >> > +
> >> > +	/* If Link Training fails, send a uevent to generate a hotplug */
> >> > +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
> >> > +		drm_kms_helper_hotplug_event(encoder->base.dev);
> >> > +	pll->config = tmp_pll_config;
> >> > +}
> >> > +
> >> >  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
> >> >  				      bool has_hdmi_sink,
> >> >  				      struct drm_display_mode *adjusted_mode,
> >> > @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
> >> >  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
> >> >  	int type = intel_encoder->type;
> >> >  
> >> > -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
> >> > +	if (type == INTEL_OUTPUT_EDP)
> >> > +		intel_ddi_pre_enable_edp(intel_encoder,
> >> > +					crtc->config->port_clock,
> >> > +					crtc->config->lane_count,
> >> > +					crtc->config->shared_dpll);
> >> > +
> >> > +	if (type == INTEL_OUTPUT_DP)
> >> >  		intel_ddi_pre_enable_dp(intel_encoder,
> >> >  					crtc->config->port_clock,
> >> >  					crtc->config->lane_count,
> >> >  					crtc->config->shared_dpll,
> >> >  					intel_crtc_has_type(crtc->config,
> >> >  							    INTEL_OUTPUT_DP_MST));
> >> > -	}
> >> > -	if (type == INTEL_OUTPUT_HDMI) {
> >> > +
> >> > +	if (type == INTEL_OUTPUT_HDMI)
> >> >  		intel_ddi_pre_enable_hdmi(intel_encoder,
> >> >  					  crtc->config->has_hdmi_sink,
> >> >  					  &crtc->config->base.adjusted_mode,
> >> >  					  crtc->config->shared_dpll);
> >> > -	}
> >> > +
> >> >  }
> >> >  
> >> >  static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
> >> > @@ -2435,6 +2462,72 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
> >> >  	return pll;
> >> >  }
> >> >  
> >> > +bool
> >> > +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> >> > +		     uint8_t max_lane_count, bool link_mst)
> >> > +{
> >> > +	struct intel_connector *connector = intel_dp->attached_connector;
> >> > +	struct intel_encoder *encoder = connector->encoder;
> >> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
> >> > +	struct intel_shared_dpll *pll;
> >> > +	struct intel_shared_dpll_config tmp_pll_config;
> >> > +	int link_rate, max_link_rate_index, link_rate_index;
> >> > +	uint8_t lane_count;
> >> > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> >> > +	bool ret = false;
> >> > +
> >> > +	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
> >> > +						       max_link_rate);
> >> > +	if (max_link_rate_index < 0) {
> >> > +		DRM_ERROR("Invalid Link Rate\n");
> >> > +		return false;
> >> > +	}
> >> > +
> >> > +	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
> >> > +		for (link_rate_index = max_link_rate_index;
> >> > +		     link_rate_index >= 0; link_rate_index--) {
> >> > +			link_rate = common_rates[link_rate_index];
> >> > +			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
> >> > +			if (pll == NULL) {
> >> > +				DRM_ERROR("Could not find DPLL for link training.\n");
> >> > +				return false;
> >> > +			}
> >> > +			tmp_pll_config = pll->config;
> >> > +			pll->funcs.enable(dev_priv, pll);
> >> > +
> >> > +			intel_dp_set_link_params(intel_dp, link_rate,
> >> > +						 lane_count, link_mst);
> >> > +
> >> > +			intel_ddi_clk_select(encoder, pll);
> >> > +			intel_prepare_dp_ddi_buffers(encoder);
> >> > +			intel_ddi_init_dp_buf_reg(encoder);
> >> > +			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
> >> > +			ret = intel_dp_start_link_train(intel_dp);
> >> > +			if (ret)
> >> > +				break;
> >> > +
> >> > +			/* Disable port followed by PLL for next
> >> > +			 *retry/clean up
> >> > +			 */
> >> > +			intel_ddi_post_disable(encoder, NULL, NULL);
> >> > +			pll->funcs.disable(dev_priv, pll);
> >> > +			pll->config = tmp_pll_config;
> >> > +		}
> >> > +		if (ret) {
> >> > +			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
> >> > +				      link_rate, lane_count);
> >> > +			break;
> >> > +		}
> >> > +	}
> >> > +
> >> > +	intel_dp_stop_link_train(intel_dp);
> >> > +
> >> > +	if (!lane_count)
> >> > +		DRM_ERROR("Link Training Failed\n");
> >> > +
> >> > +	return ret;
> >> > +}
> >> > +
> >> >  void intel_ddi_init(struct drm_device *dev, enum port port)
> >> >  {
> >> >  	struct drm_i915_private *dev_priv = to_i915(dev);
> >> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> >> > index 69cee9b..d81c67cb 100644
> >> > --- a/drivers/gpu/drm/i915/intel_dp.c
> >> > +++ b/drivers/gpu/drm/i915/intel_dp.c
> >> > @@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
> >> >  	return rates[len - 1];
> >> >  }
> >> >  
> >> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> >> > +			     int link_rate)
> >> > +{
> >> > +	int common_len;
> >> > +	int index;
> >> > +
> >> > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> >> > +	for (index = 0; index < common_len; index++) {
> >> > +		if (link_rate == common_rates[common_len - index - 1])
> >> > +			return common_len - index - 1;
> >> > +	}
> >> > +
> >> > +	return -1;
> >> > +}
> >> > +
> >> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
> >> >  {
> >> >  	return rate_to_index(rate, intel_dp->sink_rates);
> >> > diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> >> > index c438b02..6eb5eb6 100644
> >> > --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> >> > +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> >> > @@ -313,9 +313,16 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
> >> >  				DP_TRAINING_PATTERN_DISABLE);
> >> >  }
> >> >  
> >> > -void
> >> > +bool
> >> >  intel_dp_start_link_train(struct intel_dp *intel_dp)
> >> >  {
> >> > -	intel_dp_link_training_clock_recovery(intel_dp);
> >> > -	intel_dp_link_training_channel_equalization(intel_dp);
> >> > +	bool ret;
> >> > +
> >> > +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
> >> > +		ret = intel_dp_link_training_channel_equalization(intel_dp);
> >> > +		if (ret)
> >> > +			return true;
> >> > +	}
> >> > +
> >> > +	return false;
> >> >  }
> >> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> >> > index 8fd16ad..08cb571 100644
> >> > --- a/drivers/gpu/drm/i915/intel_drv.h
> >> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> >> > @@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
> >> >  			 struct intel_crtc_state *pipe_config);
> >> >  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
> >> >  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
> >> > +bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> >> > +			  uint8_t max_lane_count, bool link_mst);
> >> >  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
> >> >  						  int clock);
> >> >  unsigned int intel_fb_align_height(struct drm_device *dev,
> >> > @@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
> >> >  void intel_dp_set_link_params(struct intel_dp *intel_dp,
> >> >  			      int link_rate, uint8_t lane_count,
> >> >  			      bool link_mst);
> >> > -void intel_dp_start_link_train(struct intel_dp *intel_dp);
> >> > +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
> >> >  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
> >> >  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
> >> >  void intel_dp_encoder_reset(struct drm_encoder *encoder);
> >> > @@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
> >> >  void intel_dp_mst_suspend(struct drm_device *dev);
> >> >  void intel_dp_mst_resume(struct drm_device *dev);
> >> >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
> >> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
> >> > +			     int link_rate);
> >> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
> >> >  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
> >> >  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
> >> 
> >> -- 
> >> Jani Nikula, Intel Open Source Technology Center
> 
> -- 
> Jani Nikula, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29  6:41             ` Manasi Navare
@ 2016-09-29 11:26               ` Jani Nikula
  2016-09-29 11:44                 ` Chris Wilson
  0 siblings, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-29 11:26 UTC (permalink / raw)
  To: Manasi Navare; +Cc: intel-gfx

On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
>> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
>> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
>> >> > the link training sequence should fall back to the lower link rate
>> >> > followed by lower lane count until CR succeeds.
>> >> > On CR success, the sequence proceeds with Channel EQ.
>> >> > In case of Channel EQ failures, it should fallback to
>> >> > lower link rate and lane count and start the CR phase again.
>> >> 
>> >> This change makes the link training start at the max lane count and max
>> >> link rate. This is not ideal, as it wastes the link. And it is not a
>> >> spec requirement. "The Link Policy Maker of the upstream device may
>> >> choose any link count and link rate as long as they do not exceed the
>> >> capabilities of the DP receiver."
>> >> 
>> >> Our current code starts at the minimum required bandwidth for the mode,
>> >> therefore we can't fall back to lower link rate and lane count without
>> >> reducing the mode.
>> >> 
>> >> AFAICT this patch here makes it possible for the link bandwidth to drop
>> >> below what is required for the mode. This is unacceptable.
>> >> 
>> >> BR,
>> >> Jani.
>> >> 
>> >>
>> >
>> > Thanks Jani for your review comments.
>> > Yes, in this change we start at the max link rate and lane count. This
>> > change was made according to the design document discussions we had
>> > before starting this DP Redesign project. The main reason for starting
>> > at the max link rate and max lane count was to ensure proper
>> > behavior of DP MST. In case of DP MST, we want to train the link at
>> > the maximum supported link rate/lane count based on an early/upfront
>> > link training result so that we don't fail when we try to connect a
>> > higher resolution monitor as a second monitor. This is a trade-off
>> > between wasting link bandwidth/higher power vs. needing to retrain for
>> > every monitor that requests a higher BW in case of DP MST.
>> 
>> We already train at max bandwidth for DP MST, which seems to be the
>> sensible thing to do.
>> 
>> > Actually this is also the reason for enabling upfront link training in
>> > the following patch where we train the link much ahead in the modeset
>> > sequence to understand the link rate and lane count values at which
>> > the link can be successfully trained and then the link training
>> > through modeset will always start at the upfront values (maximum
>> > supported values of lane count and link rate based on upfront link
>> > training).
>> 
>> I don't see a need to do this for DP SST.
>> 
>> > As per the CTS, all the test 4.3.1.4 requires that you fall back to
>> > the lower link rate after trying to train at the maximum link rate
>> > advertised through the DPCD registers.
>> 
>> That test does not require the source DUT to default to maximum lane
>> count or link rate of the sink. The source may freely choose the lane
>> count and link rate as long as they don't exceed sink capabilities.
>> 
>> For the purposes of the test, the test setup can request specific
>> parameters to be used, but that does not mean using maximum by
>> *default*.
>> 
>> We currently lack the feature to reduce lane count and link rate. The
>> key to understand here is that starting at max and reducing down to the
>> sufficient parameters for the mode (which is where we start now) offers
>> no real benefit for any use case. What we're lacking is a feature to
>> reduce the link parameters *below* what's required by the mode the
>> userspace wants. This can only be achieved through cooperation with
>> userspace.
>> 
>
> We can train at the optimal link rate required for the requested mode as
> done in the existing implementation and retrain whenever the link training
> test request is sent. 
> For the test 4.3.1.4 in CTS, it does force a failure in CR and expects the
> driver to fall back to an even lower link rate. We do not implement this in the
> current driver and so this test fails. Could you elaborate on how this can
> be achieved with the cooperation of userspace?
> Should we send a uevent to userspace asking it to retry at a lower resolution
> after retraining at the lower link rate?
> This is pretty much the place where the majority of the compliance tests are failing.
> How can we pass compliance with regards to this feature?

So here's an idea Ville and I came up with. It's not completely thought
out yet, probably has some wrinkles still, but then there are wrinkles
with the upfront link training too (I'll get back to those separately).

If link training fails during modeset (either for real or because it's a
test sink that wants to test failures), we 1) store the link parameters
as failing, 2) send a uevent to userspace, hopefully getting the
userspace to do another get modes and try again, 3) propagate errors from
modeset. When the userspace asks for the modes again, we can prune the
modes that require using the parameters that failed. If the link
training fails again, we repeat the steps. When we detect long hpd, we
drop the list of failing modes, so we can start from scratch (it could
be another display or another cable, etc.). This same approach could be
used with sink issued link status checks when the link has degraded
during operation.

Ville, anything to add to that?

BR,
Jani.




>
> Regards
> Manasi 
>> > This will not drop the link BW to a number below what is required for
>> > the mode because the requested modes are pruned or validated in
>> > intel_dp_mode_valid based on the upfront link training results in the
>> > following patch. And these values are used here as the starting values
>> > of link rate and lane count.
>> 
>> Each patch must be a worthwhile change on its own. By my reading of this
>> patch, we can go under the required bandwidth. You can't justify that by
>> saying the follow-up patch fixes it.
>> 
>> > I almost feel that the upfront link training patch and this patch should be 
>> > combined so that instead of starting from the max link rate and lane count it
>> > is clear that we are starting from the upfront values.
>> 
>> I am still reading and gathering more feedback on the upfront link
>> training patch. I will get back to you. But the impression I'm currently
>> getting is that we can't do this. The upfront link training patch was
>> originally written for USB type C. But if DP compliance has priority,
>> the order of business should be getting compliance without upfront link
>> training. I am also still not convinced upfront link training is
>> required for compliance.
>> 
>> To be continued...
>> 
>> BR,
>> Jani.
>> 
>> 
>> 
>> >
>> > Regards,
>> > Manasi 
>> >> >
>> >> > v7:
>> >> > * Address readability concerns (Mika Kahola)
>> >> > v6:
>> >> > * Do not split quoted string across line (Mika Kahola)
>> >> > v5:
>> >> > * Reset the link rate index to the max link rate index
>> >> > before lowering the lane count (Jani Nikula)
>> >> > * Use the paradigm for loop in intel_dp_link_rate_index
>> >> > v4:
>> >> > * Fixed the link rate fallback loop (Manasi Navare)
>> >> > v3:
>> >> > * Fixed some rebase issues (Mika Kahola)
>> >> > v2:
>> >> > * Add a helper function to return index of requested link rate
>> >> > into common_rates array
>> >> > * Changed the link rate fallback loop to make use
>> >> > of common_rates array (Mika Kahola)
>> >> > * Changed INTEL_INFO to INTEL_GEN (David Weinehall)
>> >> >
>> >> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> >> > ---
>> >> >  drivers/gpu/drm/i915/intel_ddi.c              | 113 +++++++++++++++++++++++---
>> >> >  drivers/gpu/drm/i915/intel_dp.c               |  15 ++++
>> >> >  drivers/gpu/drm/i915/intel_dp_link_training.c |  13 ++-
>> >> >  drivers/gpu/drm/i915/intel_drv.h              |   6 +-
>> >> >  4 files changed, 133 insertions(+), 14 deletions(-)
>> >> >
>> >> > diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
>> >> > index 8065a5f..093038c 100644
>> >> > --- a/drivers/gpu/drm/i915/intel_ddi.c
>> >> > +++ b/drivers/gpu/drm/i915/intel_ddi.c
>> >> > @@ -1637,19 +1637,18 @@ void intel_ddi_clk_select(struct intel_encoder *encoder,
>> >> >  	}
>> >> >  }
>> >> >  
>> >> > -static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>> >> > +static void intel_ddi_pre_enable_edp(struct intel_encoder *encoder,
>> >> >  				    int link_rate, uint32_t lane_count,
>> >> > -				    struct intel_shared_dpll *pll,
>> >> > -				    bool link_mst)
>> >> > +				    struct intel_shared_dpll *pll)
>> >> >  {
>> >> >  	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>> >> >  	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>> >> >  	enum port port = intel_ddi_get_encoder_port(encoder);
>> >> >  
>> >> >  	intel_dp_set_link_params(intel_dp, link_rate, lane_count,
>> >> > -				 link_mst);
>> >> > -	if (encoder->type == INTEL_OUTPUT_EDP)
>> >> > -		intel_edp_panel_on(intel_dp);
>> >> > +				 false);
>> >> > +
>> >> > +	intel_edp_panel_on(intel_dp);
>> >> >  
>> >> >  	intel_ddi_clk_select(encoder, pll);
>> >> >  	intel_prepare_dp_ddi_buffers(encoder);
>> >> > @@ -1660,6 +1659,28 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>> >> >  		intel_dp_stop_link_train(intel_dp);
>> >> >  }
>> >> >  
>> >> > +static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>> >> > +				    int link_rate, uint32_t lane_count,
>> >> > +				    struct intel_shared_dpll *pll,
>> >> > +				    bool link_mst)
>> >> > +{
>> >> > +	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
>> >> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>> >> > +	struct intel_shared_dpll_config tmp_pll_config;
>> >> > +
>> >> > +	/* Disable the PLL and obtain the PLL for Link Training
>> >> > +	 * that starts with highest link rate and lane count.
>> >> > +	 */
>> >> > +	tmp_pll_config = pll->config;
>> >> > +	pll->funcs.disable(dev_priv, pll);
>> >> > +	pll->config.crtc_mask = 0;
>> >> > +
>> >> > +	/* If Link Training fails, send a uevent to generate a hotplug */
>> >> > +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
>> >> > +		drm_kms_helper_hotplug_event(encoder->base.dev);
>> >> > +	pll->config = tmp_pll_config;
>> >> > +}
>> >> > +
>> >> >  static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
>> >> >  				      bool has_hdmi_sink,
>> >> >  				      struct drm_display_mode *adjusted_mode,
>> >> > @@ -1693,20 +1714,26 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder,
>> >> >  	struct intel_crtc *crtc = to_intel_crtc(encoder->crtc);
>> >> >  	int type = intel_encoder->type;
>> >> >  
>> >> > -	if (type == INTEL_OUTPUT_DP || type == INTEL_OUTPUT_EDP) {
>> >> > +	if (type == INTEL_OUTPUT_EDP)
>> >> > +		intel_ddi_pre_enable_edp(intel_encoder,
>> >> > +					crtc->config->port_clock,
>> >> > +					crtc->config->lane_count,
>> >> > +					crtc->config->shared_dpll);
>> >> > +
>> >> > +	if (type == INTEL_OUTPUT_DP)
>> >> >  		intel_ddi_pre_enable_dp(intel_encoder,
>> >> >  					crtc->config->port_clock,
>> >> >  					crtc->config->lane_count,
>> >> >  					crtc->config->shared_dpll,
>> >> >  					intel_crtc_has_type(crtc->config,
>> >> >  							    INTEL_OUTPUT_DP_MST));
>> >> > -	}
>> >> > -	if (type == INTEL_OUTPUT_HDMI) {
>> >> > +
>> >> > +	if (type == INTEL_OUTPUT_HDMI)
>> >> >  		intel_ddi_pre_enable_hdmi(intel_encoder,
>> >> >  					  crtc->config->has_hdmi_sink,
>> >> >  					  &crtc->config->base.adjusted_mode,
>> >> >  					  crtc->config->shared_dpll);
>> >> > -	}
>> >> > +
>> >> >  }
>> >> >  
>> >> >  static void intel_ddi_post_disable(struct intel_encoder *intel_encoder,
>> >> > @@ -2435,6 +2462,72 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
>> >> >  	return pll;
>> >> >  }
>> >> >  
>> >> > +bool
>> >> > +intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>> >> > +		     uint8_t max_lane_count, bool link_mst)
>> >> > +{
>> >> > +	struct intel_connector *connector = intel_dp->attached_connector;
>> >> > +	struct intel_encoder *encoder = connector->encoder;
>> >> > +	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
>> >> > +	struct intel_shared_dpll *pll;
>> >> > +	struct intel_shared_dpll_config tmp_pll_config;
>> >> > +	int link_rate, max_link_rate_index, link_rate_index;
>> >> > +	uint8_t lane_count;
>> >> > +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
>> >> > +	bool ret = false;
>> >> > +
>> >> > +	max_link_rate_index = intel_dp_link_rate_index(intel_dp, common_rates,
>> >> > +						       max_link_rate);
>> >> > +	if (max_link_rate_index < 0) {
>> >> > +		DRM_ERROR("Invalid Link Rate\n");
>> >> > +		return false;
>> >> > +	}
>> >> > +
>> >> > +	for (lane_count = max_lane_count; lane_count > 0; lane_count >>= 1) {
>> >> > +		for (link_rate_index = max_link_rate_index;
>> >> > +		     link_rate_index >= 0; link_rate_index--) {
>> >> > +			link_rate = common_rates[link_rate_index];
>> >> > +			pll = intel_ddi_get_link_dpll(intel_dp, link_rate);
>> >> > +			if (pll == NULL) {
>> >> > +				DRM_ERROR("Could not find DPLL for link training.\n");
>> >> > +				return false;
>> >> > +			}
>> >> > +			tmp_pll_config = pll->config;
>> >> > +			pll->funcs.enable(dev_priv, pll);
>> >> > +
>> >> > +			intel_dp_set_link_params(intel_dp, link_rate,
>> >> > +						 lane_count, link_mst);
>> >> > +
>> >> > +			intel_ddi_clk_select(encoder, pll);
>> >> > +			intel_prepare_dp_ddi_buffers(encoder);
>> >> > +			intel_ddi_init_dp_buf_reg(encoder);
>> >> > +			intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
>> >> > +			ret = intel_dp_start_link_train(intel_dp);
>> >> > +			if (ret)
>> >> > +				break;
>> >> > +
>> >> > +			/* Disable port followed by PLL for next
>> >> > +			 *retry/clean up
>> >> > +			 */
>> >> > +			intel_ddi_post_disable(encoder, NULL, NULL);
>> >> > +			pll->funcs.disable(dev_priv, pll);
>> >> > +			pll->config = tmp_pll_config;
>> >> > +		}
>> >> > +		if (ret) {
>> >> > +			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
>> >> > +				      link_rate, lane_count);
>> >> > +			break;
>> >> > +		}
>> >> > +	}
>> >> > +
>> >> > +	intel_dp_stop_link_train(intel_dp);
>> >> > +
>> >> > +	if (!lane_count)
>> >> > +		DRM_ERROR("Link Training Failed\n");
>> >> > +
>> >> > +	return ret;
>> >> > +}
>> >> > +
>> >> >  void intel_ddi_init(struct drm_device *dev, enum port port)
>> >> >  {
>> >> >  	struct drm_i915_private *dev_priv = to_i915(dev);
>> >> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> >> > index 69cee9b..d81c67cb 100644
>> >> > --- a/drivers/gpu/drm/i915/intel_dp.c
>> >> > +++ b/drivers/gpu/drm/i915/intel_dp.c
>> >> > @@ -1506,6 +1506,21 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
>> >> >  	return rates[len - 1];
>> >> >  }
>> >> >  
>> >> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
>> >> > +			     int link_rate)
>> >> > +{
>> >> > +	int common_len;
>> >> > +	int index;
>> >> > +
>> >> > +	common_len = intel_dp_common_rates(intel_dp, common_rates);
>> >> > +	for (index = 0; index < common_len; index++) {
>> >> > +		if (link_rate == common_rates[common_len - index - 1])
>> >> > +			return common_len - index - 1;
>> >> > +	}
>> >> > +
>> >> > +	return -1;
>> >> > +}
>> >> > +
>> >> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
>> >> >  {
>> >> >  	return rate_to_index(rate, intel_dp->sink_rates);
>> >> > diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> >> > index c438b02..6eb5eb6 100644
>> >> > --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
>> >> > +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> >> > @@ -313,9 +313,16 @@ void intel_dp_stop_link_train(struct intel_dp *intel_dp)
>> >> >  				DP_TRAINING_PATTERN_DISABLE);
>> >> >  }
>> >> >  
>> >> > -void
>> >> > +bool
>> >> >  intel_dp_start_link_train(struct intel_dp *intel_dp)
>> >> >  {
>> >> > -	intel_dp_link_training_clock_recovery(intel_dp);
>> >> > -	intel_dp_link_training_channel_equalization(intel_dp);
>> >> > +	bool ret;
>> >> > +
>> >> > +	if (intel_dp_link_training_clock_recovery(intel_dp)) {
>> >> > +		ret = intel_dp_link_training_channel_equalization(intel_dp);
>> >> > +		if (ret)
>> >> > +			return true;
>> >> > +	}
>> >> > +
>> >> > +	return false;
>> >> >  }
>> >> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> >> > index 8fd16ad..08cb571 100644
>> >> > --- a/drivers/gpu/drm/i915/intel_drv.h
>> >> > +++ b/drivers/gpu/drm/i915/intel_drv.h
>> >> > @@ -1164,6 +1164,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>> >> >  			 struct intel_crtc_state *pipe_config);
>> >> >  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>> >> >  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
>> >> > +bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>> >> > +			  uint8_t max_lane_count, bool link_mst);
>> >> >  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
>> >> >  						  int clock);
>> >> >  unsigned int intel_fb_align_height(struct drm_device *dev,
>> >> > @@ -1385,7 +1387,7 @@ bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>> >> >  void intel_dp_set_link_params(struct intel_dp *intel_dp,
>> >> >  			      int link_rate, uint8_t lane_count,
>> >> >  			      bool link_mst);
>> >> > -void intel_dp_start_link_train(struct intel_dp *intel_dp);
>> >> > +bool intel_dp_start_link_train(struct intel_dp *intel_dp);
>> >> >  void intel_dp_stop_link_train(struct intel_dp *intel_dp);
>> >> >  void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
>> >> >  void intel_dp_encoder_reset(struct drm_encoder *encoder);
>> >> > @@ -1407,6 +1409,8 @@ void intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *co
>> >> >  void intel_dp_mst_suspend(struct drm_device *dev);
>> >> >  void intel_dp_mst_resume(struct drm_device *dev);
>> >> >  int intel_dp_max_link_rate(struct intel_dp *intel_dp);
>> >> > +int intel_dp_link_rate_index(struct intel_dp *intel_dp, int *common_rates,
>> >> > +			     int link_rate);
>> >> >  int intel_dp_rate_select(struct intel_dp *intel_dp, int rate);
>> >> >  void intel_dp_hot_plug(struct intel_encoder *intel_encoder);
>> >> >  void intel_power_sequencer_reset(struct drm_i915_private *dev_priv);
>> >> 
>> >> -- 
>> >> Jani Nikula, Intel Open Source Technology Center
>> 
>> -- 
>> Jani Nikula, Intel Open Source Technology Center

-- 
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29 11:26               ` Jani Nikula
@ 2016-09-29 11:44                 ` Chris Wilson
  2016-09-29 15:10                   ` Ville Syrjälä
  0 siblings, 1 reply; 56+ messages in thread
From: Chris Wilson @ 2016-09-29 11:44 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Thu, Sep 29, 2016 at 02:26:16PM +0300, Jani Nikula wrote:
> On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
> >> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> >> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
> >> >> > the link training sequence should fall back to the lower link rate
> >> >> > followed by lower lane count until CR succeeds.
> >> >> > On CR success, the sequence proceeds with Channel EQ.
> >> >> > In case of Channel EQ failures, it should fallback to
> >> >> > lower link rate and lane count and start the CR phase again.
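The fallback order described in the quoted commit message (drop the link rate first, then the lane count, restarting CR each time) can be modeled in a self-contained way. Here try_train() is a stand-in stub for the real CR + channel EQ sequence, and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Candidate rates in the driver's kHz convention, lowest first */
static const int rates[] = { 162000, 270000, 540000 };
#define NUM_RATES 3

/* Stub sink: pretend training only succeeds at or below these limits */
static int sink_max_rate = 270000;
static int sink_max_lanes = 2;

static bool try_train(int rate, int lanes)
{
	return rate <= sink_max_rate && lanes <= sink_max_lanes;
}

/* Try every rate from the top index down, then halve the lane count
 * and retry from the top rate, as in the patch's intel_ddi_link_train().
 * Returns true and fills *out_rate/*out_lanes on success.
 */
static bool train_with_fallback(int max_rate_index, int max_lanes,
				int *out_rate, int *out_lanes)
{
	int lanes, i;

	for (lanes = max_lanes; lanes > 0; lanes >>= 1) {
		for (i = max_rate_index; i >= 0; i--) {
			if (try_train(rates[i], lanes)) {
				*out_rate = rates[i];
				*out_lanes = lanes;
				return true;
			}
		}
	}
	return false;
}
```

With the stub limits above, starting from HBR2 x4 the loop lands on 270000 kHz x2 rather than failing outright.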
> >> >> 
> >> >> This change makes the link training start at the max lane count and max
> >> >> link rate. This is not ideal, as it wastes the link. And it is not a
> >> >> spec requirement. "The Link Policy Maker of the upstream device may
> >> >> choose any link count and link rate as long as they do not exceed the
> >> >> capabilities of the DP receiver."
> >> >> 
> >> >> Our current code starts at the minimum required bandwidth for the mode,
> >> >> therefore we can't fall back to lower link rate and lane count without
> >> >> reducing the mode.
> >> >> 
> >> >> AFAICT this patch here makes it possible for the link bandwidth to drop
> >> >> below what is required for the mode. This is unacceptable.
> >> >> 
> >> >> BR,
> >> >> Jani.
> >> >> 
> >> >>
> >> >
> >> > Thanks Jani for your review comments.
> >> > Yes, in this change we start at the max link rate and lane count. This
> >> > change was made according to the design document discussions we had
> >> > before starting this DP redesign project. The main reason for starting
> >> > at the max link rate and max lane count was to ensure proper
> >> > behavior of DP MST. In case of DP MST, we want to train the link at
> >> > the maximum supported link rate/lane count based on an early/upfront
> >> > link training result so that we don't fail when we try to connect a
> >> > higher resolution monitor as a second monitor. This is a trade-off
> >> > between wasting the link or higher power vs. needing to retrain for
> >> > every monitor that requests a higher BW in case of DP MST.
> >> 
> >> We already train at max bandwidth for DP MST, which seems to be the
> >> sensible thing to do.
> >> 
> >> > Actually this is also the reason for enabling upfront link training in
> >> > the following patch where we train the link much ahead in the modeset
> >> > sequence to understand the link rate and lane count values at which
> >> > the link can be successfully trained and then the link training
> >> > through modeset will always start at the upfront values (maximum
> >> > supported values of lane count and link rate based on upfront link
> >> > training).
> >> 
> >> I don't see a need to do this for DP SST.
> >> 
> >> > As per the CTS, test 4.3.1.4 requires that you fall back to
> >> > the lower link rate after trying to train at the maximum link rate
> >> > advertised through the DPCD registers.
> >> 
> >> That test does not require the source DUT to default to maximum lane
> >> count or link rate of the sink. The source may freely choose the lane
> >> count and link rate as long as they don't exceed sink capabilities.
> >> 
> >> For the purposes of the test, the test setup can request specific
> >> parameters to be used, but that does not mean using maximum by
> >> *default*.
> >> 
> >> We currently lack the feature to reduce lane count and link rate. The
> >> key to understand here is that starting at max and reducing down to the
> >> sufficient parameters for the mode (which is where we start now) offers
> >> no real benefit for any use case. What we're lacking is a feature to
> >> reduce the link parameters *below* what's required by the mode the
> >> userspace wants. This can only be achieved through cooperation with
> >> userspace.
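For concreteness, the check being discussed, whether given link parameters satisfy a mode, boils down to arithmetic like the following. This is a simplified, unit-normalized sketch of what the driver's intel_dp_link_required()/intel_dp_max_data_rate() helpers compute, not their exact bodies:

```c
#include <assert.h>
#include <stdbool.h>

/* Payload bandwidth the mode needs, in kbit/s:
 * pixel clock in kHz times bits per pixel.
 */
static int mode_rate_kbps(int pixel_clock_khz, int bpp)
{
	return pixel_clock_khz * bpp;
}

/* link_rate uses the driver's convention (270000 == 2.7 Gbit/s per
 * lane, i.e. units of 10 kbit/s).  With 8b/10b channel coding only 8
 * of every 10 wire bits carry payload, so:
 *   rate * 10 * lanes * 8 / 10  ==  rate * lanes * 8  kbit/s.
 */
static int link_rate_kbps(int link_rate, int lane_count)
{
	return link_rate * lane_count * 8;
}

/* A link configuration may only be chosen for a mode if this holds */
static bool mode_fits_link(int pixel_clock_khz, int bpp,
			   int link_rate, int lane_count)
{
	return mode_rate_kbps(pixel_clock_khz, bpp) <=
	       link_rate_kbps(link_rate, lane_count);
}
```

For example, a 1080p mode at 24 bpp (148500 kHz pixel clock) fits HBR x4 but not RBR x1, which is why falling back *below* the mode's requirement has to involve userspace choosing a lesser mode.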
> >> 
> >
> > We can train at the optimal link rate required for the requested mode as
> > done in the existing implementation and retrain whenever the link training
> > test request is sent. 
> > For test 4.3.1.4 in the CTS, it forces a failure in CR and expects the
> > driver to fall back to an even lower link rate. We do not implement this in the
> > current driver and so this test fails. Could you elaborate on how this can
> > be achieved with the cooperation of userspace?
> > Should we send a uevent to the userspace asking to retry at a lower resolution
> > after retraining at the lower link rate?
> > This is pretty much the place where the majority of the compliance tests are failing.
> > How can we pass compliance with regards to this feature?
> 
> So here's an idea Ville and I came up with. It's not completely thought
> out yet, probably has some wrinkles still, but then there are wrinkles
> with the upfront link training too (I'll get back to those separately).
> 
> If link training fails during modeset (either for real or because it's a
> test sink that wants to test failures), we 1) store the link parameters
> as failing, 2) send a uevent to userspace, hopefully getting the
> userspace to do another get modes and try again, 3) propagate errors from
> modeset.

userspace already tries to do a reprobe after a setcrtc fails, to try
and gracefully handle the race between hotplug being in its event queue
and performing setcrtc, i.e. I think the error is enough.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

* Re: [PATCH v18 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms
  2016-09-20 22:04     ` [PATCH v18 " Manasi Navare
  2016-09-27 13:59       ` Jani Nikula
@ 2016-09-29 12:15       ` Jani Nikula
  2016-09-29 16:05         ` Jani Nikula
  1 sibling, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-29 12:15 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Wed, 21 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> To support USB type C alternate DP mode, the display driver needs to
> know the number of lanes required by the DP panel as well as the number
> of lanes that can be supported by the type-C cable. Sometimes the
> type-C cable may limit the bandwidth even if the panel can support
> more lanes. To address these scenarios we need to train the link before
> modeset. This upfront link training caches the values of max link rate
> and max lane count that get used later during modeset. Upfront link
> training does not change any HW state; the link is disabled and the PLL
> values are reset to their previous values after upfront link training, so
> that the subsequent modeset is not aware of these changes.
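The caching described in the commit message amounts to something like the following minimal model. The intel_dp fields are mocked here, and max_lane_count() only mirrors the shape of the intel_dp_max_lane_count() change in the quoted patch:

```c
#include <assert.h>

/* Mock of the bits of intel_dp this sketch needs; 0 means "no upfront
 * link training result cached yet".
 */
struct fake_intel_dp {
	int max_lanes_upfront;
	int max_link_rate_upfront;	/* kHz, driver convention */
};

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Lane count is the min of source and sink caps, further capped by the
 * upfront training result once one has been cached.
 */
static int max_lane_count(const struct fake_intel_dp *dp,
			  int source_max, int sink_max)
{
	int lanes = min_int(source_max, sink_max);

	if (dp->max_lanes_upfront)
		lanes = min_int(lanes, dp->max_lanes_upfront);
	return lanes;
}

/* On disconnect the cached values are cleared, as in the
 * intel_dp_long_pulse() hunk of the quoted patch.
 */
static void clear_upfront(struct fake_intel_dp *dp)
{
	dp->max_lanes_upfront = 0;
	dp->max_link_rate_upfront = 0;
}
```

So once upfront training has found, say, a 2-lane limit imposed by the cable, later modesets never ask for more than 2 lanes even if source and sink both advertise 4.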

Some of the concerns and questions I've gathered about the upfront link
training:

* What if the userspace hasn't disabled the crtc by the time we get the
  hotplug? Upfront link training just goes ahead and messes with that
  state.

* One of the potential benefits of upfront link training is that it
  could make the hotplug faster by doing the link training before the
  userspace even asks for a modeset. However, IIUC, the patch now does
  the upfront link training, disables the link, and then does link
  training again at modeset. Is that right? So it can actually make the
  link training slower? (On the plus side, this avoids the problem of
  leaving the link up and running if there isn't a userspace responding
  to hotplug or userspace never asks for a modeset.)

* Another benefit of upfront link training is that we can prune the
  modes according to what the link can actually do, before
  modeset. However, this still doesn't help the case of link degrading
  during operation. (I am not sure how much we really care about this,
  but it would seem that the approach described in my other mail might
  solve it.)

* Upfront link training is only enabled for DDI platforms, duplicating
  parts of link training and apparently parts of modeset for DDI, and
  diverging the link training code for DDI and non-DDI platforms. With
  the current approach, this makes it impossible to run DP compliance
  tests for non-DDI platforms.

* How does upfront link training interact with atomic and fastboot? /me
  clueless.

* There just is a subjectively scary feeling to the change. The DP link
  training code has been riddled with regressions in the past, and even
  the smallest and innocent seeming changes have caused them. This is a
  hard thing to justify, call it a gut feeling if you will, but history
  has taught me not to dismiss those instincts with a shrug.

All in all, I'd like to decouple current DP compliance efforts from
upfront link training.

Ville, feel free to clarify or add to the list.


BR,
Jani.


>
> This patch is based on prior work done by
> R,Durgadoss <durgadoss.r@intel.com>
>
> Changes since v17:
> * Rebased on the latest nightly
> Changes since v16:
> * Use HAS_DDI macro for enabling this feature (Rodrigo Vivi)
> * Fix some unnecessary removals/changes due to rebase (Rodrigo Vivi)
>
> Changes since v15:
> * Split this patch into two patches - one with functional
> changes to enable upfront and other with moving the existing
> functions around so that they can be used for upfront (Jani Nikula)
> * Cleaned up the commit message
>
> Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
> Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
> ---
>  drivers/gpu/drm/i915/intel_ddi.c              |  21 ++-
>  drivers/gpu/drm/i915/intel_dp.c               | 190 +++++++++++++++++++++++++-
>  drivers/gpu/drm/i915/intel_dp_link_training.c |   1 -
>  drivers/gpu/drm/i915/intel_drv.h              |  14 +-
>  4 files changed, 218 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
> index 093038c..8e52507 100644
> --- a/drivers/gpu/drm/i915/intel_ddi.c
> +++ b/drivers/gpu/drm/i915/intel_ddi.c
> @@ -1676,7 +1676,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>  	pll->config.crtc_mask = 0;
>  
>  	/* If Link Training fails, send a uevent to generate a hotplug */
> -	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
> +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst,
> +				  false))
>  		drm_kms_helper_hotplug_event(encoder->base.dev);
>  	pll->config = tmp_pll_config;
>  }
> @@ -2464,7 +2465,7 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
>  
>  bool
>  intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> -		     uint8_t max_lane_count, bool link_mst)
> +		     uint8_t max_lane_count, bool link_mst, bool is_upfront)
>  {
>  	struct intel_connector *connector = intel_dp->attached_connector;
>  	struct intel_encoder *encoder = connector->encoder;
> @@ -2513,6 +2514,7 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>  			pll->funcs.disable(dev_priv, pll);
>  			pll->config = tmp_pll_config;
>  		}
> +
>  		if (ret) {
>  			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
>  				      link_rate, lane_count);
> @@ -2522,6 +2524,21 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>  
>  	intel_dp_stop_link_train(intel_dp);
>  
> +	if (is_upfront) {
> +		DRM_DEBUG_KMS("Upfront link train %s: link_clock:%d lanes:%d\n",
> +			      ret ? "Passed" : "Failed",
> +			      link_rate, lane_count);
> +		/* Disable port followed by PLL for next retry/clean up */
> +		intel_ddi_post_disable(encoder, NULL, NULL);
> +		pll->funcs.disable(dev_priv, pll);
> +		pll->config = tmp_pll_config;
> +		if (ret) {
> +			/* Save the upfront values */
> +			intel_dp->max_lanes_upfront = lane_count;
> +			intel_dp->max_link_rate_upfront = link_rate;
> +		}
> +	}
> +
>  	if (!lane_count)
>  		DRM_ERROR("Link Training Failed\n");
>  
> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
> index 8d9a8ab..a058d5d 100644
> --- a/drivers/gpu/drm/i915/intel_dp.c
> +++ b/drivers/gpu/drm/i915/intel_dp.c
> @@ -153,12 +153,21 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
>  static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
>  {
>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> -	u8 source_max, sink_max;
> +	u8 temp, source_max, sink_max;
>  
>  	source_max = intel_dig_port->max_lanes;
>  	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
>  
> -	return min(source_max, sink_max);
> +	temp = min(source_max, sink_max);
> +
> +	/*
> +	 * Limit max lanes w.r.t to the max value found
> +	 * using Upfront link training also.
> +	 */
> +	if (intel_dp->max_lanes_upfront)
> +		return min(temp, intel_dp->max_lanes_upfront);
> +	else
> +		return temp;
>  }
>  
>  /*
> @@ -190,6 +199,42 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
>  	return (max_link_clock * max_lanes * 8) / 10;
>  }
>  
> +static int intel_dp_upfront_crtc_disable(struct intel_crtc *crtc,
> +					 struct drm_modeset_acquire_ctx *ctx,
> +					 bool enable)
> +{
> +	int ret;
> +	struct drm_atomic_state *state;
> +	struct intel_crtc_state *crtc_state;
> +	struct drm_device *dev = crtc->base.dev;
> +	enum pipe pipe = crtc->pipe;
> +
> +	state = drm_atomic_state_alloc(dev);
> +	if (!state)
> +		return -ENOMEM;
> +
> +	state->acquire_ctx = ctx;
> +
> +	crtc_state = intel_atomic_get_crtc_state(state, crtc);
> +	if (IS_ERR(crtc_state)) {
> +		ret = PTR_ERR(crtc_state);
> +		drm_atomic_state_free(state);
> +		return ret;
> +	}
> +
> +	DRM_DEBUG_KMS("%sabling crtc %c %s upfront link train\n",
> +			enable ? "En" : "Dis",
> +			pipe_name(pipe),
> +			enable ? "after" : "before");
> +
> +	crtc_state->base.active = enable;
> +	ret = drm_atomic_commit(state);
> +	if (ret)
> +		drm_atomic_state_free(state);
> +
> +	return ret;
> +}
> +
>  static int
>  intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
>  {
> @@ -281,6 +326,17 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>  	int source_len, sink_len;
>  
>  	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
> +
> +	/* Cap sink rates w.r.t upfront values */
> +	if (intel_dp->max_link_rate_upfront) {
> +		int len = sink_len - 1;
> +
> +		while (len > 0 && sink_rates[len] >
> +		       intel_dp->max_link_rate_upfront)
> +			len--;
> +		sink_len = len + 1;
> +	}
> +
>  	source_len = intel_dp_source_rates(intel_dp, &source_rates);
>  
>  	return intersect_rates(source_rates, source_len,
> @@ -288,6 +344,92 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>  			       common_rates);
>  }
>  
> +static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
> +{
> +	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
> +	struct intel_encoder *intel_encoder = &intel_dig_port->base;
> +	struct drm_device *dev = intel_encoder->base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dev);
> +	struct drm_mode_config *config = &dev->mode_config;
> +	struct drm_modeset_acquire_ctx ctx;
> +	struct intel_crtc *intel_crtc;
> +	struct drm_crtc *crtc = NULL;
> +	struct intel_shared_dpll *pll;
> +	struct intel_shared_dpll_config tmp_pll_config;
> +	bool disable_dpll = false;
> +	int ret;
> +	bool done = false, has_mst = false;
> +	uint8_t max_lanes;
> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
> +	int common_len;
> +	enum intel_display_power_domain power_domain;
> +
> +	power_domain = intel_display_port_power_domain(intel_encoder);
> +	intel_display_power_get(dev_priv, power_domain);
> +
> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
> +	max_lanes = intel_dp_max_lane_count(intel_dp);
> +	if (WARN_ON(common_len <= 0))
> +		return true;
> +
> +	drm_modeset_acquire_init(&ctx, 0);
> +retry:
> +	ret = drm_modeset_lock(&config->connection_mutex, &ctx);
> +	if (ret)
> +		goto exit_fail;
> +
> +	if (intel_encoder->base.crtc) {
> +		crtc = intel_encoder->base.crtc;
> +
> +		ret = drm_modeset_lock(&crtc->mutex, &ctx);
> +		if (ret)
> +			goto exit_fail;
> +
> +		ret = drm_modeset_lock(&crtc->primary->mutex, &ctx);
> +		if (ret)
> +			goto exit_fail;
> +
> +		intel_crtc = to_intel_crtc(crtc);
> +		pll = intel_crtc->config->shared_dpll;
> +		disable_dpll = true;
> +		has_mst = intel_crtc_has_type(intel_crtc->config,
> +					      INTEL_OUTPUT_DP_MST);
> +		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
> +		if (ret)
> +			goto exit_fail;
> +	}
> +
> +	mutex_lock(&dev_priv->dpll_lock);
> +	if (disable_dpll) {
> +		/* Clear the PLL config state */
> +		tmp_pll_config = pll->config;
> +		pll->config.crtc_mask = 0;
> +	}
> +
> +	done = intel_dp->upfront_link_train(intel_dp,
> +					    common_rates[common_len-1],
> +					    max_lanes,
> +					    has_mst,
> +					    true);
> +	if (disable_dpll)
> +		pll->config = tmp_pll_config;
> +
> +	mutex_unlock(&dev_priv->dpll_lock);
> +
> +	if (crtc)
> +		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, true);
> +
> +exit_fail:
> +	if (ret == -EDEADLK) {
> +		drm_modeset_backoff(&ctx);
> +		goto retry;
> +	}
> +	drm_modeset_drop_locks(&ctx);
> +	drm_modeset_acquire_fini(&ctx);
> +	intel_display_power_put(dev_priv, power_domain);
> +	return done;
> +}
> +
>  static enum drm_mode_status
>  intel_dp_mode_valid(struct drm_connector *connector,
>  		    struct drm_display_mode *mode)
> @@ -311,6 +453,19 @@ intel_dp_mode_valid(struct drm_connector *connector,
>  		target_clock = fixed_mode->clock;
>  	}
>  
> +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
> +		bool do_upfront_link_train;
> +		/* Do not do upfront link train, if it is a compliance
> +		 * request
> +		 */
> +		do_upfront_link_train = !intel_dp->upfront_done &&
> +			(intel_dp->compliance_test_type !=
> +			 DP_TEST_LINK_TRAINING);
> +
> +		if (do_upfront_link_train)
> +			intel_dp->upfront_done = intel_dp_upfront_link_train(intel_dp);
> +	}
> +
>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>  	max_lanes = intel_dp_max_lane_count(intel_dp);
>  
> @@ -1499,6 +1654,9 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
>  	int rates[DP_MAX_SUPPORTED_RATES] = {};
>  	int len;
>  
> +	if (intel_dp->max_link_rate_upfront)
> +		return intel_dp->max_link_rate_upfront;
> +
>  	len = intel_dp_common_rates(intel_dp, rates);
>  	if (WARN_ON(len <= 0))
>  		return 162000;
> @@ -1644,6 +1802,21 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  	for (; bpp >= 6*3; bpp -= 2*3) {
>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>  						   bpp);
> +
> +		if (!is_edp(intel_dp) && intel_dp->upfront_done) {
> +			clock = max_clock;
> +			lane_count = intel_dp->max_lanes_upfront;
> +			link_clock = intel_dp->max_link_rate_upfront;
> +			link_avail = intel_dp_max_data_rate(link_clock,
> +							    lane_count);
> +			mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
> +							   bpp);
> +			if (mode_rate <= link_avail)
> +				goto found;
> +			else
> +				continue;
> +		}
> +
>  		clock = max_clock;
>  		lane_count = max_lane_count;
>  		link_clock = common_rates[clock];
> @@ -1672,7 +1845,6 @@ found:
>  	}
>  
>  	pipe_config->lane_count = lane_count;
> -
>  	pipe_config->pipe_bpp = bpp;
>  	pipe_config->port_clock = common_rates[clock];
>  
> @@ -4453,8 +4625,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
>  
>  out:
>  	if ((status != connector_status_connected) &&
> -	    (intel_dp->is_mst == false))
> +	    (intel_dp->is_mst == false)) {
>  		intel_dp_unset_edid(intel_dp);
> +		intel_dp->upfront_done = false;
> +		intel_dp->max_lanes_upfront = 0;
> +		intel_dp->max_link_rate_upfront = 0;
> +	}
>  
>  	intel_display_power_put(to_i915(dev), power_domain);
>  	return;
> @@ -5698,6 +5874,12 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>  	if (type == DRM_MODE_CONNECTOR_eDP)
>  		intel_encoder->type = INTEL_OUTPUT_EDP;
>  
> +	/* Initialize upfront link training vfunc for DP */
> +	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
> +		if (HAS_DDI(dev_priv))
> +			intel_dp->upfront_link_train = intel_ddi_link_train;
> +	}
> +
>  	/* eDP only on port B and/or C on vlv/chv */
>  	if (WARN_ON((IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) &&
>  		    is_edp(intel_dp) && port != PORT_B && port != PORT_C))
> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
> index 6eb5eb6..782a919 100644
> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
> @@ -304,7 +304,6 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
>  	intel_dp_set_idle_link_train(intel_dp);
>  
>  	return intel_dp->channel_eq_status;
> -
>  }
>  
>  void intel_dp_stop_link_train(struct intel_dp *intel_dp)
> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> index 0aeb317..fdfc0b6 100644
> --- a/drivers/gpu/drm/i915/intel_drv.h
> +++ b/drivers/gpu/drm/i915/intel_drv.h
> @@ -887,6 +887,12 @@ struct intel_dp {
>  	enum hdmi_force_audio force_audio;
>  	bool limited_color_range;
>  	bool color_range_auto;
> +
> +	/* Upfront link train parameters */
> +	int max_link_rate_upfront;
> +	uint8_t max_lanes_upfront;
> +	bool upfront_done;
> +
>  	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
>  	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
>  	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
> @@ -944,6 +950,11 @@ struct intel_dp {
>  	/* This is called before a link training is started */
>  	void (*prepare_link_retrain)(struct intel_dp *intel_dp);
>  
> +	/* For Upfront link training */
> +	bool (*upfront_link_train)(struct intel_dp *intel_dp, int clock,
> +				   uint8_t lane_count, bool link_mst,
> +				   bool is_upfront);
> +
>  	/* Displayport compliance testing */
>  	unsigned long compliance_test_type;
>  	unsigned long compliance_test_data;
> @@ -1166,7 +1177,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
>  bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
> -			  uint8_t max_lane_count, bool link_mst);
> +			  uint8_t max_lane_count, bool link_mst,
> +			  bool is_upfront);
>  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
>  						  int clock);
>  unsigned int intel_fb_align_height(struct drm_device *dev,

-- 
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config
  2016-09-28 16:45           ` Manasi Navare
@ 2016-09-29 14:52             ` Jani Nikula
  0 siblings, 0 replies; 56+ messages in thread
From: Jani Nikula @ 2016-09-29 14:52 UTC (permalink / raw)
  To: Manasi Navare; +Cc: intel-gfx

On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> On Wed, Sep 28, 2016 at 10:38:37AM +0300, Jani Nikula wrote:
>> On Wed, 28 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > On Mon, Sep 26, 2016 at 04:41:27PM +0300, Jani Nikula wrote:
>> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> >> > While configuring the pipe during modeset, it should use
>> >> > max clock and max lane count and reduce the bpp until
>> >> > the requested mode rate is less than or equal to
>> >> > available link BW.
>> >> > This is required to pass DP Compliance.
>> >> 
>> >> As I wrote in reply to patch 1/6, this is not a DP spec requirement. The
>> >> link policy maker can freely choose the link parameters as long as the
>> >> sink supports them.
>> >> 
>> >> BR,
>> >> Jani.
>> >> 
>> >>
>> >
>> > Thanks for your review feedback.
>> > This change was driven by Video Pattern generation tests in CTS spec. Eg: In
>> > test 4.3.3.1, the test requests 640x480 @ max link rate of 2.7Gbps and 4 lanes.
>> > The test will pass if it sets the link rate to 2.7 and lane count = 4.
>> > But in the existing implementation, this video mode request triggers a modeset
>> > but the compute_config function starts with the lowest link rate and lane count and
>> > trains the link at 1.62 and 4 lanes which does not match the expected values of link
>> > rate = 2.7 and lane count = 4 and the test fails. 
>> 
>> Again, the test does not require us to use the maximum parameters by
>> default. It allows us to use optimal parameters by default, and use the
>> sink issued automated test request to change the link parameters to what
>> the test wants.
>> 
>> Look at the table in CTS 4.3.3.1. There's a test for 640x480 with 1.62
>> Gbps and 1 lane. And then there's a test for 640x480 with 2.7 Gbps and 4
>> lanes. What you're suggesting is to use excessive bandwidth for the mode
>> by default just because the test has been designed to be lax and allow
>> certain parameters at a minimum, instead of requiring optimal
>> parameters.
>> 
>> I do not think this is a change we want to make for DP SST, and it is
>> not a DP spec or compliance requirement.
>> 
>> BR,
>> Jani.
>> 
>>
>
> So if we let the driver choose the optimal link rate and lane count,
> then for 640x480 it will choose 1.62 and 4 lanes. So then the automated
> test request will issue the test request for the maximum link rate,
> let's say 5.4 and 4 lanes. At this point we will have to re-set the
> PLLs and the clocks to train the link at the 5.4 link rate and 4 lane count
> before proceeding to handling the video pattern request. Are you
> recommending doing the entire PLL setup and retraining of the link
> here at the target link rate, which will be the max link rate?

If we go by the idea in [1], I think this will mean storing the
parameters from the test request, and having the userspace do another modeset
(via sending a hotplug uevent), where we'll use the requested
parameters. I'll still need to double check this complies with the CTS,
but my first impression was yes. If the lane/rate do not match what's
expected, the sink will play along until it can do the test request, and
after that it will wait for another write of the lane/rate. Of course,
this will need a userspace that listens to uevents and does modesets,
but this should be the case with your usual desktop environment.

[1] http://mid.mail-archive.com/8737kjlzfr.fsf@intel.com

> What about test 4.3.1.4 that expects the link rate to fall back to
> the lower link rate due to the forced failures in the CR/Channel EQ
> phases? For these cases we do need upfront link training and starting
> the link training at the upfront values falling back to the lower
> values. What do you think?

Same here, we'll store the failing parameters, prune the modes that need
those parameters, and have the userspace try again.

The really big upside of this approach is that we'll get error
propagation from modeset, and the modeset/training sequence is always
the same.

BR,
Jani.




>
> Regards
> Manasi 
>> >
>> > Regards
>> > Manasi
>> >  
>> >> >
>> >> > v3:
>> >> > * Add Debug print if requested mode cannot be supported
>> >> > during modeset (Dhinakaran Pandiyan)
>> >> > v2:
>> >> > * Removed the loop since we use max values of clock
>> >> > and lane count (Dhinakaran Pandiyan)
>> >> >
>> >> > Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> >> > ---
>> >> >  drivers/gpu/drm/i915/intel_dp.c | 22 ++++++++--------------
>> >> >  1 file changed, 8 insertions(+), 14 deletions(-)
>> >> >
>> >> > diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> >> > index d81c67cb..65b4559 100644
>> >> > --- a/drivers/gpu/drm/i915/intel_dp.c
>> >> > +++ b/drivers/gpu/drm/i915/intel_dp.c
>> >> > @@ -1644,23 +1644,17 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>> >> >  	for (; bpp >= 6*3; bpp -= 2*3) {
>> >> >  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>> >> >  						   bpp);
>> >> > +		clock = max_clock;
>> >> > +		lane_count = max_lane_count;
>> >> > +		link_clock = common_rates[clock];
>> >> > +		link_avail = intel_dp_max_data_rate(link_clock,
>> >> > +						    lane_count);
>> >> >  
>> >> > -		for (clock = min_clock; clock <= max_clock; clock++) {
>> >> > -			for (lane_count = min_lane_count;
>> >> > -				lane_count <= max_lane_count;
>> >> > -				lane_count <<= 1) {
>> >> > -
>> >> > -				link_clock = common_rates[clock];
>> >> > -				link_avail = intel_dp_max_data_rate(link_clock,
>> >> > -								    lane_count);
>> >> > -
>> >> > -				if (mode_rate <= link_avail) {
>> >> > -					goto found;
>> >> > -				}
>> >> > -			}
>> >> > -		}
>> >> > +		if (mode_rate <= link_avail)
>> >> > +			goto found;
>> >> >  	}
>> >> >  
>> >> > +	DRM_DEBUG_KMS("Requested Mode Rate not supported\n");
>> >> >  	return false;
>> >> >  
>> >> >  found:
>> >> 
>> >> -- 
>> >> Jani Nikula, Intel Open Source Technology Center
>> 
>> -- 
>> Jani Nikula, Intel Open Source Technology Center

-- 
Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29 11:44                 ` Chris Wilson
@ 2016-09-29 15:10                   ` Ville Syrjälä
  2016-09-29 15:48                     ` Jani Nikula
  0 siblings, 1 reply; 56+ messages in thread
From: Ville Syrjälä @ 2016-09-29 15:10 UTC (permalink / raw)
  To: Chris Wilson, Jani Nikula, Manasi Navare, intel-gfx

On Thu, Sep 29, 2016 at 12:44:19PM +0100, Chris Wilson wrote:
> On Thu, Sep 29, 2016 at 02:26:16PM +0300, Jani Nikula wrote:
> > On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > > On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
> > >> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > >> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> > >> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > >> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
> > >> >> > the link training sequence should fall back to the lower link rate
> > >> >> > followed by lower lane count until CR succeeds.
> > >> >> > On CR success, the sequence proceeds with Channel EQ.
> > >> >> > In case of Channel EQ failures, it should fallback to
> > >> >> > lower link rate and lane count and start the CR phase again.
> > >> >> 
> > >> >> This change makes the link training start at the max lane count and max
> > >> >> link rate. This is not ideal, as it wastes the link. And it is not a
> > >> >> spec requirement. "The Link Policy Maker of the upstream device may
> > >> >> choose any link count and link rate as long as they do not exceed the
> > >> >> capabilities of the DP receiver."
> > >> >> 
> > >> >> Our current code starts at the minimum required bandwidth for the mode,
> > >> >> therefore we can't fall back to lower link rate and lane count without
> > >> >> reducing the mode.
> > >> >> 
> > >> >> AFAICT this patch here makes it possible for the link bandwidth to drop
> > >> >> below what is required for the mode. This is unacceptable.
> > >> >> 
> > >> >> BR,
> > >> >> Jani.
> > >> >> 
> > >> >>
> > >> >
> > >> > Thanks Jani for your review comments.
> > >> > Yes in this change we start at the max link rate and lane count. This
> > >> > change was made according to the design document discussions we had
> > >> > before starting this DP Redesign project. The main reason for starting
> > >> > at the max link rate and max lane count was for ensuring proper
> > >> > behavior of DP MST. In case of DP MST, we want to train the link at
> > >> > the maximum supported link rate/lane count based on an early/upfront
> > >> > link training result so that we don't fail when we try to connect a
> > >> > higher resolution monitor as a second monitor. This is a trade-off
> > >> > between wasting the link or higher power vs. needing to retrain for
> > >> > every monitor that requests a higher BW in case of DP MST.
> > >> 
> > >> We already train at max bandwidth for DP MST, which seems to be the
> > >> sensible thing to do.
> > >> 
> > >> > Actually this is also the reason for enabling upfront link training in
> > >> > the following patch where we train the link much ahead in the modeset
> > >> > sequence to understand the link rate and lane count values at which
> > >> > the link can be successfully trained and then the link training
> > >> > through modeset will always start at the upfront values (maximum
> > >> > supported values of lane count and link rate based on upfront link
> > >> > training).
> > >> 
> > >> I don't see a need to do this for DP SST.
> > >> 
> > >> > As per the CTS, test 4.3.1.4 requires that you fall back to
> > >> > the lower link rate after trying to train at the maximum link rate
> > >> > advertised through the DPCD registers.
> > >> 
> > >> That test does not require the source DUT to default to maximum lane
> > >> count or link rate of the sink. The source may freely choose the lane
> > >> count and link rate as long as they don't exceed sink capabilities.
> > >> 
> > >> For the purposes of the test, the test setup can request specific
> > >> parameters to be used, but that does not mean using maximum by
> > >> *default*.
> > >> 
> > >> We currently lack the feature to reduce lane count and link rate. The
> > >> key to understand here is that starting at max and reducing down to the
> > >> sufficient parameters for the mode (which is where we start now) offers
> > >> no real benefit for any use case. What we're lacking is a feature to
> > >> reduce the link parameters *below* what's required by the mode the
> > >> userspace wants. This can only be achieved through cooperation with
> > >> userspace.
> > >> 
> > >
> > > We can train at the optimal link rate required for the requested mode as
> > > done in the existing implementation and retrain whenever the link training
> > > test request is sent. 
> > > For the test 4.3.1.4 in CTS, it does force a failure in CR and expects the
> > > driver to fall back to even lower link rate. We do not implement this in the
> > > current driver and so this test fails. Could you elaborate on how this can
> > > be achieved through cooperation with userspace?
> > > Should we send a uevent to the userspace asking to retry at a lower resolution
> > > after retraining at the lower link rate?
> > > This is pretty much the place where the majority of the compliance tests are failing.
> > > How can we pass compliance with regards to this feature?
> > 
> > So here's an idea Ville and I came up with. It's not completely thought
> > out yet, probably has some wrinkles still, but then there are wrinkles
> > with the upfront link training too (I'll get back to those separately).
> > 
> > If link training fails during modeset (either for real or because it's a
> > test sink that wants to test failures), we 1) store the link parameters
> > as failing, 2) send a uevent to userspace, hopefully getting the
> > userspace to do another get modes and try again, 3) propagate errors from
> > modeset.
> 
> userspace already tries to do a reprobe after a setcrtc fails, to try
> and gracefully handle the race between hotplug being in its event queue
> and performing setcrtc, i.e. I think the error is enough.

I presume we want the modeset to be async, so by the time we notice the
problem we're no longer in the ioctl.

-- 
Ville Syrjälä
Intel OTC

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29 15:10                   ` Ville Syrjälä
@ 2016-09-29 15:48                     ` Jani Nikula
  2016-09-29 16:05                       ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Jani Nikula @ 2016-09-29 15:48 UTC (permalink / raw)
  To: Ville Syrjälä, Chris Wilson, Manasi Navare, intel-gfx

On Thu, 29 Sep 2016, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> On Thu, Sep 29, 2016 at 12:44:19PM +0100, Chris Wilson wrote:
>> On Thu, Sep 29, 2016 at 02:26:16PM +0300, Jani Nikula wrote:
>> > On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > > On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
>> > >> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > >> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
>> > >> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> > >> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
>> > >> >> > the link training sequence should fall back to the lower link rate
>> > >> >> > followed by lower lane count until CR succeeds.
>> > >> >> > On CR success, the sequence proceeds with Channel EQ.
>> > >> >> > In case of Channel EQ failures, it should fallback to
>> > >> >> > lower link rate and lane count and start the CR phase again.
>> > >> >> 
>> > >> >> This change makes the link training start at the max lane count and max
>> > >> >> link rate. This is not ideal, as it wastes the link. And it is not a
>> > >> >> spec requirement. "The Link Policy Maker of the upstream device may
>> > >> >> choose any link count and link rate as long as they do not exceed the
>> > >> >> capabilities of the DP receiver."
>> > >> >> 
>> > >> >> Our current code starts at the minimum required bandwidth for the mode,
>> > >> >> therefore we can't fall back to lower link rate and lane count without
>> > >> >> reducing the mode.
>> > >> >> 
>> > >> >> AFAICT this patch here makes it possible for the link bandwidth to drop
>> > >> >> below what is required for the mode. This is unacceptable.
>> > >> >> 
>> > >> >> BR,
>> > >> >> Jani.
>> > >> >> 
>> > >> >>
>> > >> >
>> > >> > Thanks Jani for your review comments.
>> > >> > Yes in this change we start at the max link rate and lane count. This
>> > >> > change was made according to the design document discussions we had
>> > >> > before starting this DP Redesign project. The main reason for starting
>> > >> > at the max link rate and max lane count was for ensuring proper
>> > >> > behavior of DP MST. In case of DP MST, we want to train the link at
>> > >> > the maximum supported link rate/lane count based on an early/upfront
>> > >> > link training result so that we don't fail when we try to connect a
>> > >> > higher resolution monitor as a second monitor. This is a trade-off
>> > >> > between wasting the link or higher power vs. needing to retrain for
>> > >> > every monitor that requests a higher BW in case of DP MST.
>> > >> 
>> > >> We already train at max bandwidth for DP MST, which seems to be the
>> > >> sensible thing to do.
>> > >> 
>> > >> > Actually this is also the reason for enabling upfront link training in
>> > >> > the following patch where we train the link much ahead in the modeset
>> > >> > sequence to understand the link rate and lane count values at which
>> > >> > the link can be successfully trained and then the link training
>> > >> > through modeset will always start at the upfront values (maximum
>> > >> > supported values of lane count and link rate based on upfront link
>> > >> > training).
>> > >> 
>> > >> I don't see a need to do this for DP SST.
>> > >> 
>> > >> > As per the CTS, test 4.3.1.4 requires that you fall back to
>> > >> > the lower link rate after trying to train at the maximum link rate
>> > >> > advertised through the DPCD registers.
>> > >> 
>> > >> That test does not require the source DUT to default to maximum lane
>> > >> count or link rate of the sink. The source may freely choose the lane
>> > >> count and link rate as long as they don't exceed sink capabilities.
>> > >> 
>> > >> For the purposes of the test, the test setup can request specific
>> > >> parameters to be used, but that does not mean using maximum by
>> > >> *default*.
>> > >> 
>> > >> We currently lack the feature to reduce lane count and link rate. The
>> > >> key to understand here is that starting at max and reducing down to the
>> > >> sufficient parameters for the mode (which is where we start now) offers
>> > >> no real benefit for any use case. What we're lacking is a feature to
>> > >> reduce the link parameters *below* what's required by the mode the
>> > >> userspace wants. This can only be achieved through cooperation with
>> > >> userspace.
>> > >> 
>> > >
>> > > We can train at the optimal link rate required for the requested mode as
>> > > done in the existing implementation and retrain whenever the link training
>> > > test request is sent. 
>> > > For the test 4.3.1.4 in CTS, it does force a failure in CR and expects the
>> > > driver to fall back to even lower link rate. We do not implement this in the
>> > > current driver and so this test fails. Could you elaborate on how this can
>> > > be achieved through cooperation with userspace?
>> > > Should we send a uevent to the userspace asking to retry at a lower resolution
>> > > after retraining at the lower link rate?
>> > > This is pretty much the place where the majority of the compliance tests are failing.
>> > > How can we pass compliance with regards to this feature?
>> > 
>> > So here's an idea Ville and I came up with. It's not completely thought
>> > out yet, probably has some wrinkles still, but then there are wrinkles
>> > with the upfront link training too (I'll get back to those separately).
>> > 
>> > If link training fails during modeset (either for real or because it's a
>> > test sink that wants to test failures), we 1) store the link parameters
>> > as failing, 2) send a uevent to userspace, hopefully getting the
>> > userspace to do another get modes and try again, 3) propagate errors from
>> > modeset.
>> 
>> userspace already tries to do a reprobe after a setcrtc fails, to try
>> and gracefully handle the race between hotplug being in its event queue
>> and performing setcrtc, i.e. I think the error is enough.
>
> I presume we want the modeset to be async, so by the time we notice the
> problem we're no longer in the ioctl.

IOW, we'll just need to send the hotplug uevent anyway.

BR,
Jani.

-- 
Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29 15:48                     ` Jani Nikula
@ 2016-09-29 16:05                       ` Manasi Navare
  2016-09-29 23:17                         ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-29 16:05 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Thu, Sep 29, 2016 at 06:48:43PM +0300, Jani Nikula wrote:
> On Thu, 29 Sep 2016, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> > On Thu, Sep 29, 2016 at 12:44:19PM +0100, Chris Wilson wrote:
> >> On Thu, Sep 29, 2016 at 02:26:16PM +0300, Jani Nikula wrote:
> >> > On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> > > On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
> >> > >> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> > >> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> >> > >> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> >> > >> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
> >> > >> >> > the link training sequence should fall back to the lower link rate
> >> > >> >> > followed by lower lane count until CR succeeds.
> >> > >> >> > On CR success, the sequence proceeds with Channel EQ.
> >> > >> >> > In case of Channel EQ failures, it should fallback to
> >> > >> >> > lower link rate and lane count and start the CR phase again.
> >> > >> >> 
> >> > >> >> This change makes the link training start at the max lane count and max
> >> > >> >> link rate. This is not ideal, as it wastes the link. And it is not a
> >> > >> >> spec requirement. "The Link Policy Maker of the upstream device may
> >> > >> >> choose any link count and link rate as long as they do not exceed the
> >> > >> >> capabilities of the DP receiver."
> >> > >> >> 
> >> > >> >> Our current code starts at the minimum required bandwidth for the mode,
> >> > >> >> therefore we can't fall back to lower link rate and lane count without
> >> > >> >> reducing the mode.
> >> > >> >> 
> >> > >> >> AFAICT this patch here makes it possible for the link bandwidth to drop
> >> > >> >> below what is required for the mode. This is unacceptable.
> >> > >> >> 
> >> > >> >> BR,
> >> > >> >> Jani.
> >> > >> >> 
> >> > >> >>
> >> > >> >
> >> > >> > Thanks Jani for your review comments.
> >> > >> > Yes in this change we start at the max link rate and lane count. This
> >> > >> > change was made according to the design document discussions we had
> >> > >> > before starting this DP Redesign project. The main reason for starting
> >> > >> > at the max link rate and max lane count was for ensuring proper
> >> > >> > behavior of DP MST. In case of DP MST, we want to train the link at
> >> > >> > the maximum supported link rate/lane count based on an early/upfront
> >> > >> > link training result so that we don't fail when we try to connect a
> >> > >> > higher resolution monitor as a second monitor. This is a trade-off
> >> > >> > between wasting the link or higher power vs. needing to retrain for
> >> > >> > every monitor that requests a higher BW in case of DP MST.
> >> > >> 
> >> > >> We already train at max bandwidth for DP MST, which seems to be the
> >> > >> sensible thing to do.
> >> > >> 
> >> > >> > Actually this is also the reason for enabling upfront link training in
> >> > >> > the following patch where we train the link much ahead in the modeset
> >> > >> > sequence to understand the link rate and lane count values at which
> >> > >> > the link can be successfully trained and then the link training
> >> > >> > through modeset will always start at the upfront values (maximum
> >> > >> > supported values of lane count and link rate based on upfront link
> >> > >> > training).
> >> > >> 
> >> > >> I don't see a need to do this for DP SST.
> >> > >> 
> >> > >> > As per the CTS, test 4.3.1.4 requires that you fall back to
> >> > >> > the lower link rate after trying to train at the maximum link rate
> >> > >> > advertised through the DPCD registers.
> >> > >> 
> >> > >> That test does not require the source DUT to default to maximum lane
> >> > >> count or link rate of the sink. The source may freely choose the lane
> >> > >> count and link rate as long as they don't exceed sink capabilities.
> >> > >> 
> >> > >> For the purposes of the test, the test setup can request specific
> >> > >> parameters to be used, but that does not mean using maximum by
> >> > >> *default*.
> >> > >> 
> >> > >> We currently lack the feature to reduce lane count and link rate. The
> >> > >> key to understand here is that starting at max and reducing down to the
> >> > >> sufficient parameters for the mode (which is where we start now) offers
> >> > >> no real benefit for any use case. What we're lacking is a feature to
> >> > >> reduce the link parameters *below* what's required by the mode the
> >> > >> userspace wants. This can only be achieved through cooperation with
> >> > >> userspace.
> >> > >> 
> >> > >
> >> > > We can train at the optimal link rate required for the requested mode as
> >> > > done in the existing implementation and retrain whenever the link training
> >> > > test request is sent. 
> >> > > For the test 4.3.1.4 in CTS, it does force a failure in CR and expects the
> >> > > driver to fall back to even lower link rate. We do not implement this in the
> >> > > current driver and so this test fails. Could you elaborate on how this can
> >> > > be achieved through cooperation with userspace?
> >> > > Should we send a uevent to the userspace asking to retry at a lower resolution
> >> > > after retraining at the lower link rate?
> >> > > This is pretty much the place where the majority of the compliance tests are failing.
> >> > > How can we pass compliance with regards to this feature?
> >> > 
> >> > So here's an idea Ville and I came up with. It's not completely thought
> >> > out yet, probably has some wrinkles still, but then there are wrinkles
> >> > with the upfront link training too (I'll get back to those separately).
> >> > 
> >> > If link training fails during modeset (either for real or because it's a
> >> > test sink that wants to test failures), we 1) store the link parameters
> >> > as failing, 2) send a uevent to userspace, hopefully getting the
> >> > userspace to do another get modes and try again, 3) propagate errors from
> >> > modeset.
> >> 
> >> userspace already tries to do a reprobe after a setcrtc fails, to try
> >> and gracefully handle the race between hotplug being in its event queue
> >> and performing setcrtc, i.e. I think the error is enough.
> >
> > I presume we want the modeset to be async, so by the time we notice the
> > problem we're no longer in the ioctl.
> 
> IOW, we'll just need to send the hotplug uevent anyway.
> 
> BR,
> Jani.
>

I am going to try to implement the code where, if we fail link training at a
particular link rate, I send a uevent to userspace, saving off the values at
which the link training failed so that these values can be used in the next
modeset attempt to prune the modes accordingly; link training in that attempt
will then start at the lower link rate. The hope is that this will make
compliance test 4.3.1.4 happy.
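For clarity, the fallback order I have in mind (drop the link rate first, and
only then the lane count, per the DP spec wording in the cover letter) can be
sketched as a small user-space simulation. Everything here is illustrative:
the function names are made up, and the forced-failure "sink" just mimics what
a CTS test like 4.3.1.4 does by rejecting the top rate:

```c
/* DP link rates in kHz, lowest to highest (RBR, HBR, HBR2). */
static const int dp_rates[] = { 162000, 270000, 540000 };
#define NUM_RATES 3

/*
 * Hypothetical stand-in for one link-training attempt; a test sink
 * forcing a CR failure would return 0 for some parameters.
 */
typedef int (*train_fn)(int link_rate, int lane_count);

/*
 * Walk the fallback sequence: start at the maximum link rate and lane
 * count, drop the link rate first, and once the lowest rate has failed,
 * halve the lane count and start over from the top rate.
 * Returns 1 and fills *out_rate/*out_lanes on success, 0 on failure.
 */
int train_with_fallback(int max_rate, int max_lanes, train_fn train,
			int *out_rate, int *out_lanes)
{
	int lanes, i;

	for (lanes = max_lanes; lanes >= 1; lanes >>= 1) {
		for (i = NUM_RATES - 1; i >= 0; i--) {
			if (dp_rates[i] > max_rate)
				continue;
			if (train(dp_rates[i], lanes)) {
				*out_rate = dp_rates[i];
				*out_lanes = lanes;
				return 1;
			}
		}
	}
	return 0;
}

/* Example sink that, like CTS test 4.3.1.4, refuses to train at HBR2. */
int sink_rejects_hbr2(int link_rate, int lane_count)
{
	(void)lane_count;
	return link_rate < 540000;
}
```

With a sink that refuses HBR2, train_with_fallback(540000, 4, ...) settles on
270000 kHz at 4 lanes before ever dropping the lane count.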

Regards
Manasi
> -- 
> Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH v18 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms
  2016-09-29 12:15       ` Jani Nikula
@ 2016-09-29 16:05         ` Jani Nikula
  0 siblings, 0 replies; 56+ messages in thread
From: Jani Nikula @ 2016-09-29 16:05 UTC (permalink / raw)
  To: Manasi Navare, intel-gfx

On Thu, 29 Sep 2016, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> On Wed, 21 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
>> To support USB type C alternate DP mode, the display driver needs to
>> know the number of lanes required by the DP panel as well as number
>> of lanes that can be supported by the type-C cable. Sometimes, the
>> type-C cable may limit the bandwidth even if Panel can support
>> more lanes. To address these scenarios we need to train the link before
>> modeset. This upfront link training caches the values of max link rate
>> and max lane count that get used later during modeset. Upfront link
>> training does not change any HW state, the link is disabled and PLL
>> values are reset to previous values after upfront link training so
>> that subsequent modeset is not aware of these changes.
>
> Some of the concerns and questions I've gathered about the upfront link
> training:
>
> * What if the userspace hasn't disabled the crtc by the time we get the
>   hotplug? Upfront link training just goes ahead and messes with that
>   state.
>
> * One of the potential benefits of upfront link training is that it
>   could make the hotplug faster by doing the link training before the
>   userspace even asks for a modeset. However, IIUC, the patch now does
>   the upfront link training, disables the link, and then does link
>   training again at modeset. Is that right? So it can actually make the
>   link training slower? (On the plus side, this avoids the problem of
>   leaving the link up and running if there isn't a userspace responding
>   to hotplug or userspace never asks for a modeset.)
>
> * Another benefit of upfront link training is that we can prune the
>   modes according to what the link can actually do, before
>   modeset. However, this still doesn't help the case of link degrading
>   during operation. (I am not sure how much we really care about this,
>   but it would seem that the approach described in my other mail might
>   solve it.)
>
> * Upfront link training is only enabled for DDI platforms, duplicating
>   parts of link training and apparently parts of modeset for DDI, and
>   diverging the link training code for DDI and non-DDI platforms. With
>   the current approach, this makes it impossible to run DP compliance
>   tests for non-DDI platforms.
>
> * How does upfront link training interact with atomic and fastboot? /me
>   clueless.
>
> * There just is a subjectively scary feeling to the change. The DP link
>   training code has been riddled with regressions in the past, and even
>   the smallest and innocent seeming changes have caused them. This is a
>   hard thing to justify, call it a gut feeling if you will, but history
>   has taught me not to dismiss those instincts with a shrug.
>
> All in all, I'd like to decouple current DP compliance efforts from
> upfront link training.

Another thing that I failed to mention: eventually we'll want the DP
compliance stuff to be something that other drivers can have too. We'll
want to push more DP code to drm core DP helpers, and we'll want to
share the burden of keeping DP compliant. As the upfront link training
doesn't seem to be an (easy) option even for our non-DDI hardware, it
may be a difficult thing for other hardware as well. Maybe. Better stick
with something that's probably an easier sell for drm?

BR,
Jani.


>
> Ville, feel free to clarify or add to the list.
>
>
> BR,
> Jani.
>
>
>>
>> This patch is based on prior work done by
>> R,Durgadoss <durgadoss.r@intel.com>
>>
>> Changes since v17:
>> * Rebased on the latest nightly
>> Changes since v16:
>> * Use HAS_DDI macro for enabling this feature (Rodrigo Vivi)
>> * Fix some unnecessary removals/changes due to rebase (Rodrigo Vivi)
>>
>> Changes since v15:
>> * Split this patch into two patches - one with functional
>> changes to enable upfront and other with moving the existing
>> functions around so that they can be used for upfront (Jani Nikula)
>> * Cleaned up the commit message
>>
>> Signed-off-by: Durgadoss R <durgadoss.r@intel.com>
>> Signed-off-by: Jim Bride <jim.bride@linux.intel.com>
>> Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
>> ---
>>  drivers/gpu/drm/i915/intel_ddi.c              |  21 ++-
>>  drivers/gpu/drm/i915/intel_dp.c               | 190 +++++++++++++++++++++++++-
>>  drivers/gpu/drm/i915/intel_dp_link_training.c |   1 -
>>  drivers/gpu/drm/i915/intel_drv.h              |  14 +-
>>  4 files changed, 218 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
>> index 093038c..8e52507 100644
>> --- a/drivers/gpu/drm/i915/intel_ddi.c
>> +++ b/drivers/gpu/drm/i915/intel_ddi.c
>> @@ -1676,7 +1676,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
>>  	pll->config.crtc_mask = 0;
>>  
>>  	/* If Link Training fails, send a uevent to generate a hotplug */
>> -	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst))
>> +	if (!intel_ddi_link_train(intel_dp, link_rate, lane_count, link_mst,
>> +				  false))
>>  		drm_kms_helper_hotplug_event(encoder->base.dev);
>>  	pll->config = tmp_pll_config;
>>  }
>> @@ -2464,7 +2465,7 @@ intel_ddi_get_link_dpll(struct intel_dp *intel_dp, int clock)
>>  
>>  bool
>>  intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>> -		     uint8_t max_lane_count, bool link_mst)
>> +		     uint8_t max_lane_count, bool link_mst, bool is_upfront)
>>  {
>>  	struct intel_connector *connector = intel_dp->attached_connector;
>>  	struct intel_encoder *encoder = connector->encoder;
>> @@ -2513,6 +2514,7 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>>  			pll->funcs.disable(dev_priv, pll);
>>  			pll->config = tmp_pll_config;
>>  		}
>> +
>>  		if (ret) {
>>  			DRM_DEBUG_KMS("Link Training successful at link rate: %d lane: %d\n",
>>  				      link_rate, lane_count);
>> @@ -2522,6 +2524,21 @@ intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>>  
>>  	intel_dp_stop_link_train(intel_dp);
>>  
>> +	if (is_upfront) {
>> +		DRM_DEBUG_KMS("Upfront link train %s: link_clock:%d lanes:%d\n",
>> +			      ret ? "Passed" : "Failed",
>> +			      link_rate, lane_count);
>> +		/* Disable port followed by PLL for next retry/clean up */
>> +		intel_ddi_post_disable(encoder, NULL, NULL);
>> +		pll->funcs.disable(dev_priv, pll);
>> +		pll->config = tmp_pll_config;
>> +		if (ret) {
>> +			/* Save the upfront values */
>> +			intel_dp->max_lanes_upfront = lane_count;
>> +			intel_dp->max_link_rate_upfront = link_rate;
>> +		}
>> +	}
>> +
>>  	if (!lane_count)
>>  		DRM_ERROR("Link Training Failed\n");
>>  
>> diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
>> index 8d9a8ab..a058d5d 100644
>> --- a/drivers/gpu/drm/i915/intel_dp.c
>> +++ b/drivers/gpu/drm/i915/intel_dp.c
>> @@ -153,12 +153,21 @@ intel_dp_max_link_bw(struct intel_dp  *intel_dp)
>>  static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
>>  {
>>  	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
>> -	u8 source_max, sink_max;
>> +	u8 temp, source_max, sink_max;
>>  
>>  	source_max = intel_dig_port->max_lanes;
>>  	sink_max = drm_dp_max_lane_count(intel_dp->dpcd);
>>  
>> -	return min(source_max, sink_max);
>> +	temp = min(source_max, sink_max);
>> +
>> +	/*
>> +	 * Limit max lanes w.r.t to the max value found
>> +	 * using Upfront link training also.
>> +	 */
>> +	if (intel_dp->max_lanes_upfront)
>> +		return min(temp, intel_dp->max_lanes_upfront);
>> +	else
>> +		return temp;
>>  }
>>  
>>  /*
>> @@ -190,6 +199,42 @@ intel_dp_max_data_rate(int max_link_clock, int max_lanes)
>>  	return (max_link_clock * max_lanes * 8) / 10;
>>  }
>>  
>> +static int intel_dp_upfront_crtc_disable(struct intel_crtc *crtc,
>> +					 struct drm_modeset_acquire_ctx *ctx,
>> +					 bool enable)
>> +{
>> +	int ret;
>> +	struct drm_atomic_state *state;
>> +	struct intel_crtc_state *crtc_state;
>> +	struct drm_device *dev = crtc->base.dev;
>> +	enum pipe pipe = crtc->pipe;
>> +
>> +	state = drm_atomic_state_alloc(dev);
>> +	if (!state)
>> +		return -ENOMEM;
>> +
>> +	state->acquire_ctx = ctx;
>> +
>> +	crtc_state = intel_atomic_get_crtc_state(state, crtc);
>> +	if (IS_ERR(crtc_state)) {
>> +		ret = PTR_ERR(crtc_state);
>> +		drm_atomic_state_free(state);
>> +		return ret;
>> +	}
>> +
>> +	DRM_DEBUG_KMS("%sabling crtc %c %s upfront link train\n",
>> +			enable ? "En" : "Dis",
>> +			pipe_name(pipe),
>> +			enable ? "after" : "before");
>> +
>> +	crtc_state->base.active = enable;
>> +	ret = drm_atomic_commit(state);
>> +	if (ret)
>> +		drm_atomic_state_free(state);
>> +
>> +	return ret;
>> +}
>> +
>>  static int
>>  intel_dp_downstream_max_dotclock(struct intel_dp *intel_dp)
>>  {
>> @@ -281,6 +326,17 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>>  	int source_len, sink_len;
>>  
>>  	sink_len = intel_dp_sink_rates(intel_dp, &sink_rates);
>> +
>> +	/* Cap sink rates w.r.t upfront values */
>> +	if (intel_dp->max_link_rate_upfront) {
>> +		int len = sink_len - 1;
>> +
>> +		while (len > 0 && sink_rates[len] >
>> +		       intel_dp->max_link_rate_upfront)
>> +			len--;
>> +		sink_len = len + 1;
>> +	}
>> +
>>  	source_len = intel_dp_source_rates(intel_dp, &source_rates);
>>  
>>  	return intersect_rates(source_rates, source_len,
>> @@ -288,6 +344,92 @@ static int intel_dp_common_rates(struct intel_dp *intel_dp,
>>  			       common_rates);
>>  }
>>  
>> +static bool intel_dp_upfront_link_train(struct intel_dp *intel_dp)
>> +{
>> +	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
>> +	struct intel_encoder *intel_encoder = &intel_dig_port->base;
>> +	struct drm_device *dev = intel_encoder->base.dev;
>> +	struct drm_i915_private *dev_priv = to_i915(dev);
>> +	struct drm_mode_config *config = &dev->mode_config;
>> +	struct drm_modeset_acquire_ctx ctx;
>> +	struct intel_crtc *intel_crtc;
>> +	struct drm_crtc *crtc = NULL;
>> +	struct intel_shared_dpll *pll;
>> +	struct intel_shared_dpll_config tmp_pll_config;
>> +	bool disable_dpll = false;
>> +	int ret;
>> +	bool done = false, has_mst = false;
>> +	uint8_t max_lanes;
>> +	int common_rates[DP_MAX_SUPPORTED_RATES] = {};
>> +	int common_len;
>> +	enum intel_display_power_domain power_domain;
>> +
>> +	power_domain = intel_display_port_power_domain(intel_encoder);
>> +	intel_display_power_get(dev_priv, power_domain);
>> +
>> +	common_len = intel_dp_common_rates(intel_dp, common_rates);
>> +	max_lanes = intel_dp_max_lane_count(intel_dp);
>> +	if (WARN_ON(common_len <= 0))
>> +		return true;
>> +
>> +	drm_modeset_acquire_init(&ctx, 0);
>> +retry:
>> +	ret = drm_modeset_lock(&config->connection_mutex, &ctx);
>> +	if (ret)
>> +		goto exit_fail;
>> +
>> +	if (intel_encoder->base.crtc) {
>> +		crtc = intel_encoder->base.crtc;
>> +
>> +		ret = drm_modeset_lock(&crtc->mutex, &ctx);
>> +		if (ret)
>> +			goto exit_fail;
>> +
>> +		ret = drm_modeset_lock(&crtc->primary->mutex, &ctx);
>> +		if (ret)
>> +			goto exit_fail;
>> +
>> +		intel_crtc = to_intel_crtc(crtc);
>> +		pll = intel_crtc->config->shared_dpll;
>> +		disable_dpll = true;
>> +		has_mst = intel_crtc_has_type(intel_crtc->config,
>> +					      INTEL_OUTPUT_DP_MST);
>> +		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
>> +		if (ret)
>> +			goto exit_fail;
>> +	}
>> +
>> +	mutex_lock(&dev_priv->dpll_lock);
>> +	if (disable_dpll) {
>> +		/* Clear the PLL config state */
>> +		tmp_pll_config = pll->config;
>> +		pll->config.crtc_mask = 0;
>> +	}
>> +
>> +	done = intel_dp->upfront_link_train(intel_dp,
>> +					    common_rates[common_len-1],
>> +					    max_lanes,
>> +					    has_mst,
>> +					    true);
>> +	if (disable_dpll)
>> +		pll->config = tmp_pll_config;
>> +
>> +	mutex_unlock(&dev_priv->dpll_lock);
>> +
>> +	if (crtc)
>> +		ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, true);
>> +
>> +exit_fail:
>> +	if (ret == -EDEADLK) {
>> +		drm_modeset_backoff(&ctx);
>> +		goto retry;
>> +	}
>> +	drm_modeset_drop_locks(&ctx);
>> +	drm_modeset_acquire_fini(&ctx);
>> +	intel_display_power_put(dev_priv, power_domain);
>> +	return done;
>> +}
>> +
>>  static enum drm_mode_status
>>  intel_dp_mode_valid(struct drm_connector *connector,
>>  		    struct drm_display_mode *mode)
>> @@ -311,6 +453,19 @@ intel_dp_mode_valid(struct drm_connector *connector,
>>  		target_clock = fixed_mode->clock;
>>  	}
>>  
>> +	if (intel_dp->upfront_link_train && !intel_dp->upfront_done) {
>> +		bool do_upfront_link_train;
>> +		/* Do not do upfront link train, if it is a compliance
>> +		 * request
>> +		 */
>> +		do_upfront_link_train = !intel_dp->upfront_done &&
>> +			(intel_dp->compliance_test_type !=
>> +			 DP_TEST_LINK_TRAINING);
>> +
>> +		if (do_upfront_link_train)
>> +			intel_dp->upfront_done = intel_dp_upfront_link_train(intel_dp);
>> +	}
>> +
>>  	max_link_clock = intel_dp_max_link_rate(intel_dp);
>>  	max_lanes = intel_dp_max_lane_count(intel_dp);
>>  
>> @@ -1499,6 +1654,9 @@ intel_dp_max_link_rate(struct intel_dp *intel_dp)
>>  	int rates[DP_MAX_SUPPORTED_RATES] = {};
>>  	int len;
>>  
>> +	if (intel_dp->max_link_rate_upfront)
>> +		return intel_dp->max_link_rate_upfront;
>> +
>>  	len = intel_dp_common_rates(intel_dp, rates);
>>  	if (WARN_ON(len <= 0))
>>  		return 162000;
>> @@ -1644,6 +1802,21 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>>  	for (; bpp >= 6*3; bpp -= 2*3) {
>>  		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>>  						   bpp);
>> +
>> +		if (!is_edp(intel_dp) && intel_dp->upfront_done) {
>> +			clock = max_clock;
>> +			lane_count = intel_dp->max_lanes_upfront;
>> +			link_clock = intel_dp->max_link_rate_upfront;
>> +			link_avail = intel_dp_max_data_rate(link_clock,
>> +							    lane_count);
>> +			mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
>> +							   bpp);
>> +			if (mode_rate <= link_avail)
>> +				goto found;
>> +			else
>> +				continue;
>> +		}
>> +
>>  		clock = max_clock;
>>  		lane_count = max_lane_count;
>>  		link_clock = common_rates[clock];
>> @@ -1672,7 +1845,6 @@ found:
>>  	}
>>  
>>  	pipe_config->lane_count = lane_count;
>> -
>>  	pipe_config->pipe_bpp = bpp;
>>  	pipe_config->port_clock = common_rates[clock];
>>  
>> @@ -4453,8 +4625,12 @@ intel_dp_long_pulse(struct intel_connector *intel_connector)
>>  
>>  out:
>>  	if ((status != connector_status_connected) &&
>> -	    (intel_dp->is_mst == false))
>> +	    (intel_dp->is_mst == false)) {
>>  		intel_dp_unset_edid(intel_dp);
>> +		intel_dp->upfront_done = false;
>> +		intel_dp->max_lanes_upfront = 0;
>> +		intel_dp->max_link_rate_upfront = 0;
>> +	}
>>  
>>  	intel_display_power_put(to_i915(dev), power_domain);
>>  	return;
>> @@ -5698,6 +5874,12 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
>>  	if (type == DRM_MODE_CONNECTOR_eDP)
>>  		intel_encoder->type = INTEL_OUTPUT_EDP;
>>  
>> +	/* Initialize upfront link training vfunc for DP */
>> +	if (intel_encoder->type != INTEL_OUTPUT_EDP) {
>> +		if (HAS_DDI(dev_priv))
>> +			intel_dp->upfront_link_train = intel_ddi_link_train;
>> +	}
>> +
>>  	/* eDP only on port B and/or C on vlv/chv */
>>  	if (WARN_ON((IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) &&
>>  		    is_edp(intel_dp) && port != PORT_B && port != PORT_C))
>> diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> index 6eb5eb6..782a919 100644
>> --- a/drivers/gpu/drm/i915/intel_dp_link_training.c
>> +++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
>> @@ -304,7 +304,6 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
>>  	intel_dp_set_idle_link_train(intel_dp);
>>  
>>  	return intel_dp->channel_eq_status;
>> -
>>  }
>>  
>>  void intel_dp_stop_link_train(struct intel_dp *intel_dp)
>> diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
>> index 0aeb317..fdfc0b6 100644
>> --- a/drivers/gpu/drm/i915/intel_drv.h
>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>> @@ -887,6 +887,12 @@ struct intel_dp {
>>  	enum hdmi_force_audio force_audio;
>>  	bool limited_color_range;
>>  	bool color_range_auto;
>> +
>> +	/* Upfront link train parameters */
>> +	int max_link_rate_upfront;
>> +	uint8_t max_lanes_upfront;
>> +	bool upfront_done;
>> +
>>  	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
>>  	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
>>  	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
>> @@ -944,6 +950,11 @@ struct intel_dp {
>>  	/* This is called before a link training is starterd */
>>  	void (*prepare_link_retrain)(struct intel_dp *intel_dp);
>>  
>> +	/* For Upfront link training */
>> +	bool (*upfront_link_train)(struct intel_dp *intel_dp, int clock,
>> +				   uint8_t lane_count, bool link_mst,
>> +				   bool is_upfront);
>> +
>>  	/* Displayport compliance testing */
>>  	unsigned long compliance_test_type;
>>  	unsigned long compliance_test_data;
>> @@ -1166,7 +1177,8 @@ void intel_ddi_clock_get(struct intel_encoder *encoder,
>>  void intel_ddi_set_vc_payload_alloc(struct drm_crtc *crtc, bool state);
>>  uint32_t ddi_signal_levels(struct intel_dp *intel_dp);
>>  bool intel_ddi_link_train(struct intel_dp *intel_dp, int max_link_rate,
>> -			  uint8_t max_lane_count, bool link_mst);
>> +			  uint8_t max_lane_count, bool link_mst,
>> +			  bool is_upfront);
>>  struct intel_shared_dpll *intel_ddi_get_link_dpll(struct intel_dp *intel_dp,
>>  						  int clock);
>>  unsigned int intel_fb_align_height(struct drm_device *dev,

-- 
Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29 16:05                       ` Manasi Navare
@ 2016-09-29 23:17                         ` Manasi Navare
  2016-10-03 23:29                           ` Manasi Navare
  0 siblings, 1 reply; 56+ messages in thread
From: Manasi Navare @ 2016-09-29 23:17 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Thu, Sep 29, 2016 at 09:05:01AM -0700, Manasi Navare wrote:
> On Thu, Sep 29, 2016 at 06:48:43PM +0300, Jani Nikula wrote:
> > On Thu, 29 Sep 2016, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> > > On Thu, Sep 29, 2016 at 12:44:19PM +0100, Chris Wilson wrote:
> > >> On Thu, Sep 29, 2016 at 02:26:16PM +0300, Jani Nikula wrote:
> > >> > On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > >> > > On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
> > >> > >> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > >> > >> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> > >> > >> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > >> > >> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
> > >> > >> >> > the link training sequence should fall back to the lower link rate
> > >> > >> >> > followed by lower lane count until CR succeeds.
> > >> > >> >> > On CR success, the sequence proceeds with Channel EQ.
> > >> > >> >> > In case of Channel EQ failures, it should fallback to
> > >> > >> >> > lower link rate and lane count and start the CR phase again.
> > >> > >> >> 
> > >> > >> >> This change makes the link training start at the max lane count and max
> > >> > >> >> link rate. This is not ideal, as it wastes the link. And it is not a
> > >> > >> >> spec requirement. "The Link Policy Maker of the upstream device may
> > >> > >> >> choose any link count and link rate as long as they do not exceed the
> > >> > >> >> capabilities of the DP receiver."
> > >> > >> >> 
> > >> > >> >> Our current code starts at the minimum required bandwidth for the mode,
> > >> > >> >> therefore we can't fall back to lower link rate and lane count without
> > >> > >> >> reducing the mode.
> > >> > >> >> 
> > >> > >> >> AFAICT this patch here makes it possible for the link bandwidth to drop
> > >> > >> >> below what is required for the mode. This is unacceptable.
> > >> > >> >> 
> > >> > >> >> BR,
> > >> > >> >> Jani.
> > >> > >> >> 
> > >> > >> >>
> > >> > >> >
> > >> > >> > Thanks Jani for your review comments.
> > >> > >> > Yes in this change we start at the max link rate and lane count. This
> > >> > >> > change was made according to the design document discussions we had
> > >> > >> > before starting this DP Redesign project. The main reason for starting
> > >> > >> > at the max link rate and max lane count was for ensuring proper
> > >> > >> > behavior of DP MST. In case of DP MST, we want to train the link at
> > >> > >> > the maximum supported link rate/lane count based on an early/upfront
> > >> > >> > link training result so that we don't fail when we try to connect a
> > >> > >> > higher resolution monitor as a second monitor. This is a trade-off
> > >> > >> > between wasting the link or higher power vs. needing to retrain for
> > >> > >> > every monitor that requests a higher BW in case of DP MST.
> > >> > >> 
> > >> > >> We already train at max bandwidth for DP MST, which seems to be the
> > >> > >> sensible thing to do.
> > >> > >> 
> > >> > >> > Actually this is also the reason for enabling upfront link training in
> > >> > >> > the following patch where we train the link much ahead in the modeset
> > >> > >> > sequence to understand the link rate and lane count values at which
> > >> > >> > the link can be successfully trained and then the link training
> > >> > >> > through modeset will always start at the upfront values (maximum
> > >> > >> > supported values of lane count and link rate based on upfront link
> > >> > >> > training).
> > >> > >> 
> > >> > >> I don't see a need to do this for DP SST.
> > >> > >> 
> > >> > >> > As per the CTS, all the test 4.3.1.4 requires that you fall back to
> > >> > >> > the lower link rate after trying to train at the maximum link rate
> > >> > >> > advertised through the DPCD registers.
> > >> > >> 
> > >> > >> That test does not require the source DUT to default to maximum lane
> > >> > >> count or link rate of the sink. The source may freely choose the lane
> > >> > >> count and link rate as long as they don't exceed sink capabilities.
> > >> > >> 
> > >> > >> For the purposes of the test, the test setup can request specific
> > >> > >> parameters to be used, but that does not mean using maximum by
> > >> > >> *default*.
> > >> > >> 
> > >> > >> We currently lack the feature to reduce lane count and link rate. The
> > >> > >> key to understand here is that starting at max and reducing down to the
> > >> > >> sufficient parameters for the mode (which is where we start now) offers
> > >> > >> no real benefit for any use case. What we're lacking is a feature to
> > >> > >> reduce the link parameters *below* what's required by the mode the
> > >> > >> userspace wants. This can only be achieved through cooperation with
> > >> > >> userspace.
> > >> > >> 
> > >> > >
> > >> > > We can train at the optimal link rate required for the requested mode as
> > >> > > done in the existing implementation and retrain whenever the link training
> > >> > > test request is sent. 
> > >> > > For the test 4.3.1.4 in CTS, it does force a failure in CR and expects the
> > >> > > driver to fall back to even lower link rate. We do not implement this in the
> > >> > > current driver and so this test fails. Could you elaborate on how this can
> > >> > > be achieved with the cooperation of userspace?
> > >> > > Should we send a uevent to the userspace asking to retry at a lower resolution
> > >> > > after retraining at the lower link rate?
> > >> > > This is pretty much the place where the majority of the compliance tests are failing.
> > >> > > How can we pass compliance with regards to this feature?
> > >> > 
> > >> > So here's an idea Ville and I came up with. It's not completely thought
> > >> > out yet, probably has some wrinkles still, but then there are wrinkles
> > >> > with the upfront link training too (I'll get back to those separately).
> > >> > 
> > >> > If link training fails during modeset (either for real or because it's a
> > >> > test sink that wants to test failures), we 1) store the link parameters
> > >> > as failing, 2) send a uevent to userspace, hopefully getting the
> > >> > userspace to do another get modes and try again, 3) propagate errors from
> > >> > modeset.
> > >> 
> > >> userspace already tries to do a reprobe after a setcrtc fails, to try
> > >> and gracefully handle the race between hotplug being in its event queue
> > >> and performing setcrtc, i.e. I think the error is enough.
> > >
> > > I presume we want the modeset to be async, so by the time we notice the
> > > problem we're no longer in the ioctl.
> > 
> > IOW, we'll just need to send the hotplug uevent anyway.
> > 
> > BR,
> > Jani.
> >
> 
> I am going to try to implement the code where, if we fail link training at a
> particular link rate, I send a uevent to userspace, saving off the values at
> which the link training failed so that these values can be used in the next
> modeset attempt to prune the modes accordingly; link training in that attempt
> will then start at the lower link rate. The hope is that this will make
> compliance test 4.3.1.4 happy.
> 
> Regards
> Manasi

This is what I am doing when we get a test request to train at a particular rate:
if (intel_dp->compliance_test_type == DP_TEST_LINK_TRAINING) {
	intel_dp_set_link_params(intel_dp,
				 drm_dp_bw_code_to_link_rate(intel_dp->compliance_test_link_rate),
				 intel_dp->compliance_test_lane_count,
				 false);
	drm_kms_helper_hotplug_event(intel_encoder->base.dev);
}

I see in dmesg that it sends a hotplug uevent to userspace, which triggers
drm_setup_crtcs(). But that finds the connector already enabled with a CRTC
attached, so it does not go ahead with compute_config. Do we need to disable
the crtc and update the atomic state before generating this uevent? How can
this be done?
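One option I'm considering (an untested sketch only, reusing the
drm_modeset_acquire_ctx + crtc-disable pattern that intel_dp_upfront_crtc_disable()
in the upfront patch already uses) is to force the crtc off via an atomic
commit before sending the uevent:

```c
/*
 * Untested sketch, not compile-tested: before sending the hotplug
 * uevent, commit crtc_state->base.active = false the same way
 * intel_dp_upfront_crtc_disable() does, so that the userspace reprobe
 * results in a full modeset that goes through compute_config.
 */
struct drm_modeset_acquire_ctx ctx;
int ret;

drm_modeset_acquire_init(&ctx, 0);
retry:
ret = drm_modeset_lock(&crtc->mutex, &ctx);
if (!ret)
	ret = intel_dp_upfront_crtc_disable(intel_crtc, &ctx, false);
if (ret == -EDEADLK) {
	drm_modeset_backoff(&ctx);
	goto retry;
}
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);

/* Now ask userspace to reprobe and set the mode again. */
drm_kms_helper_hotplug_event(intel_encoder->base.dev);
```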

Manasi

> > -- 
> > Jani Nikula, Intel Open Source Technology Center

* Re: [PATCH v7 1/6] drm/i915: Fallback to lower link rate and lane count during link training
  2016-09-29 23:17                         ` Manasi Navare
@ 2016-10-03 23:29                           ` Manasi Navare
  0 siblings, 0 replies; 56+ messages in thread
From: Manasi Navare @ 2016-10-03 23:29 UTC (permalink / raw)
  To: Jani Nikula; +Cc: intel-gfx

On Thu, Sep 29, 2016 at 04:17:06PM -0700, Manasi Navare wrote:
> On Thu, Sep 29, 2016 at 09:05:01AM -0700, Manasi Navare wrote:
> > On Thu, Sep 29, 2016 at 06:48:43PM +0300, Jani Nikula wrote:
> > > On Thu, 29 Sep 2016, Ville Syrjälä <ville.syrjala@linux.intel.com> wrote:
> > > > On Thu, Sep 29, 2016 at 12:44:19PM +0100, Chris Wilson wrote:
> > > >> On Thu, Sep 29, 2016 at 02:26:16PM +0300, Jani Nikula wrote:
> > > >> > On Thu, 29 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > > >> > > On Tue, Sep 27, 2016 at 08:07:01PM +0300, Jani Nikula wrote:
> > > >> > >> On Tue, 27 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > > >> > >> > On Mon, Sep 26, 2016 at 04:39:34PM +0300, Jani Nikula wrote:
> > > >> > >> >> On Fri, 16 Sep 2016, Manasi Navare <manasi.d.navare@intel.com> wrote:
> > > >> > >> >> > According to the DisplayPort Spec, in case of Clock Recovery failure
> > > >> > >> >> > the link training sequence should fall back to the lower link rate
> > > >> > >> >> > followed by lower lane count until CR succeeds.
> > > >> > >> >> > On CR success, the sequence proceeds with Channel EQ.
> > > >> > >> >> > In case of Channel EQ failures, it should fallback to
> > > >> > >> >> > lower link rate and lane count and start the CR phase again.
> > > >> > >> >> 
> > > >> > >> >> This change makes the link training start at the max lane count and max
> > > >> > >> >> link rate. This is not ideal, as it wastes the link. And it is not a
> > > >> > >> >> spec requirement. "The Link Policy Maker of the upstream device may
> > > >> > >> >> choose any link count and link rate as long as they do not exceed the
> > > >> > >> >> capabilities of the DP receiver."
> > > >> > >> >> 
> > > >> > >> >> Our current code starts at the minimum required bandwidth for the mode,
> > > >> > >> >> therefore we can't fall back to lower link rate and lane count without
> > > >> > >> >> reducing the mode.
> > > >> > >> >> 
> > > >> > >> >> AFAICT this patch here makes it possible for the link bandwidth to drop
> > > >> > >> >> below what is required for the mode. This is unacceptable.
> > > >> > >> >> 
> > > >> > >> >> BR,
> > > >> > >> >> Jani.
> > > >> > >> >> 
> > > >> > >> >>
> > > >> > >> >
> > > >> > >> > Thanks Jani for your review comments.
> > > >> > >> > Yes in this change we start at the max link rate and lane count. This
> > > >> > >> > change was made according to the design document discussions we had
> > > >> > >> > before starting this DP Redesign project. The main reason for starting
> > > >> > >> > at the max link rate and max lane count was for ensuring proper
> > > >> > >> > behavior of DP MST. In case of DP MST, we want to train the link at
> > > >> > >> > the maximum supported link rate/lane count based on an early/upfront
> > > >> > >> > link training result so that we don't fail when we try to connect a
> > > >> > >> > higher resolution monitor as a second monitor. This is a trade-off
> > > >> > >> > between wasting the link or higher power vs. needing to retrain for
> > > >> > >> > every monitor that requests a higher BW in case of DP MST.
> > > >> > >> 
> > > >> > >> We already train at max bandwidth for DP MST, which seems to be the
> > > >> > >> sensible thing to do.
> > > >> > >> 
> > > >> > >> > Actually this is also the reason for enabling upfront link training in
> > > >> > >> > the following patch where we train the link much ahead in the modeset
> > > >> > >> > sequence to understand the link rate and lane count values at which
> > > >> > >> > the link can be successfully trained and then the link training
> > > >> > >> > through modeset will always start at the upfront values (maximum
> > > >> > >> > supported values of lane count and link rate based on upfront link
> > > >> > >> > training).
> > > >> > >> 
> > > >> > >> I don't see a need to do this for DP SST.
> > > >> > >> 
> > > >> > >> > As per the CTS, all the test 4.3.1.4 requires that you fall back to
> > > >> > >> > the lower link rate after trying to train at the maximum link rate
> > > >> > >> > advertised through the DPCD registers.
> > > >> > >> 
> > > >> > >> That test does not require the source DUT to default to maximum lane
> > > >> > >> count or link rate of the sink. The source may freely choose the lane
> > > >> > >> count and link rate as long as they don't exceed sink capabilities.
> > > >> > >> 
> > > >> > >> For the purposes of the test, the test setup can request specific
> > > >> > >> parameters to be used, but that does not mean using maximum by
> > > >> > >> *default*.
> > > >> > >> 
> > > >> > >> We currently lack the feature to reduce lane count and link rate. The
> > > >> > >> key to understand here is that starting at max and reducing down to the
> > > >> > >> sufficient parameters for the mode (which is where we start now) offers
> > > >> > >> no real benefit for any use case. What we're lacking is a feature to
> > > >> > >> reduce the link parameters *below* what's required by the mode the
> > > >> > >> userspace wants. This can only be achieved through cooperation with
> > > >> > >> userspace.
> > > >> > >> 
> > > >> > >
> > > >> > > We can train at the optimal link rate required for the requested mode as
> > > >> > > done in the existing implementation, and retrain whenever a link training
> > > >> > > test request is sent.
> > > >> > > For test 4.3.1.4 in the CTS, it does force a failure in CR and expects the
> > > >> > > driver to fall back to an even lower link rate. We do not implement this in the
> > > >> > > current driver, so this test fails. Could you elaborate on how this can
> > > >> > > be achieved with the cooperation of userspace?
> > > >> > > Should we send a uevent to userspace asking it to retry at a lower resolution
> > > >> > > after retraining at the lower link rate?
> > > >> > > This is pretty much the place where the majority of the compliance tests are failing.
> > > >> > > How can we pass compliance with regards to this feature?
> > > >> > 
> > > >> > So here's an idea Ville and I came up with. It's not completely thought
> > > >> > out yet, probably has some wrinkles still, but then there are wrinkles
> > > >> > with the upfront link training too (I'll get back to those separately).
> > > >> > 
> > > >> > If link training fails during modeset (either for real or because it's a
> > > >> > test sink that wants to test failures), we 1) store the link parameters
> > > >> > as failing, 2) send a uevent to userspace, hopefully getting the
> > > >> > > userspace to do another get modes and try again, 3) propagate errors from
> > > >> > modeset.
> > > >> 
> > > >> userspace already tries to do a reprobe after a setcrtc fails, to try
> > > >> and gracefully handle the race between hotplug being in its event queue
> > > >> and performing setcrtc, i.e. I think the error is enough.
> > > >
> > > > I presume we want the modeset to be async, so by the time we notice the
> > > > problem we're no longer in the ioctl.
> > > 
> > > IOW, we'll just need to send the hotplug uevent anyway.
> > > 
> > > BR,
> > > Jani.
> > >

When the test sink is testing the failure, link training fails in ddi_pre_enable().
This is still while we are holding the modeset locks, hence we cannot send a hotplug
uevent here. If I try to send the hotplug uevent here, it just freezes due to a deadlock.
I am reading up on how to set up a work queue and call the uevent in a separate thread.
Is this the right approach, or do you have another suggestion for sending a hotplug uevent
on link training failure during the atomic commit phase?
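Something like the following is what I am considering (a rough, untested sketch;
the link_train_failed_work member and the intel_dp_to_dev() helper are made-up
names for illustration):

	/* Worker runs later in process context, outside the atomic commit,
	 * so no modeset locks are held when the uevent is sent. */
	static void intel_dp_link_train_failed_work_fn(struct work_struct *work)
	{
		struct intel_dp *intel_dp =
			container_of(work, struct intel_dp,
				     link_train_failed_work);

		drm_kms_helper_hotplug_event(intel_dp_to_dev(intel_dp));
	}

	/* At connector init: */
	INIT_WORK(&intel_dp->link_train_failed_work,
		  intel_dp_link_train_failed_work_fn);

	/* In ddi_pre_enable() on link training failure (locks held): */
	schedule_work(&intel_dp->link_train_failed_work);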

Manasi
> > 
> > I am going to try to implement the code where, if we fail link training at a
> > particular link rate, I send a uevent to userspace, saving off the
> > values at which the link training failed so that these values can be used in the next
> > attempt of the modeset to prune the modes accordingly, and link training will be
> > tried in that attempt with the lower link rate. The hope is that this will make
> > compliance test 4.3.1.4 happy.
> > 
> > Regards
> > Manasi
> 
> This is what I am doing when we get a test request to train at a particular rate:
> if (intel_dp->compliance_test_type == DP_TEST_LINK_TRAINING) {
>         intel_dp_set_link_params(intel_dp,
>                                  drm_dp_bw_code_to_link_rate(intel_dp->compliance_test_link_rate),
>                                  intel_dp->compliance_test_lane_count,
>                                  false);
>         drm_kms_helper_hotplug_event(intel_encoder->base.dev);
> }
> 
> I see in dmesg that it sends a hotplug uevent to userspace, which triggers drm_setup_crtcs().
> But it finds that the connector is already enabled and has a CRTC, so it does not go ahead with
> compute_config. Do we need to disable the CRTC and update the atomic state before generating
> this uevent? How can this be done?
> 
> Manasi
> 
> > > -- 
> > > Jani Nikula, Intel Open Source Technology Center
> > _______________________________________________
> > Intel-gfx mailing list
> > Intel-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Thread overview: 56+ messages
2016-09-14  1:08 [PATCH 0/5] Remaining patches for upfront link training on DDI platforms Manasi Navare
2016-09-14  1:08 ` [PATCH v5 1/5] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
2016-09-14  8:15   ` Mika Kahola
2016-09-15 19:56     ` Manasi Navare
2016-09-14  1:08 ` [PATCH v3 2/5] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
2016-09-14  1:08 ` [PATCH v2 3/5] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
2016-09-15  7:41   ` Mika Kahola
2016-09-15 19:08     ` Manasi Navare
2016-09-14  1:08 ` [PATCH v17 4/5] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
2016-09-14  1:08 ` [PATCH v3 5/5] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
2016-09-15 17:48   ` Pandiyan, Dhinakaran
2016-09-15 19:25     ` Manasi Navare
2016-09-19 17:03       ` Jim Bride
2016-09-19 17:22         ` Manasi Navare
2016-09-14  5:38 ` ✓ Fi.CI.BAT: success for Remaining patches for upfront link training on DDI platforms Patchwork
2016-09-16  0:03 ` [PATCH 0/6] " Manasi Navare
2016-09-16  0:03   ` [PATCH v6 1/6] drm/i915: Fallback to lower link rate and lane count during link training Manasi Navare
2016-09-16  9:29     ` Mika Kahola
2016-09-16 18:45     ` [PATCH v7 " Manasi Navare
2016-09-26 13:39       ` Jani Nikula
2016-09-27 15:25         ` Manasi Navare
2016-09-27 17:07           ` Jani Nikula
2016-09-29  6:41             ` Manasi Navare
2016-09-29 11:26               ` Jani Nikula
2016-09-29 11:44                 ` Chris Wilson
2016-09-29 15:10                   ` Ville Syrjälä
2016-09-29 15:48                     ` Jani Nikula
2016-09-29 16:05                       ` Manasi Navare
2016-09-29 23:17                         ` Manasi Navare
2016-10-03 23:29                           ` Manasi Navare
2016-09-16  0:04   ` [PATCH v3 2/6] drm/i915: Remove the link rate and lane count loop in compute config Manasi Navare
2016-09-26 13:41     ` Jani Nikula
2016-09-27 13:39       ` Jani Nikula
2016-09-27 22:13         ` Manasi Navare
2016-09-28  7:14           ` Jani Nikula
2016-09-28 22:30             ` Manasi Navare
2016-09-27 21:55       ` Manasi Navare
2016-09-28  7:38         ` Jani Nikula
2016-09-28 16:45           ` Manasi Navare
2016-09-29 14:52             ` Jani Nikula
2016-09-16  0:04   ` [PATCH v3 3/6] drm/i915: Change the placement of some static functions in intel_dp.c Manasi Navare
2016-09-16  8:12     ` Mika Kahola
2016-09-16  0:04   ` [PATCH 4/6] drm/i915: Code cleanup to use dev_priv and INTEL_GEN Manasi Navare
2016-09-16  7:40     ` Mika Kahola
2016-09-26 13:45     ` Jani Nikula
2016-09-28  0:03       ` Manasi Navare
2016-09-16  0:04   ` [PATCH v17 5/6] drm/i915/dp: Enable Upfront link training on DDI platforms Manasi Navare
2016-09-20 22:04     ` [PATCH v18 " Manasi Navare
2016-09-27 13:59       ` Jani Nikula
2016-09-29 12:15       ` Jani Nikula
2016-09-29 16:05         ` Jani Nikula
2016-09-16  0:04   ` [PATCH v3 6/6] drm/i915/dp/mst: Add support for upfront link training for DP MST Manasi Navare
2016-09-16  0:47   ` ✓ Fi.CI.BAT: success for series starting with [v6,1/6] drm/i915: Fallback to lower link rate and lane count during link training Patchwork
2016-09-16 19:25   ` ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev2) Patchwork
2016-09-20  8:45   ` [PATCH 0/6] Remaining patches for upfront link training on DDI platforms Jani Nikula
2016-09-20 22:49   ` ✓ Fi.CI.BAT: success for series starting with [v7,1/6] drm/i915: Fallback to lower link rate and lane count during link training (rev3) Patchwork
