* [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support
@ 2019-12-12 12:40 Stanislav Lisovskiy
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv Stanislav Lisovskiy
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Stanislav Lisovskiy @ 2019-12-12 12:40 UTC (permalink / raw)
  To: intel-gfx

For Gen11+ platforms BSpec suggests disabling specific
QGV points separately, depending on bandwidth limitations
and the current display configuration. This required adding
a new PCode request for disabling QGV points and some
refactoring of the existing SAGV code.
The intel_can_enable_sagv function also had to be refactored,
as the current implementation is outdated: it uses SKL-specific
workarounds and does not follow BSpec for Gen11+.
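
Roughly, the series works as follows (a condensed sketch for
orientation only; allowed_points, points_with_enough_bw, max_bw_point
and mask are shorthand for the bookkeeping done in patch 2, not
literal driver variables):

	/* Patch 1: per-pipe check whether every plane can tolerate the
	 * extra SAGV block time at level 0.
	 */
	if (intel_can_enable_sagv(state))
		allowed_points = points_with_enough_bw;
	else
		allowed_points = BIT(max_bw_point); /* highest-BW point only */

	/* Patch 2, commit time: PCode takes the complement, i.e. the
	 * points that must be masked.
	 */
	icl_pcode_restrict_qgv_points(dev_priv, ~allowed_points & mask);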

Stanislav Lisovskiy (3):
  drm/i915: Refactor intel_can_enable_sagv
  drm/i915: Restrict qgv points which don't have enough bandwidth.
  drm/i915: Enable SAGV support for Gen12

 drivers/gpu/drm/i915/display/intel_bw.c       | 144 ++++--
 drivers/gpu/drm/i915/display/intel_bw.h       |   2 +
 drivers/gpu/drm/i915/display/intel_display.c  |  98 +++-
 .../drm/i915/display/intel_display_types.h    |  12 +
 drivers/gpu/drm/i915/i915_drv.h               |  11 +
 drivers/gpu/drm/i915/i915_reg.h               |   5 +
 drivers/gpu/drm/i915/intel_pm.c               | 422 +++++++++++++++---
 drivers/gpu/drm/i915/intel_pm.h               |   1 +
 drivers/gpu/drm/i915/intel_sideband.c         |  27 +-
 9 files changed, 626 insertions(+), 96 deletions(-)

-- 
2.17.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
@ 2019-12-12 12:40 ` Stanislav Lisovskiy
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 2/3] drm/i915: Restrict qgv points which don't have enough bandwidth Stanislav Lisovskiy
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Stanislav Lisovskiy @ 2019-12-12 12:40 UTC (permalink / raw)
  To: intel-gfx

Currently the intel_can_enable_sagv function contains
a mix of workarounds for different platforms, some of
which are no longer valid for gens >= 11, so let's split
it into separate per-platform functions.

v2:
    - Rework the watermark calculation algorithm to
      attempt to calculate the Level 0 watermark
      with the SAGV block time added to the latency
      and check whether it fits in the DBuf, in order
      to determine if SAGV can be enabled already
      at this stage, just as BSpec 49325 states
      (see the sketch after this changelog).
      If that fails, roll back to the usual Level 0
      latency and disable SAGV.
    - Remove unneeded tabs (James Ausmus)

v3: Rebased the patch

v4: - Added back the interlaced check for Gen12 and
      added a separate function for the TGL SAGV check
      (thanks to James Ausmus for spotting)
    - Removed an unneeded gen check
    - Extracted the Gen12 SAGV decision making code
      out of skl_compute_wm into a separate function

v5: - Added SAGV global state to dev_priv, because
      we need to track all pipes, not only those
      in the atomic state. Each pipe now has a
      corresponding bit in a mask reflecting whether
      it can tolerate SAGV or not (thanks to Ville
      Syrjälä for the suggestions).
    - Now using the active flag instead of enable in
      the crc usage check.

v6: - Fixed rebase conflicts

v7: - kms_cursor_legacy seemed to get broken because of the multiple
      memcpy calls made when copying the level 0 watermarks for
      enabled SAGV. To fix this, simply use that field directly,
      without copying; for that, introduce a new wm_level accessor
      which decides which wm_level to return based on the SAGV state.

v8: - Protect crtc_sagv_mask the same way as we do for other global
      state changes: i.e. check if changes are needed, then grab all
      crtc locks to serialize the changes (Ville Syrjälä)
    - Add crtc_sagv_mask caching in order to avoid needless
      recalculations (Matthew Roper)
    - Put back the Gen12 SAGV switch in order to get it enabled in a
      separate patch (Matthew Roper)
    - Rename *_set_sagv_mask to *_compute_sagv_mask (Matthew Roper)
    - Check if there are no active pipes in intel_can_enable_sagv
      instead of in the platform specific functions (Matthew Roper);
      same for the intel_has_sagv check.
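
A condensed sketch of the Gen12 check described in v2 (simplified
from the patch below; skl_compute_plane_wm(), sagv_wm0 and
min_ddb_alloc are the real names, while planes[], blocks and
alloc_size abbreviate the DDB bookkeeping done in
tgl_check_pipe_fits_sagv_wm()):

	/* 1) Compute an alternative L0 watermark with the SAGV block
	 *    time added to the level 0 latency.
	 */
	u32 latency = dev_priv->wm.skl_latency[0] +
		      dev_priv->sagv_block_time_us;

	skl_compute_plane_wm(crtc_state, 0, latency, wm_params,
			     &levels[0], &plane_wm->sagv_wm0);

	/* 2) The pipe tolerates SAGV only if the sagv_wm0 allocations
	 *    of all its planes still fit into the pipe's DDB space.
	 */
	blocks = 0;
	for_each_plane_id_on_crtc(intel_crtc, plane_id)
		blocks += planes[plane_id].sagv_wm0.min_ddb_alloc;

	can_sagv = blocks <= alloc_size;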

Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Cc: Ville Syrjälä <ville.syrjala@intel.com>
Cc: James Ausmus <james.ausmus@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  12 +-
 .../drm/i915/display/intel_display_types.h    |   9 +
 drivers/gpu/drm/i915/i915_drv.h               |   6 +
 drivers/gpu/drm/i915/intel_pm.c               | 418 +++++++++++++++---
 drivers/gpu/drm/i915/intel_pm.h               |   1 +
 5 files changed, 394 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 1f1cd7578706..5758932f3312 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -13433,7 +13433,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
 		/* Watermarks */
 		for (level = 0; level <= max_level; level++) {
 			if (skl_wm_level_equals(&hw_plane_wm->wm[level],
-						&sw_plane_wm->wm[level]))
+						&sw_plane_wm->wm[level]) ||
+			   (skl_wm_level_equals(&hw_plane_wm->wm[level],
+						&sw_plane_wm->sagv_wm0) &&
+			   (level == 0)))
 				continue;
 
 			DRM_ERROR("mismatch in WM pipe %c plane %d level %d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
@@ -13485,7 +13488,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
 		/* Watermarks */
 		for (level = 0; level <= max_level; level++) {
 			if (skl_wm_level_equals(&hw_plane_wm->wm[level],
-						&sw_plane_wm->wm[level]))
+						&sw_plane_wm->wm[level]) ||
+			   (skl_wm_level_equals(&hw_plane_wm->wm[level],
+						&sw_plane_wm->sagv_wm0) &&
+			   (level == 0)))
 				continue;
 
 			DRM_ERROR("mismatch in WM pipe %c cursor level %d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
@@ -14893,6 +14899,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 			dev_priv->display.optimize_watermarks(state, crtc);
 	}
 
+	dev_priv->crtc_sagv_mask = state->crtc_sagv_mask;
+
 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
 		intel_post_plane_update(old_crtc_state);
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 83ea04149b77..5301e1042b40 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -490,6 +490,14 @@ struct intel_atomic_state {
 	 */
 	u8 active_pipe_changes;
 
+	/*
+	 * Contains a mask which reflects whether the corresponding pipe
+	 * can tolerate SAGV or not, so that we can make a decision
+	 * at the atomic_commit_tail stage whether to enable it or not,
+	 * based on the global state in dev_priv.
+	 */
+	u32 crtc_sagv_mask;
+
 	u8 active_pipes;
 	/* minimum acceptable cdclk for each pipe */
 	int min_cdclk[I915_MAX_PIPES];
@@ -670,6 +678,7 @@ struct skl_plane_wm {
 	struct skl_wm_level wm[8];
 	struct skl_wm_level uv_wm[8];
 	struct skl_wm_level trans_wm;
+	struct skl_wm_level sagv_wm0;
 	bool is_planar;
 };
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 14744c114475..d2c16e1a96f2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1174,6 +1174,12 @@ struct drm_i915_private {
 
 	u32 sagv_block_time_us;
 
+	/*
+	 * Contains a bit mask indicating whether the
+	 * corresponding pipe allows SAGV or not.
+	 */
+	u32 crtc_sagv_mask;
+
 	struct {
 		/*
 		 * Raw watermark latency values:
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index dfd0b8caabde..c3a8b3a8afb0 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3630,7 +3630,7 @@ static bool skl_needs_memory_bw_wa(struct drm_i915_private *dev_priv)
 	return IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv);
 }
 
-static bool
+bool
 intel_has_sagv(struct drm_i915_private *dev_priv)
 {
 	/* HACK! */
@@ -3753,7 +3753,7 @@ intel_disable_sagv(struct drm_i915_private *dev_priv)
 	return 0;
 }
 
-bool intel_can_enable_sagv(struct intel_atomic_state *state)
+static void skl_compute_sagv_mask(struct intel_atomic_state *state)
 {
 	struct drm_device *dev = state->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
@@ -3763,29 +3763,15 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
 	enum pipe pipe;
 	int level, latency;
 
-	if (!intel_has_sagv(dev_priv))
-		return false;
-
-	/*
-	 * If there are no active CRTCs, no additional checks need be performed
-	 */
-	if (hweight8(state->active_pipes) == 0)
-		return true;
-
-	/*
-	 * SKL+ workaround: bspec recommends we disable SAGV when we have
-	 * more then one pipe enabled
-	 */
-	if (hweight8(state->active_pipes) > 1)
-		return false;
-
 	/* Since we're now guaranteed to only have one active CRTC... */
 	pipe = ffs(state->active_pipes) - 1;
 	crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
 	crtc_state = to_intel_crtc_state(crtc->base.state);
+	state->crtc_sagv_mask &= ~BIT(crtc->pipe);
 
-	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
-		return false;
+	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE) {
+		return;
+	}
 
 	for_each_intel_plane_on_crtc(dev, crtc, plane) {
 		struct skl_plane_wm *wm =
@@ -3812,7 +3798,138 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
 		 * incur memory latencies higher than sagv_block_time_us we
 		 * can't enable SAGV.
 		 */
-		if (latency < dev_priv->sagv_block_time_us)
+		if (latency < dev_priv->sagv_block_time_us) {
+			return;
+		}
+	}
+
+	state->crtc_sagv_mask |= BIT(crtc->pipe);
+}
+
+static void tgl_compute_sagv_mask(struct intel_atomic_state *state);
+
+static void icl_compute_sagv_mask(struct intel_atomic_state *state)
+{
+	struct drm_device *dev = state->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *new_crtc_state;
+	int level, latency;
+	int i;
+	int plane_id;
+
+	for_each_new_intel_crtc_in_state(state, crtc,
+					     new_crtc_state, i) {
+		unsigned int flags = crtc->base.state->adjusted_mode.flags;
+		bool can_sagv;
+
+		if (flags & DRM_MODE_FLAG_INTERLACE)
+			continue;
+
+		if (!new_crtc_state->hw.active)
+			continue;
+
+		can_sagv = true;
+		for_each_plane_id_on_crtc(crtc, plane_id) {
+			struct skl_plane_wm *wm =
+				&new_crtc_state->wm.skl.optimal.planes[plane_id];
+
+			/* Skip this plane if it's not enabled */
+			if (!wm->wm[0].plane_en)
+				continue;
+
+			/* Find the highest enabled wm level for this plane */
+			for (level = ilk_wm_max_level(dev_priv);
+			     !wm->wm[level].plane_en; --level) {
+			}
+
+			latency = dev_priv->wm.skl_latency[level];
+
+			/*
+			 * If any of the planes on this pipe don't enable
+			 * wm levels that incur memory latencies higher than
+			 * sagv_block_time_us we can't enable SAGV.
+			 */
+			if (latency < dev_priv->sagv_block_time_us) {
+				can_sagv = false;
+				break;
+			}
+		}
+		if (can_sagv)
+			state->crtc_sagv_mask |= BIT(crtc->pipe);
+		else
+			state->crtc_sagv_mask &= ~BIT(crtc->pipe);
+	}
+}
+
+bool intel_can_enable_sagv(struct intel_atomic_state *state)
+{
+	struct drm_device *dev = state->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	int ret, i;
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *new_crtc_state;
+
+	if (!intel_has_sagv(dev_priv))
+		return false;
+
+	/*
+	 * Check if we have already calculated the mask.
+	 * If we have, then the global state is already
+	 * serialized, and thus protected from changes by
+	 * other commits, and we can use the cached version here.
+	 */
+	if (!state->crtc_sagv_mask) {
+		/*
+		 * If there are no active CRTCs, no additional
+		 * checks need be performed
+		 */
+		if (hweight8(state->active_pipes) == 0)
+			return false;
+
+		/*
+		 * Make sure we always pick up the global state first;
+		 * there shouldn't be any issue, as we hold only the locks
+		 * of the crtcs in this state. However, once
+		 * we detect that we need to change the SAGV mask
+		 * in the global state, we will grab all the crtc locks
+		 * in order to get this serialized; thus other
+		 * racing commits holding other crtc locks will have
+		 * to start over again, as dictated by the Wound-Wait
+		 * algorithm.
+		 */
+		state->crtc_sagv_mask = dev_priv->crtc_sagv_mask;
+
+		if (INTEL_GEN(dev_priv) >= 12)
+			tgl_compute_sagv_mask(state);
+		else if (INTEL_GEN(dev_priv) == 11)
+			icl_compute_sagv_mask(state);
+		else
+			skl_compute_sagv_mask(state);
+
+		/*
+		 * For SAGV we need to account for all the pipes,
+		 * not only the ones which are currently in the state.
+		 * Grab all locks if we detect that we are actually
+		 * going to do something.
+		 */
+		if (state->crtc_sagv_mask != dev_priv->crtc_sagv_mask) {
+			ret = intel_atomic_serialize_global_state(state);
+			if (ret) {
+				DRM_DEBUG_KMS("Could not serialize global state\n");
+				return false;
+			}
+		}
+	}
+
+	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
+		u32 mask = BIT(crtc->pipe);
+		bool state_sagv_masked = (mask & state->crtc_sagv_mask) == 0;
+
+		if (!new_crtc_state->hw.active)
+			continue;
+
+		if (state_sagv_masked)
 			return false;
 	}
 
@@ -3938,6 +4055,7 @@ static int skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
 				 int color_plane);
 static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 				 int level,
+				 u32 latency,
 				 const struct skl_wm_params *wp,
 				 const struct skl_wm_level *result_prev,
 				 struct skl_wm_level *result /* out */);
@@ -3960,7 +4078,10 @@ skl_cursor_allocation(const struct intel_crtc_state *crtc_state,
 	WARN_ON(ret);
 
 	for (level = 0; level <= max_level; level++) {
-		skl_compute_plane_wm(crtc_state, level, &wp, &wm, &wm);
+		u32 latency = dev_priv->wm.skl_latency[level];
+
+		skl_compute_plane_wm(crtc_state, level, latency, &wp, &wm, &wm);
+
 		if (wm.min_ddb_alloc == U16_MAX)
 			break;
 
@@ -4225,6 +4346,98 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 	return total_data_rate;
 }
 
+static int
+tgl_check_pipe_fits_sagv_wm(struct intel_crtc_state *crtc_state,
+			    struct skl_ddb_allocation *ddb /* out */)
+{
+	struct drm_crtc *crtc = crtc_state->uapi.crtc;
+	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
+	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
+	u16 alloc_size;
+	u16 total[I915_MAX_PLANES] = {};
+	u64 total_data_rate;
+	enum plane_id plane_id;
+	int num_active;
+	u64 plane_data_rate[I915_MAX_PLANES] = {};
+	u32 blocks;
+
+	/*
+	 * No need to check gen here, we call this only for gen12
+	 */
+	total_data_rate =
+		icl_get_total_relative_data_rate(crtc_state,
+						 plane_data_rate);
+
+	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
+					   total_data_rate,
+					   ddb, alloc, &num_active);
+	alloc_size = skl_ddb_entry_size(alloc);
+	if (alloc_size == 0)
+		return -ENOSPC;
+
+	/* Allocate fixed number of blocks for cursor. */
+	total[PLANE_CURSOR] = skl_cursor_allocation(crtc_state, num_active);
+	alloc_size -= total[PLANE_CURSOR];
+	crtc_state->wm.skl.plane_ddb_y[PLANE_CURSOR].start =
+		alloc->end - total[PLANE_CURSOR];
+	crtc_state->wm.skl.plane_ddb_y[PLANE_CURSOR].end = alloc->end;
+
+	/*
+	 * Check whether we can fit L0 + sagv_block_time, and
+	 * disable SAGV if we can't.
+	 */
+	blocks = 0;
+	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		const struct skl_plane_wm *wm =
+			&crtc_state->wm.skl.optimal.planes[plane_id];
+
+		if (plane_id == PLANE_CURSOR) {
+			if (WARN_ON(wm->sagv_wm0.min_ddb_alloc >
+				    total[PLANE_CURSOR])) {
+				blocks = U32_MAX;
+				break;
+			}
+			continue;
+		}
+
+		blocks += wm->sagv_wm0.min_ddb_alloc;
+		if (blocks > alloc_size)
+			return -ENOSPC;
+	}
+	return 0;
+}
+
+const struct skl_wm_level *
+skl_plane_wm_level(struct intel_plane *plane,
+		const struct intel_crtc_state *crtc_state,
+		int level,
+		bool yuv)
+{
+	struct drm_atomic_state *state = crtc_state->uapi.state;
+	enum plane_id plane_id = plane->id;
+	const struct skl_plane_wm *wm =
+		&crtc_state->wm.skl.optimal.planes[plane_id];
+
+	/*
+	 * Looks ridiculous, but we need to check that state is not
+	 * NULL here, as it can be: some cursor plane manipulations
+	 * seem to happen when no atomic state is actually present,
+	 * even though crtc_state is allocated. Removing the state check
+	 * from here will result in a kernel panic on boot.
+	 * However, we now need to check whether the SAGV wm levels
+	 * should be used here.
+	 */
+	if (state) {
+		struct intel_atomic_state *intel_state =
+			to_intel_atomic_state(state);
+		if (intel_can_enable_sagv(intel_state) && !level)
+			return &wm->sagv_wm0;
+	}
+
+	return yuv ? &wm->uv_wm[level] : &wm->wm[level];
+}
+
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 		      struct skl_ddb_allocation *ddb /* out */)
@@ -4239,6 +4452,9 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	u16 uv_total[I915_MAX_PLANES] = {};
 	u64 total_data_rate;
 	enum plane_id plane_id;
+	struct intel_plane *plane;
+	const struct skl_wm_level *wm_level;
+	const struct skl_wm_level *wm_uv_level;
 	int num_active;
 	u64 plane_data_rate[I915_MAX_PLANES] = {};
 	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
@@ -4290,12 +4506,15 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	 */
 	for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
 		blocks = 0;
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-			const struct skl_plane_wm *wm =
-				&crtc_state->wm.skl.optimal.planes[plane_id];
+		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+			plane_id = plane->id;
+			wm_level = skl_plane_wm_level(plane, crtc_state,
+						      level, false);
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+							 level, true);
 
 			if (plane_id == PLANE_CURSOR) {
-				if (WARN_ON(wm->wm[level].min_ddb_alloc >
+				if (WARN_ON(wm_level->min_ddb_alloc >
 					    total[PLANE_CURSOR])) {
 					blocks = U32_MAX;
 					break;
@@ -4303,8 +4522,8 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 				continue;
 			}
 
-			blocks += wm->wm[level].min_ddb_alloc;
-			blocks += wm->uv_wm[level].min_ddb_alloc;
+			blocks += wm_level->min_ddb_alloc;
+			blocks += wm_uv_level->min_ddb_alloc;
 		}
 
 		if (blocks <= alloc_size) {
@@ -4325,12 +4544,16 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	 * watermark level, plus an extra share of the leftover blocks
 	 * proportional to its relative data rate.
 	 */
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-		const struct skl_plane_wm *wm =
-			&crtc_state->wm.skl.optimal.planes[plane_id];
+	for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
 		u64 rate;
 		u16 extra;
 
+		plane_id = plane->id;
+		wm_level = skl_plane_wm_level(plane, crtc_state,
+					      level, false);
+		wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+						 level, true);
+
 		if (plane_id == PLANE_CURSOR)
 			continue;
 
@@ -4345,7 +4568,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 		extra = min_t(u16, alloc_size,
 			      DIV64_U64_ROUND_UP(alloc_size * rate,
 						 total_data_rate));
-		total[plane_id] = wm->wm[level].min_ddb_alloc + extra;
+		total[plane_id] = wm_level->min_ddb_alloc + extra;
 		alloc_size -= extra;
 		total_data_rate -= rate;
 
@@ -4356,7 +4579,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 		extra = min_t(u16, alloc_size,
 			      DIV64_U64_ROUND_UP(alloc_size * rate,
 						 total_data_rate));
-		uv_total[plane_id] = wm->uv_wm[level].min_ddb_alloc + extra;
+		uv_total[plane_id] = wm_uv_level->min_ddb_alloc + extra;
 		alloc_size -= extra;
 		total_data_rate -= rate;
 	}
@@ -4397,9 +4620,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	 * that aren't actually possible.
 	 */
 	for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
 			struct skl_plane_wm *wm =
-				&crtc_state->wm.skl.optimal.planes[plane_id];
+				&crtc_state->wm.skl.optimal.planes[plane->id];
+
+			wm_level = skl_plane_wm_level(plane, crtc_state,
+						      level, false);
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+						      level, true);
 
 			/*
 			 * We only disable the watermarks for each plane if
@@ -4413,9 +4641,10 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 			 *  planes must be enabled before the level will be used."
 			 * So this is actually safe to do.
 			 */
-			if (wm->wm[level].min_ddb_alloc > total[plane_id] ||
-			    wm->uv_wm[level].min_ddb_alloc > uv_total[plane_id])
-				memset(&wm->wm[level], 0, sizeof(wm->wm[level]));
+			if (wm_level->min_ddb_alloc > total[plane->id] ||
+			    wm_uv_level->min_ddb_alloc > uv_total[plane->id])
+				memset(&wm->wm[level], 0,
+				       sizeof(struct skl_wm_level));
 
 			/*
 			 * Wa_1408961008:icl, ehl
@@ -4423,9 +4652,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 			 */
 			if (IS_GEN(dev_priv, 11) &&
 			    level == 1 && wm->wm[0].plane_en) {
-				wm->wm[level].plane_res_b = wm->wm[0].plane_res_b;
-				wm->wm[level].plane_res_l = wm->wm[0].plane_res_l;
-				wm->wm[level].ignore_lines = wm->wm[0].ignore_lines;
+				wm_level = skl_plane_wm_level(plane, crtc_state,
+							      0, false);
+				wm->wm[level].plane_res_b =
+					wm_level->plane_res_b;
+				wm->wm[level].plane_res_l =
+					wm_level->plane_res_l;
+				wm->wm[level].ignore_lines =
+					wm_level->ignore_lines;
 			}
 		}
 	}
@@ -4654,12 +4888,12 @@ static bool skl_wm_has_lines(struct drm_i915_private *dev_priv, int level)
 
 static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 				 int level,
+				 u32 latency,
 				 const struct skl_wm_params *wp,
 				 const struct skl_wm_level *result_prev,
 				 struct skl_wm_level *result /* out */)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
-	u32 latency = dev_priv->wm.skl_latency[level];
 	uint_fixed_16_16_t method1, method2;
 	uint_fixed_16_16_t selected_result;
 	u32 res_blocks, res_lines, min_ddb_alloc = 0;
@@ -4780,20 +5014,45 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 static void
 skl_compute_wm_levels(const struct intel_crtc_state *crtc_state,
 		      const struct skl_wm_params *wm_params,
-		      struct skl_wm_level *levels)
+		      struct skl_plane_wm *plane_wm,
+		      bool yuv)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
 	int level, max_level = ilk_wm_max_level(dev_priv);
+	/*
+	 * Check which kind of plane it is and, based on that, calculate
+	 * the corresponding WM levels.
+	 */
+	struct skl_wm_level *levels = yuv ? plane_wm->uv_wm : plane_wm->wm;
 	struct skl_wm_level *result_prev = &levels[0];
 
 	for (level = 0; level <= max_level; level++) {
 		struct skl_wm_level *result = &levels[level];
+		u32 latency = dev_priv->wm.skl_latency[level];
 
-		skl_compute_plane_wm(crtc_state, level, wm_params,
-				     result_prev, result);
+		skl_compute_plane_wm(crtc_state, level, latency,
+				     wm_params, result_prev, result);
 
 		result_prev = result;
 	}
+	/*
+	 * For Gen12 we also need to consider
+	 * sagv_block_time when calculating the
+	 * L0 watermark - we will need that when making
+	 * a decision whether to enable SAGV or not.
+	 * For older gens we agreed to copy the L0 value for
+	 * compatibility.
+	 */
+	if ((INTEL_GEN(dev_priv) >= 12)) {
+		u32 latency = dev_priv->wm.skl_latency[0];
+
+		latency += dev_priv->sagv_block_time_us;
+		skl_compute_plane_wm(crtc_state, 0, latency,
+		     wm_params, &levels[0],
+		    &plane_wm->sagv_wm0);
+	} else
+		memcpy(&plane_wm->sagv_wm0, &levels[0],
+			sizeof(struct skl_wm_level));
 }
 
 static u32
@@ -4886,7 +5145,7 @@ static int skl_build_plane_wm_single(struct intel_crtc_state *crtc_state,
 	if (ret)
 		return ret;
 
-	skl_compute_wm_levels(crtc_state, &wm_params, wm->wm);
+	skl_compute_wm_levels(crtc_state, &wm_params, wm, false);
 	skl_compute_transition_wm(crtc_state, &wm_params, wm);
 
 	return 0;
@@ -4908,7 +5167,7 @@ static int skl_build_plane_wm_uv(struct intel_crtc_state *crtc_state,
 	if (ret)
 		return ret;
 
-	skl_compute_wm_levels(crtc_state, &wm_params, wm->uv_wm);
+	skl_compute_wm_levels(crtc_state, &wm_params, wm, true);
 
 	return 0;
 }
@@ -5045,10 +5304,13 @@ void skl_write_plane_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
 	const struct skl_ddb_entry *ddb_uv =
 		&crtc_state->wm.skl.plane_ddb_uv[plane_id];
+	const struct skl_wm_level *wm_level;
 
 	for (level = 0; level <= max_level; level++) {
+		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
 		skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
 			   &wm->trans_wm);
@@ -5079,10 +5341,13 @@ void skl_write_cursor_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.optimal.planes[plane_id];
 	const struct skl_ddb_entry *ddb =
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
+	const struct skl_wm_level *wm_level;
 
 	for (level = 0; level <= max_level; level++) {
+		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
 		skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe), &wm->trans_wm);
 
@@ -5456,18 +5721,68 @@ static int skl_wm_add_affected_planes(struct intel_atomic_state *state,
 	return 0;
 }
 
+static void tgl_compute_sagv_mask(struct intel_atomic_state *state)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *new_crtc_state;
+	struct intel_crtc_state *old_crtc_state;
+	struct skl_ddb_allocation *ddb = &state->wm_results.ddb;
+	int ret;
+	int i;
+	struct intel_plane *plane;
+
+	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
+					    new_crtc_state, i) {
+		int pipe_bit = BIT(crtc->pipe);
+		bool skip = true;
+
+		/*
+		 * If we have already set this mask once for this state,
+		 * there is no need to waste CPU cycles doing it again.
+		 */
+		for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
+			enum plane_id plane_id = plane->id;
+
+			if (!skl_plane_wm_equals(dev_priv,
+				&old_crtc_state->wm.skl.optimal.planes[plane_id],
+				&new_crtc_state->wm.skl.optimal.planes[plane_id])) {
+				skip = false;
+				break;
+			}
+		}
+
+		/*
+		 * Check if the wm levels are actually the same as in the
+		 * previous state, which means we can skip this long check
+		 * and just copy the corresponding bit from the previous state.
+		 */
+		if (skip)
+			continue;
+
+		ret = tgl_check_pipe_fits_sagv_wm(new_crtc_state, ddb);
+		if (!ret)
+			state->crtc_sagv_mask |= pipe_bit;
+		else
+			state->crtc_sagv_mask &= ~pipe_bit;
+	}
+}
+
 static int
 skl_compute_wm(struct intel_atomic_state *state)
 {
 	struct intel_crtc *crtc;
 	struct intel_crtc_state *new_crtc_state;
 	struct intel_crtc_state *old_crtc_state;
-	struct skl_ddb_values *results = &state->wm_results;
 	int ret, i;
+	struct skl_ddb_values *results = &state->wm_results;
 
 	/* Clear all dirty flags */
 	results->dirty_pipes = 0;
 
+	/* No SAGV until we check if it's possible */
+	state->crtc_sagv_mask = 0;
+
 	ret = skl_ddb_add_affected_pipes(state);
 	if (ret)
 		return ret;
@@ -5647,6 +5962,9 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 				val = I915_READ(CUR_WM(pipe, level));
 
 			skl_wm_level_from_reg_val(val, &wm->wm[level]);
+			if (level == 0)
+				memcpy(&wm->sagv_wm0, &wm->wm[level],
+					sizeof(struct skl_wm_level));
 		}
 
 		if (plane_id != PLANE_CURSOR)
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index b579c724b915..53275860731a 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -43,6 +43,7 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 void g4x_wm_sanitize(struct drm_i915_private *dev_priv);
 void vlv_wm_sanitize(struct drm_i915_private *dev_priv);
 bool intel_can_enable_sagv(struct intel_atomic_state *state);
+bool intel_has_sagv(struct drm_i915_private *dev_priv);
 int intel_enable_sagv(struct drm_i915_private *dev_priv);
 int intel_disable_sagv(struct drm_i915_private *dev_priv);
 bool skl_wm_level_equals(const struct skl_wm_level *l1,
-- 
2.17.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [PATCH v12 2/3] drm/i915: Restrict qgv points which don't have enough bandwidth.
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv Stanislav Lisovskiy
@ 2019-12-12 12:40 ` Stanislav Lisovskiy
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 3/3] drm/i915: Enable SAGV support for Gen12 Stanislav Lisovskiy
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Stanislav Lisovskiy @ 2019-12-12 12:40 UTC (permalink / raw)
  To: intel-gfx

According to BSpec 53998, we should try to
restrict QGV points which can't provide
enough bandwidth for the desired display configuration.

Currently we just compare against all of
them and take the minimum (worst case).
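
The approach in this patch, condensed (an illustrative sketch; the
loop mirrors intel_bw_atomic_check() in the diff below, with error
handling and the SAGV special cases omitted):

	u32 allowed_points = 0;
	unsigned int max_bw = 0, max_bw_point = 0;

	for (i = 0; i < num_qgv_points; i++) {
		unsigned int max_data_rate =
			icl_max_bw(dev_priv, num_active_planes, i);

		/* Remember the highest-bandwidth point as the fallback
		 * used when SAGV has to stay disabled.
		 */
		if (max_data_rate > max_bw) {
			max_bw_point = i;
			max_bw = max_data_rate;
		}

		/* Keep every point that can feed the required data rate. */
		if (max_data_rate >= data_rate)
			allowed_points |= BIT(i);
	}

	/* PCode is given the complement: the points to mask. */
	state->qgv_points_mask = ~allowed_points & mask;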

v2: Fixed wrong PCode reply mask, removed hardcoded
    values.

v3: Forbid simultaneous legacy SAGV PCode requests and
    restricting of qgv points. Moved the actual restriction
    to the commit function and added serialization (thanks to
    Ville) to prevent commits being applied out of order in the
    case of nonblocking and/or nomodeset commits (see the
    ordering sketch after this changelog).

v4:
    - Minor code refactoring, fixed a few typos (thanks to James Ausmus)
    - Changed the naming of the qgv point
      masking/unmasking functions (James Ausmus).
    - Simplified the masking/unmasking operation itself,
      as we don't need to mask only a single point per request (James Ausmus)
    - Reject the configuration and stick to the highest bandwidth
      point if SAGV can't be enabled (BSpec)

v5:
    - Added new mailbox reply codes, which seem to happen during boot
      time for TGL and indicate that the QGV setting is not yet available.

v6:
    - Increase number of supported QGV points to be in sync with BSpec.

v7: - Rebased and resolved a conflict to fix a build failure.
    - Fixed NUM_QGV_POINTS to 8 and moved it to a header file (James Ausmus)

v8: - Don't report an error if we can't restrict qgv points, as SAGV
      can be disabled by the BIOS, which is completely legal, so don't
      make CI panic. Instead, if we detect that only 1 QGV point is
      accessible, just analyze whether we can fit the required
      bandwidth, without restricting anything.

v9: - Fix wrong QGV transition if we have 0 planes and no SAGV
      simultaneously.

v10: - Fix CDCLK corruption caused by the global state getting serialized
       without a modeset, which resulted in a non-calculated cdclk being
       copied to dev_priv (thanks to Ville for the hint).

v11: - Remove unneeded headers and spaces (Matthew Roper)
     - Remove the unneeded intel_qgv_info qi struct from the bw check and
       zero out the needed one (Matthew Roper)
     - Changed the QGV error message to have a clearer meaning (Matthew Roper)
     - Use state->modeset_set instead of any_ms (Matthew Roper)
     - Moved NUM_SAGV_POINTS from i915_reg.h to i915_drv.h where it's used
     - Keep using crtc_state->hw.active instead of .enable (Matthew Roper)
     - Moved unrelated changes to another patch (using latency as a parameter
       for plane wm calculation, moved to the SAGV refactoring patch)

v12: - Fix a rebase conflict with my own temporary SAGV/QGV fix.
     - Remove the unnecessary check for the mask being zero when unmasking
       qgv points, as that is completely legal (Matt Roper)
     - Check if we are setting the same mask as is already set
       in hardware, to prevent an error from PCode.
     - Fix the error messages when restricting/unrestricting qgv points
       to say "mask/unmask", which sounds more accurate (Matt Roper)
     - Move the sagv status setting to icl_get_bw_info from the atomic
       check, as this should be calculated only once (Matt Roper)
     - Edited the comments for the case when we can't enable SAGV and
       use only 1 QGV point with the highest bandwidth, to make them more
       understandable (Matt Roper)
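
The resulting commit-time ordering, condensed from the
intel_atomic_commit_tail() hunk below (Gen11+ only; older gens keep
using intel_enable_sagv()/intel_disable_sagv()):

	/* Mask the union of the old and new points before touching the
	 * configuration; BSpec forbids masking and unmasking at once.
	 */
	if (INTEL_GEN(dev_priv) >= 11)
		intel_qgv_points_mask(state);   /* old_mask | new_mask */

	/* ... modeset disables, plane updates, modeset enables ... */

	/* Unmask down to the intersection once the update is done. */
	if (INTEL_GEN(dev_priv) >= 11)
		intel_qgv_points_unmask(state); /* old_mask & new_mask */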

Reviewed-by: James Ausmus <james.ausmus@intel.com>
Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Cc: Ville Syrjälä <ville.syrjala@intel.com>
Cc: James Ausmus <james.ausmus@intel.com>
---
 drivers/gpu/drm/i915/display/intel_bw.c       | 144 +++++++++++++-----
 drivers/gpu/drm/i915/display/intel_bw.h       |   2 +
 drivers/gpu/drm/i915/display/intel_display.c  |  86 ++++++++++-
 .../drm/i915/display/intel_display_types.h    |   3 +
 drivers/gpu/drm/i915/i915_drv.h               |   5 +
 drivers/gpu/drm/i915/i915_reg.h               |   5 +
 drivers/gpu/drm/i915/intel_sideband.c         |  27 +++-
 7 files changed, 232 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
index dcb66a33be9b..95d8d7dfa769 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_bw.c
@@ -8,6 +8,9 @@
 #include "intel_bw.h"
 #include "intel_display_types.h"
 #include "intel_sideband.h"
+#include "intel_atomic.h"
+#include "intel_pm.h"
+
 
 /* Parameters for Qclk Geyserville (QGV) */
 struct intel_qgv_point {
@@ -113,6 +116,26 @@ static int icl_pcode_read_qgv_point_info(struct drm_i915_private *dev_priv,
 	return 0;
 }
 
+int icl_pcode_restrict_qgv_points(struct drm_i915_private *dev_priv,
+				  u32 points_mask)
+{
+	int ret;
+
+	/* bspec says to keep retrying for at least 1 ms */
+	ret = skl_pcode_request(dev_priv, ICL_PCODE_SAGV_DE_MEM_SS_CONFIG,
+				points_mask,
+				GEN11_PCODE_POINTS_RESTRICTED_MASK,
+				GEN11_PCODE_POINTS_RESTRICTED,
+				1);
+
+	if (ret < 0) {
+		DRM_ERROR("Failed to disable qgv points (%d)\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
 static int icl_get_qgv_points(struct drm_i915_private *dev_priv,
 			      struct intel_qgv_info *qi)
 {
@@ -236,6 +259,16 @@ static int icl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
 			break;
 	}
 
+	/*
+	 * If SAGV is disabled in the BIOS, we always get only 1
+	 * QGV point, and we can't send PCode commands to restrict it,
+	 * as that would fail and would be pointless anyway.
+	 */
+	if (qi.num_points == 1)
+		dev_priv->sagv_status = I915_SAGV_NOT_CONTROLLED;
+	else
+		dev_priv->sagv_status = I915_SAGV_ENABLED;
+
 	return 0;
 }
 
@@ -273,34 +306,6 @@ void intel_bw_init_hw(struct drm_i915_private *dev_priv)
 		icl_get_bw_info(dev_priv, &icl_sa_info);
 }
 
-static unsigned int intel_max_data_rate(struct drm_i915_private *dev_priv,
-					int num_planes)
-{
-	if (INTEL_GEN(dev_priv) >= 11) {
-		/*
-		 * Any bw group has same amount of QGV points
-		 */
-		const struct intel_bw_info *bi =
-			&dev_priv->max_bw[0];
-		unsigned int min_bw = UINT_MAX;
-		int i;
-
-		/*
-		 * FIXME with SAGV disabled maybe we can assume
-		 * point 1 will always be used? Seems to match
-		 * the behaviour observed in the wild.
-		 */
-		for (i = 0; i < bi->num_qgv_points; i++) {
-			unsigned int bw = icl_max_bw(dev_priv, num_planes, i);
-
-			min_bw = min(bw, min_bw);
-		}
-		return min_bw;
-	} else {
-		return UINT_MAX;
-	}
-}
-
 static unsigned int intel_bw_crtc_num_active_planes(const struct intel_crtc_state *crtc_state)
 {
 	/*
@@ -392,7 +397,11 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
 	unsigned int data_rate, max_data_rate;
 	unsigned int num_active_planes;
 	struct intel_crtc *crtc;
-	int i;
+	int i, ret;
+	u32 allowed_points = 0;
+	unsigned int max_bw_point = 0, max_bw = 0;
+	unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points;
+	u32 mask = (1 << num_qgv_points) - 1;
 
 	/* FIXME earlier gens need some checks too */
 	if (INTEL_GEN(dev_priv) < 11)
@@ -436,16 +445,83 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
 	data_rate = intel_bw_data_rate(dev_priv, bw_state);
 	num_active_planes = intel_bw_num_active_planes(dev_priv, bw_state);
 
-	max_data_rate = intel_max_data_rate(dev_priv, num_active_planes);
-
 	data_rate = DIV_ROUND_UP(data_rate, 1000);
 
-	if (data_rate > max_data_rate) {
-		DRM_DEBUG_KMS("Bandwidth %u MB/s exceeds max available %d MB/s (%d active planes)\n",
-			      data_rate, max_data_rate, num_active_planes);
+	for (i = 0; i < num_qgv_points; i++) {
+		max_data_rate = icl_max_bw(dev_priv, num_active_planes, i);
+		/*
+		 * We need to know which qgv point gives us the
+		 * maximum bandwidth in order to disable SAGV
+		 * if we find that we exceed the SAGV block time
+		 * with the watermarks. By that moment we already
+		 * have those, as they are calculated earlier in
+		 * intel_atomic_check.
+		 */
+		if (max_data_rate > max_bw) {
+			max_bw_point = i;
+			max_bw = max_data_rate;
+		}
+		if (max_data_rate >= data_rate)
+			allowed_points |= BIT(i);
+		DRM_DEBUG_KMS("QGV point %d: max bw %d required %d\n",
+			      i, max_data_rate, data_rate);
+	}
+
+	/*
+	 * BSpec states that we should always have at least one allowed point
+	 * left, so if we don't, simply reject the configuration for obvious
+	 * reasons.
+	 */
+	if (allowed_points == 0) {
+		DRM_DEBUG_KMS("No QGV points provide sufficient memory"
+			      " bandwidth for display configuration.\n");
 		return -EINVAL;
 	}
 
+	/*
+	 * Leave only the single point with the highest bandwidth if
+	 * we can't enable SAGV, due to the increased memory latency it may
+	 * cause.
+	 */
+	if (!intel_can_enable_sagv(state)) {
+		/*
+		 * This is a special case: 0 planes with SAGV disabled
+		 * means that we should keep the QGV point with the
+		 * highest bandwidth; however, the algorithm returns a wrong
+		 * result for the 0 planes and 0 data rate case, so just stick
+		 * to the last config then. Otherwise use the QGV point with
+		 * the highest BW according to BSpec.
+		 */
+		if (!data_rate && !num_active_planes) {
+			DRM_DEBUG_KMS("No SAGV, using old QGV mask\n");
+			allowed_points = (~dev_priv->qgv_points_mask) & mask;
+		} else {
+			allowed_points = 1 << max_bw_point;
+			DRM_DEBUG_KMS("No SAGV, using single QGV point %d\n",
+				      max_bw_point);
+		}
+	}
+	/*
+	 * We store the ones which need to be masked as that is what PCode
+	 * actually accepts as a parameter.
+	 */
+	state->qgv_points_mask = (~allowed_points) & mask;
+
+	DRM_DEBUG_KMS("New state %p qgv mask %x\n",
+		      state, state->qgv_points_mask);
+
+	/*
+	 * If the actual mask has changed, we need to make sure that
+	 * the commits are serialized (in case this is a nomodeset, nonblocking)
+	 */
+	if (state->qgv_points_mask != dev_priv->qgv_points_mask) {
+		ret = intel_atomic_serialize_global_state(state);
+		if (ret) {
+			DRM_DEBUG_KMS("Could not serialize global state\n");
+			return ret;
+		}
+	}
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/i915/display/intel_bw.h b/drivers/gpu/drm/i915/display/intel_bw.h
index 9db10af012f4..66bf9bc10b73 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.h
+++ b/drivers/gpu/drm/i915/display/intel_bw.h
@@ -28,5 +28,7 @@ int intel_bw_init(struct drm_i915_private *dev_priv);
 int intel_bw_atomic_check(struct intel_atomic_state *state);
 void intel_bw_crtc_update(struct intel_bw_state *bw_state,
 			  const struct intel_crtc_state *crtc_state);
+int icl_pcode_restrict_qgv_points(struct drm_i915_private *dev_priv,
+				  u32 points_mask);
 
 #endif /* __INTEL_BW_H__ */
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 5758932f3312..de3da4c063a8 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14786,6 +14786,75 @@ static void intel_atomic_cleanup_work(struct work_struct *work)
 	intel_atomic_helper_free_state(i915);
 }
 
+static void intel_qgv_points_mask(struct intel_atomic_state *state)
+{
+	struct drm_device *dev = state->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	int ret;
+	u32 new_mask = dev_priv->qgv_points_mask | state->qgv_points_mask;
+	unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points;
+	unsigned int mask = (1 << num_qgv_points) - 1;
+
+	/*
+	 * As we don't know the initial hardware state during the initial
+	 * commit, we should not do anything until we actually figure out
+	 * which qgv points to mask.
+	 */
+	if (!new_mask)
+		return;
+
+	WARN_ON(new_mask == mask);
+
+	/*
+	 * Just return if we can't control SAGV or don't have it.
+	 */
+	if (!intel_has_sagv(dev_priv))
+		return;
+
+	/*
+	 * Restrict required qgv points before updating the configuration.
+	 * According to BSpec we can't mask and unmask qgv points at the same
+	 * time. Also masking should be done before updating the configuration
+	 * and unmasking afterwards.
+	 */
+	ret = icl_pcode_restrict_qgv_points(dev_priv, new_mask);
+	if (ret < 0)
+		DRM_DEBUG_KMS("Could not mask required qgv points(%d)\n",
+			      ret);
+	else
+		dev_priv->qgv_points_mask = new_mask;
+}
+
+static void intel_qgv_points_unmask(struct intel_atomic_state *state)
+{
+	struct drm_device *dev = state->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	int ret;
+	u32 new_mask = dev_priv->qgv_points_mask & state->qgv_points_mask;
+
+	/*
+	 * Just return if we can't control SAGV or don't have it.
+	 */
+	if (!intel_has_sagv(dev_priv))
+		return;
+
+	if (new_mask == dev_priv->qgv_points_mask)
+		return;
+
+	/*
+	 * Allow required qgv points after updating the configuration.
+	 * According to BSpec we can't mask and unmask qgv points at the same
+	 * time. Also masking should be done before updating the configuration
+	 * and unmasking afterwards.
+	 */
+	ret = icl_pcode_restrict_qgv_points(dev_priv, new_mask);
+	if (ret < 0)
+		DRM_DEBUG_KMS("Could not unmask required qgv points(%d)\n",
+			      ret);
+	else
+		dev_priv->qgv_points_mask = new_mask;
+}
+
 static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 {
 	struct drm_device *dev = state->base.dev;
@@ -14813,6 +14882,9 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 		}
 	}
 
+	if ((INTEL_GEN(dev_priv) >= 11))
+		intel_qgv_points_mask(state);
+
 	intel_commit_modeset_disables(state);
 
 	/* FIXME: Eventually get rid of our crtc->config pointer */
@@ -14831,8 +14903,9 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 		 * SKL workaround: bspec recommends we disable the SAGV when we
 		 * have more then one pipe enabled
 		 */
-		if (!intel_can_enable_sagv(state))
-			intel_disable_sagv(dev_priv);
+		if (INTEL_GEN(dev_priv) < 11)
+			if (!intel_can_enable_sagv(state))
+				intel_disable_sagv(dev_priv);
 
 		intel_modeset_verify_disabled(dev_priv, state);
 	}
@@ -14913,8 +14986,11 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 	if (state->modeset)
 		intel_verify_planes(state);
 
-	if (state->modeset && intel_can_enable_sagv(state))
-		intel_enable_sagv(dev_priv);
+	if (INTEL_GEN(dev_priv) < 11) {
+		if (state->modeset && intel_can_enable_sagv(state))
+			intel_enable_sagv(dev_priv);
+	} else
+		intel_qgv_points_unmask(state);
 
 	drm_atomic_helper_commit_hw_done(&state->base);
 
@@ -15061,7 +15137,7 @@ static int intel_atomic_commit(struct drm_device *dev,
 	intel_shared_dpll_swap_state(state);
 	intel_atomic_track_fbs(state);
 
-	if (state->global_state_changed) {
+	if (state->global_state_changed && state->modeset) {
 		assert_global_state_locked(dev_priv);
 
 		memcpy(dev_priv->min_cdclk, state->min_cdclk,
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 5301e1042b40..e1ac7c01bbda 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -528,6 +528,9 @@ struct intel_atomic_state {
 	struct i915_sw_fence commit_ready;
 
 	struct llist_node freed;
+
+	/* Gen11+ only */
+	u32 qgv_points_mask;
 };
 
 struct intel_plane_state {
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d2c16e1a96f2..32832209d4f3 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -853,6 +853,9 @@ enum intel_pipe_crc_source {
 	INTEL_PIPE_CRC_SOURCE_MAX,
 };
 
+/* BSpec precisely defines this */
+#define NUM_SAGV_POINTS 8
+
 #define INTEL_PIPE_CRC_ENTRIES_NR	128
 struct intel_pipe_crc {
 	spinlock_t lock;
@@ -1247,6 +1250,8 @@ struct drm_i915_private {
 		u8 num_planes;
 	} max_bw[6];
 
+	u32 qgv_points_mask;
+
 	struct drm_private_obj bw_obj;
 
 	struct intel_runtime_pm runtime_pm;
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 1a6376a97d48..3af43cf2d839 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -8991,6 +8991,8 @@ enum {
 #define     GEN6_PCODE_UNIMPLEMENTED_CMD	0xFF
 #define     GEN7_PCODE_TIMEOUT			0x2
 #define     GEN7_PCODE_ILLEGAL_DATA		0x3
+#define     GEN11_PCODE_MAIL_BOX_LOCKED		0x6
+#define     GEN11_PCODE_REJECTED		0x11
 #define     GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE 0x10
 #define   GEN6_PCODE_WRITE_RC6VIDS		0x4
 #define   GEN6_PCODE_READ_RC6VIDS		0x5
@@ -9012,6 +9014,7 @@ enum {
 #define   ICL_PCODE_MEM_SUBSYSYSTEM_INFO	0xd
 #define     ICL_PCODE_MEM_SS_READ_GLOBAL_INFO	(0x0 << 8)
 #define     ICL_PCODE_MEM_SS_READ_QGV_POINT_INFO(point)	(((point) << 16) | (0x1 << 8))
+#define   ICL_PCODE_SAGV_DE_MEM_SS_CONFIG	0xe
 #define   GEN6_PCODE_READ_D_COMP		0x10
 #define   GEN6_PCODE_WRITE_D_COMP		0x11
 #define   HSW_PCODE_DE_WRITE_FREQ_REQ		0x17
@@ -9024,6 +9027,8 @@ enum {
 #define     GEN9_SAGV_IS_DISABLED		0x1
 #define     GEN9_SAGV_ENABLE			0x3
 #define GEN12_PCODE_READ_SAGV_BLOCK_TIME_US	0x23
+#define GEN11_PCODE_POINTS_RESTRICTED		0x0
+#define GEN11_PCODE_POINTS_RESTRICTED_MASK	0x1
 #define GEN6_PCODE_DATA				_MMIO(0x138128)
 #define   GEN6_PCODE_FREQ_IA_RATIO_SHIFT	8
 #define   GEN6_PCODE_FREQ_RING_RATIO_SHIFT	16
diff --git a/drivers/gpu/drm/i915/intel_sideband.c b/drivers/gpu/drm/i915/intel_sideband.c
index e06b35b844a0..ff9dbed094d8 100644
--- a/drivers/gpu/drm/i915/intel_sideband.c
+++ b/drivers/gpu/drm/i915/intel_sideband.c
@@ -371,6 +371,29 @@ static inline int gen7_check_mailbox_status(u32 mbox)
 	}
 }
 
+static inline int gen11_check_mailbox_status(u32 mbox)
+{
+	switch (mbox & GEN6_PCODE_ERROR_MASK) {
+	case GEN6_PCODE_SUCCESS:
+		return 0;
+	case GEN6_PCODE_ILLEGAL_CMD:
+		return -ENXIO;
+	case GEN7_PCODE_TIMEOUT:
+		return -ETIMEDOUT;
+	case GEN7_PCODE_ILLEGAL_DATA:
+		return -EINVAL;
+	case GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE:
+		return -EOVERFLOW;
+	case GEN11_PCODE_MAIL_BOX_LOCKED:
+		return -EAGAIN;
+	case GEN11_PCODE_REJECTED:
+		return -EACCES;
+	default:
+		MISSING_CASE(mbox & GEN6_PCODE_ERROR_MASK);
+		return 0;
+	}
+}
+
 static int __sandybridge_pcode_rw(struct drm_i915_private *i915,
 				  u32 mbox, u32 *val, u32 *val1,
 				  int fast_timeout_us,
@@ -408,7 +431,9 @@ static int __sandybridge_pcode_rw(struct drm_i915_private *i915,
 	if (is_read && val1)
 		*val1 = intel_uncore_read_fw(uncore, GEN6_PCODE_DATA1);
 
-	if (INTEL_GEN(i915) > 6)
+	if (INTEL_GEN(i915) >= 11)
+		return gen11_check_mailbox_status(mbox);
+	else if (INTEL_GEN(i915) > 6)
 		return gen7_check_mailbox_status(mbox);
 	else
 		return gen6_check_mailbox_status(mbox);
-- 
2.17.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] [PATCH v12 3/3] drm/i915: Enable SAGV support for Gen12
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv Stanislav Lisovskiy
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 2/3] drm/i915: Restrict qgv points which don't have enough bandwidth Stanislav Lisovskiy
@ 2019-12-12 12:40 ` Stanislav Lisovskiy
  2019-12-12 17:20 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Refactor Gen11+ SAGV support (rev13) Patchwork
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Stanislav Lisovskiy @ 2019-12-12 12:40 UTC (permalink / raw)
  To: intel-gfx

Flip the switch and enable SAGV support
for Gen12 also.

Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index c3a8b3a8afb0..cf323e0781a8 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3633,10 +3633,6 @@ static bool skl_needs_memory_bw_wa(struct drm_i915_private *dev_priv)
 bool
 intel_has_sagv(struct drm_i915_private *dev_priv)
 {
-	/* HACK! */
-	if (IS_GEN(dev_priv, 12))
-		return false;
-
 	return (IS_GEN9_BC(dev_priv) || INTEL_GEN(dev_priv) >= 10) &&
 		dev_priv->sagv_status != I915_SAGV_NOT_CONTROLLED;
 }
-- 
2.17.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Refactor Gen11+ SAGV support (rev13)
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
                   ` (2 preceding siblings ...)
  2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 3/3] drm/i915: Enable SAGV support for Gen12 Stanislav Lisovskiy
@ 2019-12-12 17:20 ` Patchwork
  2019-12-12 17:23 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2019-12-12 17:20 UTC (permalink / raw)
  To: Stanislav Lisovskiy; +Cc: intel-gfx

== Series Details ==

Series: Refactor Gen11+ SAGV support (rev13)
URL   : https://patchwork.freedesktop.org/series/68028/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
38b8ca097735 drm/i915: Refactor intel_can_enable_sagv
-:208: WARNING:BRACES: braces {} are not necessary for single statement blocks
#208: FILE: drivers/gpu/drm/i915/intel_pm.c:3818:
+		if (latency < dev_priv->sagv_block_time_us) {
+			return;
+		}

-:229: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#229: FILE: drivers/gpu/drm/i915/intel_pm.c:3839:
+	for_each_new_intel_crtc_in_state(state, crtc,
+					     new_crtc_state, i) {

-:431: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#431: FILE: drivers/gpu/drm/i915/intel_pm.c:4430:
+skl_plane_wm_level(struct intel_plane *plane,
+		const struct intel_crtc_state *crtc_state,

-:554: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#554: FILE: drivers/gpu/drm/i915/intel_pm.c:4647:
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+						      level, true);

-:645: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#645: FILE: drivers/gpu/drm/i915/intel_pm.c:5068:
+		skl_compute_plane_wm(crtc_state, 0, latency,
+		     wm_params, &levels[0],

-:647: CHECK:BRACES: Unbalanced braces around else statement
#647: FILE: drivers/gpu/drm/i915/intel_pm.c:5070:
+	} else

-:649: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#649: FILE: drivers/gpu/drm/i915/intel_pm.c:5072:
+		memcpy(&plane_wm->sagv_wm0, &levels[0],
+			sizeof(struct skl_wm_level));

-:729: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#729: FILE: drivers/gpu/drm/i915/intel_pm.c:5765:
+			if (!skl_plane_wm_equals(dev_priv,
+				&old_crtc_state->wm.skl.optimal.planes[plane_id],

-:777: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#777: FILE: drivers/gpu/drm/i915/intel_pm.c:5984:
+				memcpy(&wm->sagv_wm0, &wm->wm[level],
+					sizeof(struct skl_wm_level));

total: 0 errors, 1 warnings, 8 checks, 676 lines checked
e9f893ee788f drm/i915: Restrict qgv points which don't have enough bandwidth.
-:401: CHECK:BRACES: braces {} should be used on all arms of this statement
#401: FILE: drivers/gpu/drm/i915/display/intel_display.c:14914:
+	if (INTEL_GEN(dev_priv) < 11) {
[...]
+	} else
[...]

-:404: CHECK:BRACES: Unbalanced braces around else statement
#404: FILE: drivers/gpu/drm/i915/display/intel_display.c:14917:
+	} else

total: 0 errors, 0 warnings, 2 checks, 396 lines checked
7171528b4b10 drm/i915: Enable SAGV support for Gen12

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] ✗ Fi.CI.SPARSE: warning for Refactor Gen11+ SAGV support (rev13)
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
                   ` (3 preceding siblings ...)
  2019-12-12 17:20 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Refactor Gen11+ SAGV support (rev13) Patchwork
@ 2019-12-12 17:23 ` Patchwork
  2019-12-12 17:43 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2019-12-13 10:32 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  6 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2019-12-12 17:23 UTC (permalink / raw)
  To: Stanislav Lisovskiy; +Cc: intel-gfx

== Series Details ==

Series: Refactor Gen11+ SAGV support (rev13)
URL   : https://patchwork.freedesktop.org/series/68028/
State : warning

== Summary ==

$ dim sparse origin/drm-tip
Sparse version: v0.6.0
Commit: drm/i915: Refactor intel_can_enable_sagv
+drivers/gpu/drm/i915/intel_pm.c:4428:27: warning: symbol 'skl_plane_wm_level' was not declared. Should it be static?

Commit: drm/i915: Restrict qgv points which don't have enough bandwidth.
Okay!

Commit: drm/i915: Enable SAGV support for Gen12
Okay!

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


* [Intel-gfx] ✓ Fi.CI.BAT: success for Refactor Gen11+ SAGV support (rev13)
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
                   ` (4 preceding siblings ...)
  2019-12-12 17:23 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
@ 2019-12-12 17:43 ` Patchwork
  2019-12-13 10:32 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  6 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2019-12-12 17:43 UTC (permalink / raw)
  To: Stanislav Lisovskiy; +Cc: intel-gfx

== Series Details ==

Series: Refactor Gen11+ SAGV support (rev13)
URL   : https://patchwork.freedesktop.org/series/68028/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_7551 -> Patchwork_15721
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/index.html

Known issues
------------

  Here are the changes found in Patchwork_15721 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live_blt:
    - fi-hsw-4770r:       [PASS][1] -> [DMESG-FAIL][2] ([i915#725])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-hsw-4770r/igt@i915_selftest@live_blt.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-hsw-4770r/igt@i915_selftest@live_blt.html

  * igt@i915_selftest@live_gem_contexts:
    - fi-byt-j1900:       [PASS][3] -> [INCOMPLETE][4] ([i915#45])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-byt-j1900/igt@i915_selftest@live_gem_contexts.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-byt-j1900/igt@i915_selftest@live_gem_contexts.html

  
#### Possible fixes ####

  * igt@gem_sync@basic-many-each:
    - {fi-tgl-guc}:       [INCOMPLETE][5] ([i915#707]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-tgl-guc/igt@gem_sync@basic-many-each.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-tgl-guc/igt@gem_sync@basic-many-each.html

  * igt@i915_selftest@live_blt:
    - fi-hsw-4770:        [DMESG-FAIL][7] ([i915#553] / [i915#725]) -> [PASS][8]
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-hsw-4770/igt@i915_selftest@live_blt.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-hsw-4770/igt@i915_selftest@live_blt.html

  * igt@kms_chamelium@hdmi-edid-read:
    - fi-kbl-7500u:       [FAIL][9] ([i915#217]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-kbl-7500u/igt@kms_chamelium@hdmi-edid-read.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-kbl-7500u/igt@kms_chamelium@hdmi-edid-read.html

  
#### Warnings ####

  * igt@gem_exec_suspend@basic-s4-devices:
    - fi-kbl-x1275:       [DMESG-WARN][11] ([fdo#107139] / [i915#62] / [i915#92]) -> [DMESG-WARN][12] ([fdo#107139] / [i915#62] / [i915#92] / [i915#95])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-kbl-x1275/igt@gem_exec_suspend@basic-s4-devices.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-kbl-x1275/igt@gem_exec_suspend@basic-s4-devices.html

  * igt@i915_selftest@live_gem_contexts:
    - fi-hsw-peppy:       [DMESG-FAIL][13] ([i915#722]) -> [INCOMPLETE][14] ([i915#694])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-hsw-peppy/igt@i915_selftest@live_gem_contexts.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-hsw-peppy/igt@i915_selftest@live_gem_contexts.html

  * igt@kms_chamelium@hdmi-hpd-fast:
    - fi-kbl-7500u:       [FAIL][15] ([fdo#111096] / [i915#323]) -> [FAIL][16] ([fdo#111407])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html

  * igt@kms_flip@basic-flip-vs-modeset:
    - fi-kbl-x1275:       [DMESG-WARN][17] ([i915#62] / [i915#92]) -> [DMESG-WARN][18] ([i915#62] / [i915#92] / [i915#95]) +3 similar issues
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-kbl-x1275/igt@kms_flip@basic-flip-vs-modeset.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-kbl-x1275/igt@kms_flip@basic-flip-vs-modeset.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - fi-kbl-x1275:       [DMESG-WARN][19] ([i915#62] / [i915#92] / [i915#95]) -> [DMESG-WARN][20] ([i915#62] / [i915#92]) +2 similar issues
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/fi-kbl-x1275/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/fi-kbl-x1275/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#107139]: https://bugs.freedesktop.org/show_bug.cgi?id=107139
  [fdo#111096]: https://bugs.freedesktop.org/show_bug.cgi?id=111096
  [fdo#111407]: https://bugs.freedesktop.org/show_bug.cgi?id=111407
  [i915#217]: https://gitlab.freedesktop.org/drm/intel/issues/217
  [i915#323]: https://gitlab.freedesktop.org/drm/intel/issues/323
  [i915#45]: https://gitlab.freedesktop.org/drm/intel/issues/45
  [i915#553]: https://gitlab.freedesktop.org/drm/intel/issues/553
  [i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62
  [i915#694]: https://gitlab.freedesktop.org/drm/intel/issues/694
  [i915#707]: https://gitlab.freedesktop.org/drm/intel/issues/707
  [i915#722]: https://gitlab.freedesktop.org/drm/intel/issues/722
  [i915#725]: https://gitlab.freedesktop.org/drm/intel/issues/725
  [i915#92]: https://gitlab.freedesktop.org/drm/intel/issues/92
  [i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95


Participating hosts (52 -> 46)
------------------------------

  Additional (1): fi-gdg-551 
  Missing    (7): fi-icl-1065g7 fi-ilk-m540 fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_7551 -> Patchwork_15721

  CI-20190529: 20190529
  CI_DRM_7551: e60aa4ffc106f910452d28f2ea49ae2ff44d85d5 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5346: 466b0e6cbcbaccff012b484d1fd7676364b37b93 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_15721: 7171528b4b10e691a554e2bc359b2491402b9b26 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

7171528b4b10 drm/i915: Enable SAGV support for Gen12
e9f893ee788f drm/i915: Restrict qgv points which don't have enough bandwidth.
38b8ca097735 drm/i915: Refactor intel_can_enable_sagv

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for Refactor Gen11+ SAGV support (rev13)
  2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
                   ` (5 preceding siblings ...)
  2019-12-12 17:43 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2019-12-13 10:32 ` Patchwork
  6 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2019-12-13 10:32 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

== Series Details ==

Series: Refactor Gen11+ SAGV support (rev13)
URL   : https://patchwork.freedesktop.org/series/68028/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_7551_full -> Patchwork_15721_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_15721_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_isolation@rcs0-s3:
    - shard-apl:          [PASS][1] -> [DMESG-WARN][2] ([i915#180]) +2 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-apl6/igt@gem_ctx_isolation@rcs0-s3.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-apl1/igt@gem_ctx_isolation@rcs0-s3.html

  * igt@gem_ctx_persistence@bcs0-mixed-process:
    - shard-glk:          [PASS][3] -> [FAIL][4] ([i915#679])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-glk5/igt@gem_ctx_persistence@bcs0-mixed-process.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-glk9/igt@gem_ctx_persistence@bcs0-mixed-process.html

  * igt@gem_ctx_persistence@vcs1-persistence:
    - shard-iclb:         [PASS][5] -> [SKIP][6] ([fdo#109276] / [fdo#112080])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@gem_ctx_persistence@vcs1-persistence.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb3/igt@gem_ctx_persistence@vcs1-persistence.html

  * igt@gem_ctx_shared@q-smoketest-vebox:
    - shard-tglb:         [PASS][7] -> [INCOMPLETE][8] ([fdo#111735])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb5/igt@gem_ctx_shared@q-smoketest-vebox.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb3/igt@gem_ctx_shared@q-smoketest-vebox.html

  * igt@gem_eio@suspend:
    - shard-tglb:         [PASS][9] -> [INCOMPLETE][10] ([i915#460])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb6/igt@gem_eio@suspend.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb8/igt@gem_eio@suspend.html

  * igt@gem_exec_async@concurrent-writes-bsd:
    - shard-iclb:         [PASS][11] -> [SKIP][12] ([fdo#112146]) +2 similar issues
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb6/igt@gem_exec_async@concurrent-writes-bsd.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb1/igt@gem_exec_async@concurrent-writes-bsd.html

  * igt@gem_exec_balancer@bonded-slice:
    - shard-kbl:          [PASS][13] -> [FAIL][14] ([i915#800])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl1/igt@gem_exec_balancer@bonded-slice.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl1/igt@gem_exec_balancer@bonded-slice.html
    - shard-iclb:         [PASS][15] -> [FAIL][16] ([i915#800])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb3/igt@gem_exec_balancer@bonded-slice.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb2/igt@gem_exec_balancer@bonded-slice.html

  * igt@gem_exec_balancer@nop:
    - shard-tglb:         [PASS][17] -> [INCOMPLETE][18] ([fdo#111736])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb1/igt@gem_exec_balancer@nop.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb6/igt@gem_exec_balancer@nop.html

  * igt@gem_exec_parse_blt@allowed-single:
    - shard-apl:          [PASS][19] -> [DMESG-WARN][20] ([i915#716])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-apl8/igt@gem_exec_parse_blt@allowed-single.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-apl3/igt@gem_exec_parse_blt@allowed-single.html

  * igt@gem_exec_schedule@fifo-bsd1:
    - shard-iclb:         [PASS][21] -> [SKIP][22] ([fdo#109276]) +6 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@gem_exec_schedule@fifo-bsd1.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb3/igt@gem_exec_schedule@fifo-bsd1.html

  * igt@gem_exec_schedule@preempt-queue-chain-vebox:
    - shard-tglb:         [PASS][23] -> [INCOMPLETE][24] ([fdo#111677])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb2/igt@gem_exec_schedule@preempt-queue-chain-vebox.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb6/igt@gem_exec_schedule@preempt-queue-chain-vebox.html

  * igt@gem_exec_suspend@basic-s0:
    - shard-iclb:         [PASS][25] -> [DMESG-WARN][26] ([fdo#111764])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb8/igt@gem_exec_suspend@basic-s0.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb5/igt@gem_exec_suspend@basic-s0.html

  * igt@gem_persistent_relocs@forked-interruptible-thrashing:
    - shard-iclb:         [PASS][27] -> [FAIL][28] ([i915#520])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb3/igt@gem_persistent_relocs@forked-interruptible-thrashing.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb2/igt@gem_persistent_relocs@forked-interruptible-thrashing.html

  * igt@gem_persistent_relocs@forked-thrashing:
    - shard-snb:          [PASS][29] -> [FAIL][30] ([i915#520])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-snb6/igt@gem_persistent_relocs@forked-thrashing.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-snb5/igt@gem_persistent_relocs@forked-thrashing.html

  * igt@gem_ppgtt@flink-and-close-vma-leak:
    - shard-glk:          [PASS][31] -> [FAIL][32] ([i915#644])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-glk4/igt@gem_ppgtt@flink-and-close-vma-leak.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-glk8/igt@gem_ppgtt@flink-and-close-vma-leak.html
    - shard-skl:          [PASS][33] -> [FAIL][34] ([i915#644])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl1/igt@gem_ppgtt@flink-and-close-vma-leak.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl4/igt@gem_ppgtt@flink-and-close-vma-leak.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy:
    - shard-snb:          [PASS][35] -> [DMESG-WARN][36] ([fdo#111870])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-snb7/igt@gem_userptr_blits@map-fixed-invalidate-busy.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-snb6/igt@gem_userptr_blits@map-fixed-invalidate-busy.html

  * igt@i915_suspend@sysfs-reader:
    - shard-tglb:         [PASS][37] -> [INCOMPLETE][38] ([i915#456] / [i915#460]) +2 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb5/igt@i915_suspend@sysfs-reader.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb8/igt@i915_suspend@sysfs-reader.html

  * igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding:
    - shard-hsw:          [PASS][39] -> [DMESG-WARN][40] ([IGT#6])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-hsw4/igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-hsw6/igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding.html

  * igt@kms_draw_crc@draw-method-rgb565-blt-ytiled:
    - shard-skl:          [PASS][41] -> [INCOMPLETE][42] ([i915#435] / [i915#667])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl1/igt@kms_draw_crc@draw-method-rgb565-blt-ytiled.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl8/igt@kms_draw_crc@draw-method-rgb565-blt-ytiled.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-ytiled:
    - shard-skl:          [PASS][43] -> [FAIL][44] ([i915#52] / [i915#54])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl3/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-ytiled.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl2/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-gtt-ytiled.html

  * igt@kms_flip@flip-vs-expired-vblank:
    - shard-skl:          [PASS][45] -> [FAIL][46] ([i915#79])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl2/igt@kms_flip@flip-vs-expired-vblank.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl2/igt@kms_flip@flip-vs-expired-vblank.html
    - shard-glk:          [PASS][47] -> [FAIL][48] ([i915#79])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-glk3/igt@kms_flip@flip-vs-expired-vblank.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-glk4/igt@kms_flip@flip-vs-expired-vblank.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-kbl:          [PASS][49] -> [DMESG-WARN][50] ([i915#180]) +4 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl6/igt@kms_flip@flip-vs-suspend-interruptible.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl1/igt@kms_flip@flip-vs-suspend-interruptible.html
    - shard-hsw:          [PASS][51] -> [INCOMPLETE][52] ([i915#61])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-hsw7/igt@kms_flip@flip-vs-suspend-interruptible.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-hsw4/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw:
    - shard-tglb:         [PASS][53] -> [FAIL][54] ([i915#49])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb1/igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb6/igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbc-1p-rte:
    - shard-tglb:         [PASS][55] -> [INCOMPLETE][56] ([i915#474] / [i915#667])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb1/igt@kms_frontbuffer_tracking@fbc-1p-rte.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb2/igt@kms_frontbuffer_tracking@fbc-1p-rte.html

  * igt@kms_frontbuffer_tracking@fbc-farfromfence:
    - shard-kbl:          [PASS][57] -> [DMESG-WARN][58] ([i915#62] / [i915#92]) +4 similar issues
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl2/igt@kms_frontbuffer_tracking@fbc-farfromfence.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl1/igt@kms_frontbuffer_tracking@fbc-farfromfence.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-shrfb-pgflip-blt:
    - shard-skl:          [PASS][59] -> [INCOMPLETE][60] ([i915#123] / [i915#667])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-shrfb-pgflip-blt.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl5/igt@kms_frontbuffer_tracking@psr-1p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_plane@pixel-format-pipe-b-planes-source-clamping:
    - shard-kbl:          [PASS][61] -> [INCOMPLETE][62] ([fdo#103665] / [i915#435] / [i915#648] / [i915#667])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl3/igt@kms_plane@pixel-format-pipe-b-planes-source-clamping.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl6/igt@kms_plane@pixel-format-pipe-b-planes-source-clamping.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes:
    - shard-skl:          [PASS][63] -> [INCOMPLETE][64] ([i915#69])
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl7/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl10/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         [PASS][65] -> [SKIP][66] ([fdo#109441]) +2 similar issues
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@kms_psr@psr2_cursor_plane_move.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb8/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@kms_setmode@basic:
    - shard-glk:          [PASS][67] -> [FAIL][68] ([i915#31])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-glk2/igt@kms_setmode@basic.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-glk1/igt@kms_setmode@basic.html

  * igt@perf_pmu@busy-no-semaphores-vcs1:
    - shard-iclb:         [PASS][69] -> [SKIP][70] ([fdo#112080]) +8 similar issues
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@perf_pmu@busy-no-semaphores-vcs1.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb6/igt@perf_pmu@busy-no-semaphores-vcs1.html

  
#### Possible fixes ####

  * igt@gem_ctx_persistence@vcs1-queued:
    - shard-iclb:         [SKIP][71] ([fdo#109276] / [fdo#112080]) -> [PASS][72] +1 similar issue
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb8/igt@gem_ctx_persistence@vcs1-queued.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb4/igt@gem_ctx_persistence@vcs1-queued.html

  * igt@gem_ctx_shared@exec-single-timeline-bsd:
    - shard-iclb:         [SKIP][73] ([fdo#110841]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@gem_ctx_shared@exec-single-timeline-bsd.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb8/igt@gem_ctx_shared@exec-single-timeline-bsd.html

  * igt@gem_exec_parallel@vcs1-fds:
    - shard-iclb:         [SKIP][75] ([fdo#112080]) -> [PASS][76] +7 similar issues
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb3/igt@gem_exec_parallel@vcs1-fds.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb2/igt@gem_exec_parallel@vcs1-fds.html

  * igt@gem_exec_schedule@preempt-queue-bsd1:
    - shard-iclb:         [SKIP][77] ([fdo#109276]) -> [PASS][78] +10 similar issues
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb8/igt@gem_exec_schedule@preempt-queue-bsd1.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb2/igt@gem_exec_schedule@preempt-queue-bsd1.html

  * igt@gem_exec_schedule@preempt-queue-contexts-chain-bsd2:
    - shard-tglb:         [INCOMPLETE][79] ([fdo#111677]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb6/igt@gem_exec_schedule@preempt-queue-contexts-chain-bsd2.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb4/igt@gem_exec_schedule@preempt-queue-contexts-chain-bsd2.html

  * igt@gem_exec_schedule@smoketest-bsd:
    - shard-iclb:         [SKIP][81] ([fdo#112146]) -> [PASS][82]
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@gem_exec_schedule@smoketest-bsd.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb8/igt@gem_exec_schedule@smoketest-bsd.html

  * igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive:
    - shard-snb:          [TIMEOUT][83] ([i915#530]) -> [PASS][84]
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-snb2/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-snb6/igt@gem_persistent_relocs@forked-interruptible-faulting-reloc-thrash-inactive.html

  * igt@gem_persistent_relocs@forked-interruptible-thrashing:
    - shard-tglb:         [FAIL][85] ([i915#520]) -> [PASS][86]
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb5/igt@gem_persistent_relocs@forked-interruptible-thrashing.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb1/igt@gem_persistent_relocs@forked-interruptible-thrashing.html

  * igt@gem_ppgtt@flink-and-close-vma-leak:
    - shard-iclb:         [FAIL][87] ([i915#644]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb8/igt@gem_ppgtt@flink-and-close-vma-leak.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb5/igt@gem_ppgtt@flink-and-close-vma-leak.html

  * igt@gem_tiled_partial_pwrite_pread@writes-after-reads:
    - shard-hsw:          [FAIL][89] ([i915#817]) -> [PASS][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-hsw1/igt@gem_tiled_partial_pwrite_pread@writes-after-reads.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-hsw1/igt@gem_tiled_partial_pwrite_pread@writes-after-reads.html

  * igt@gem_userptr_blits@dmabuf-unsync:
    - shard-snb:          [DMESG-WARN][91] ([fdo#111870]) -> [PASS][92]
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-snb1/igt@gem_userptr_blits@dmabuf-unsync.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-snb4/igt@gem_userptr_blits@dmabuf-unsync.html

  * igt@i915_pm_dc@dc6-dpms:
    - shard-iclb:         [FAIL][93] ([i915#454]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb5/igt@i915_pm_dc@dc6-dpms.html

  * igt@i915_selftest@live_hangcheck:
    - shard-tglb:         [INCOMPLETE][95] ([i915#435]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb2/igt@i915_selftest@live_hangcheck.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb1/igt@i915_selftest@live_hangcheck.html

  * igt@i915_selftest@mock_sanitycheck:
    - shard-kbl:          [DMESG-WARN][97] ([i915#747]) -> [PASS][98]
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl1/igt@i915_selftest@mock_sanitycheck.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl4/igt@i915_selftest@mock_sanitycheck.html
    - shard-skl:          [DMESG-WARN][99] ([i915#747]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl5/igt@i915_selftest@mock_sanitycheck.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl4/igt@i915_selftest@mock_sanitycheck.html
    - shard-snb:          [DMESG-WARN][101] ([i915#747]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-snb2/igt@i915_selftest@mock_sanitycheck.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-snb5/igt@i915_selftest@mock_sanitycheck.html

  * igt@kms_color@pipe-b-ctm-0-25:
    - shard-skl:          [DMESG-WARN][103] ([i915#109]) -> [PASS][104] +1 similar issue
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl3/igt@kms_color@pipe-b-ctm-0-25.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl2/igt@kms_color@pipe-b-ctm-0-25.html

  * igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding:
    - shard-skl:          [FAIL][105] ([i915#54]) -> [PASS][106] +5 similar issues
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl1/igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl2/igt@kms_cursor_crc@pipe-b-cursor-256x256-sliding.html

  * igt@kms_cursor_crc@pipe-c-cursor-suspend:
    - shard-skl:          [INCOMPLETE][107] ([i915#300]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl9/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl6/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
    - shard-kbl:          [DMESG-WARN][109] ([i915#180]) -> [PASS][110] +2 similar issues
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl1/igt@kms_cursor_crc@pipe-c-cursor-suspend.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl4/igt@kms_cursor_crc@pipe-c-cursor-suspend.html

  * igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-ytiled:
    - shard-skl:          [FAIL][111] ([i915#52] / [i915#54]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl3/igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-ytiled.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl2/igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-ytiled.html

  * igt@kms_draw_crc@draw-method-rgb565-render-ytiled:
    - shard-tglb:         [INCOMPLETE][113] ([i915#667]) -> [PASS][114]
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb4/igt@kms_draw_crc@draw-method-rgb565-render-ytiled.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb5/igt@kms_draw_crc@draw-method-rgb565-render-ytiled.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-tglb:         [INCOMPLETE][115] ([i915#456] / [i915#460] / [i915#474]) -> [PASS][116]
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb5/igt@kms_frontbuffer_tracking@fbc-suspend.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb1/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-indfb-draw-mmap-wc:
    - shard-tglb:         [INCOMPLETE][117] ([i915#474] / [i915#667]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb5/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-indfb-draw-mmap-wc.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb1/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscren-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-pgflip-blt:
    - shard-tglb:         [FAIL][119] ([i915#49]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-pgflip-blt.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-blt:
    - shard-tglb:         [INCOMPLETE][121] ([fdo#112393] / [i915#667]) -> [PASS][122]
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb7/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-blt.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb3/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
    - shard-skl:          [INCOMPLETE][123] ([i915#69]) -> [PASS][124]
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl9/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
    - shard-tglb:         [INCOMPLETE][125] ([i915#456] / [i915#460]) -> [PASS][126] +1 similar issue
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb8/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb4/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c:
    - shard-apl:          [DMESG-WARN][127] ([i915#180]) -> [PASS][128]
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-apl4/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-apl4/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-c.html

  * igt@kms_plane@pixel-format-pipe-a-planes-source-clamping:
    - shard-skl:          [INCOMPLETE][129] ([i915#648] / [i915#667]) -> [PASS][130]
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl3/igt@kms_plane@pixel-format-pipe-a-planes-source-clamping.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl1/igt@kms_plane@pixel-format-pipe-a-planes-source-clamping.html

  * igt@kms_psr@psr2_sprite_plane_move:
    - shard-iclb:         [SKIP][131] ([fdo#109441]) -> [PASS][132] +4 similar issues
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb1/igt@kms_psr@psr2_sprite_plane_move.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb2/igt@kms_psr@psr2_sprite_plane_move.html

  * igt@kms_psr@psr2_suspend:
    - shard-tglb:         [SKIP][133] -> [PASS][134]
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-tglb5/igt@kms_psr@psr2_suspend.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-tglb1/igt@kms_psr@psr2_suspend.html

  * igt@kms_setmode@basic:
    - shard-hsw:          [FAIL][135] ([i915#31]) -> [PASS][136]
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-hsw2/igt@kms_setmode@basic.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-hsw4/igt@kms_setmode@basic.html

  
#### Warnings ####

  * igt@gem_tiled_blits@normal:
    - shard-hsw:          [FAIL][137] ([i915#818]) -> [INCOMPLETE][138] ([i915#61])
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-hsw2/igt@gem_tiled_blits@normal.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-hsw2/igt@gem_tiled_blits@normal.html

  * igt@kms_dp_dsc@basic-dsc-enable-edp:
    - shard-iclb:         [DMESG-WARN][139] ([fdo#107724]) -> [SKIP][140] ([fdo#109349])
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-iclb2/igt@kms_dp_dsc@basic-dsc-enable-edp.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-iclb4/igt@kms_dp_dsc@basic-dsc-enable-edp.html

  * igt@kms_plane@pixel-format-pipe-b-planes:
    - shard-skl:          [INCOMPLETE][141] ([fdo#112391] / [i915#648] / [i915#667]) -> [INCOMPLETE][142] ([i915#648])
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-skl3/igt@kms_plane@pixel-format-pipe-b-planes.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-skl2/igt@kms_plane@pixel-format-pipe-b-planes.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
    - shard-kbl:          [DMESG-WARN][143] ([i915#180]) -> [DMESG-WARN][144] ([i915#62] / [i915#92])
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_7551/shard-kbl2/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/shard-kbl1/igt@kms_vblank@pipe-c-ts-continuation-suspend.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [IGT#6]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/6
  [fdo#103665]: https://bugs.freedesktop.org/show_bug.cgi?id=103665
  [fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
  [fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
  [fdo#109349]: https://bugs.freedesktop.org/show_bug.cgi?id=109349
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110841]: https://bugs.freedesktop.org/show_bug.cgi?id=110841
  [fdo#111677]: https://bugs.freedesktop.org/show_bug.cgi?id=111677
  [fdo#111735]: https://bugs.freedesktop.org/show_bug.cgi?id=111735
  [fdo#111736]: https://bugs.freedesktop.org/show_bug.cgi?id=111736
  [fdo#111764]: https://bugs.freedesktop.org/show_bug.cgi?id=111764
  [fdo#111870]: https://bugs.freedesktop.org/show_bug.cgi?id=111870
  [fdo#112080]: https://bugs.freedesktop.org/show_bug.cgi?id=112080
  [fdo#112146]: https://bugs.freedesktop.org/show_bug.cgi?id=112146
  [fdo#112391]: https://bugs.freedesktop.org/show_bug.cgi?id=112391
  [fdo#112393]: https://bugs.freedesktop.org/show_bug.cgi?id=112393
  [i915#109]: https://gitlab.freedesktop.org/drm/intel/issues/109
  [i915#123]: https://gitlab.freedesktop.org/drm/intel/issues/123
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#300]: https://gitlab.freedesktop.org/drm/intel/issues/300
  [i915#31]: https://gitlab.freedesktop.org/drm/intel/issues/31
  [i915#435]: https://gitlab.freedesktop.org/drm/intel/issues/435
  [i915#454]: https://gitlab.freedesktop.org/drm/intel/issues/454
  [i915#456]: https://gitlab.freedesktop.org/drm/intel/issues/456
  [i915#460]: https://gitlab.freedesktop.org/drm/intel/issues/460
  [i915#474]: https://gitlab.freedesktop.org/drm/intel/issues/474
  [i915#49]: https://gitlab.freedesktop.org/drm/intel/issues/49
  [i915#52]: https://gitlab.freedesktop.org/drm/intel/issues/52
  [i915#520]: https://gitlab.freedesktop.org/drm/intel/issues/520
  [i915#530]: https://gitlab.freedesktop.org/drm/intel/issues/530
  [i915#54]: https://gitlab.freedesktop.org/drm/intel/issues/54
  [i915#61]: https://gitlab.freedesktop.org/drm/intel/issues/61
  [i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62
  [i915#644]: https://gitlab.freedesktop.org/drm/intel/issues/644
  [i915#648]: https://gitlab.freedesktop.org/drm/intel/issues/648
  [i915#667]: https://gitlab.freedesktop.org/drm/intel/issues/667
  [i915#677]: https://gitlab.freedesktop.org/drm/intel/issues/677
  [i915#679]: https://gitlab.freedesktop.org/drm/intel/issues/679
  [i915#69]: https://gitlab.freedesktop.org/drm/intel/issues/69
  [i915#716]: https://gitlab.freedesktop.org/drm/intel/issues/716
  [i915#747]: https://gitlab.freedesktop.org/drm/intel/issues/747
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79
  [i915#800]: https://gitlab.freedesktop.org/drm/intel/issues/800
  [i915#817]: https://gitlab.freedesktop.org/drm/intel/issues/817
  [i915#818]: https://gitlab.freedesktop.org/drm/intel/issues/818
  [i915#92]: https://gitlab.freedesktop.org/drm/intel/issues/92


Participating hosts (10 -> 10)
------------------------------

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_15721/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv
  2019-12-13 14:12 ` [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv Stanislav Lisovskiy
@ 2019-12-17 17:04   ` Ville Syrjälä
  0 siblings, 0 replies; 10+ messages in thread
From: Ville Syrjälä @ 2019-12-17 17:04 UTC (permalink / raw)
  To: Stanislav Lisovskiy; +Cc: intel-gfx

On Fri, Dec 13, 2019 at 04:12:29PM +0200, Stanislav Lisovskiy wrote:
> Currently intel_can_enable_sagv function contains
> a mix of workarounds for different platforms
> some of them are not valid for gens >= 11 already,
> so lets split it into separate functions.
> 
> v2:
>     - Rework watermark calculation algorithm to
>       attempt to calculate Level 0 watermark
>       with added sagv block time latency and
>       check if it fits in DBuf in order to
>       determine if SAGV can be enabled already
>       at this stage, just as BSpec 49325 states.
>       if that fails rollback to usual Level 0
>       latency and disable SAGV.
>     - Remove unneeded tabs(James Ausmus)
> 
> v3: Rebased the patch
> 
> v4: - Added back interlaced check for Gen12 and
>       added separate function for TGL SAGV check
>       (thanks to James Ausmus for spotting)
>     - Removed unneeded gen check
>     - Extracted Gen12 SAGV decision making code
>       to a separate function from skl_compute_wm
> 
> v5: - Added SAGV global state to dev_priv, because
>       we need to track all pipes, not only those
>       in atomic state. Each pipe has now correspondent
>       bit mask reflecting, whether it can tolerate
>       SAGV or not(thanks to Ville Syrjala for suggestions).
>     - Now using active flag instead of enable in crc
>       usage check.
> 
> v6: - Fixed rebase conflicts
> 
> v7: - kms_cursor_legacy seems to get broken because of multiple memcpy
>       calls when copying level 0 water marks for enabled SAGV, to
>       fix this now simply using that field right away, without copying,
>       for that introduced a new wm_level accessor which decides which
>       wm_level to return based on SAGV state.
> 
> v8: - Protect crtc_sagv_mask same way as we do for other global state
>       changes: i.e check if changes are needed, then grab all crtc locks
>       to serialize the changes(Ville Syrjälä)
>     - Add crtc_sagv_mask caching in order to avoid needless recalculations
>       (Matthew Roper)
>     - Put back Gen12 SAGV switch in order to get it enabled in separate
>       patch(Matthew Roper)
>     - Rename *_set_sagv_mask to *_compute_sagv_mask(Matthew Roper)
>     - Check if there are no active pipes in intel_can_enable_sagv
>       instead of platform specific functions(Matthew Roper), same
>       for intel_has_sagv check.
> 
> Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Cc: Ville Syrjälä <ville.syrjala@intel.com>
> Cc: James Ausmus <james.ausmus@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  12 +-
>  .../drm/i915/display/intel_display_types.h    |   9 +
>  drivers/gpu/drm/i915/i915_drv.h               |   6 +
>  drivers/gpu/drm/i915/intel_pm.c               | 416 +++++++++++++++---
>  drivers/gpu/drm/i915/intel_pm.h               |   1 +
>  5 files changed, 393 insertions(+), 51 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 0f37f1d2026d..d58c70fbc08e 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -13379,7 +13379,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
>  		/* Watermarks */
>  		for (level = 0; level <= max_level; level++) {
>  			if (skl_wm_level_equals(&hw_plane_wm->wm[level],
> -						&sw_plane_wm->wm[level]))
> +						&sw_plane_wm->wm[level]) ||
> +			   (skl_wm_level_equals(&hw_plane_wm->wm[level],
> +						&sw_plane_wm->sagv_wm0) &&
> +			   (level == 0)))
>  				continue;
>  
>  			DRM_ERROR("mismatch in WM pipe %c plane %d level %d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
> @@ -13431,7 +13434,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
>  		/* Watermarks */
>  		for (level = 0; level <= max_level; level++) {
>  			if (skl_wm_level_equals(&hw_plane_wm->wm[level],
> -						&sw_plane_wm->wm[level]))
> +						&sw_plane_wm->wm[level]) ||
> +			   (skl_wm_level_equals(&hw_plane_wm->wm[level],
> +						&sw_plane_wm->sagv_wm0) &&
> +			   (level == 0)))
>  				continue;
>  
>  			DRM_ERROR("mismatch in WM pipe %c cursor level %d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
> @@ -14808,6 +14814,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
>  			dev_priv->display.optimize_watermarks(state, crtc);
>  	}
>  
> +	dev_priv->crtc_sagv_mask = state->crtc_sagv_mask;
> +
>  	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
>  		intel_post_plane_update(state, crtc);
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 83ea04149b77..5301e1042b40 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -490,6 +490,14 @@ struct intel_atomic_state {
>  	 */
>  	u8 active_pipe_changes;
>  
> +	/*
> +	 * Contains a mask which reflects whether correspondent pipe
> +	 * can tolerate SAGV or not, so that we can make a decision
> +	 * at atomic_commit_tail stage, whether we enable it or not
> +	 * based on global state in dev_priv.
> +	 */
> +	u32 crtc_sagv_mask;

Fits in u8, s/crtc/pipe/ so we don't get confused.
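Something like this maybe (just a sketch, the name is only a suggestion):

        /* mask of pipes that can tolerate SAGV, consumed at commit time */
        u8 pipe_sagv_mask;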

> +
>  	u8 active_pipes;
>  	/* minimum acceptable cdclk for each pipe */
>  	int min_cdclk[I915_MAX_PIPES];
> @@ -670,6 +678,7 @@ struct skl_plane_wm {
>  	struct skl_wm_level wm[8];
>  	struct skl_wm_level uv_wm[8];
>  	struct skl_wm_level trans_wm;
> +	struct skl_wm_level sagv_wm0;
>  	bool is_planar;
>  };
>  
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 0781b6326b8c..b877c42213c4 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1171,6 +1171,12 @@ struct drm_i915_private {
>  
>  	u32 sagv_block_time_us;
>  
> +	/*
> +	 * Contains a bit mask, whether correspondent
> +	 * pipe allows SAGV or not.
> +	 */
> +	u32 crtc_sagv_mask;

ditto.

> +
>  	struct {
>  		/*
>  		 * Raw watermark latency values:
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index ccbbdf4a6aab..d70c33df0bbf 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3647,7 +3647,7 @@ static bool skl_needs_memory_bw_wa(struct drm_i915_private *dev_priv)
>  	return IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv);
>  }
>  
> -static bool
> +bool
>  intel_has_sagv(struct drm_i915_private *dev_priv)
>  {
>  	/* HACK! */
> @@ -3770,7 +3770,7 @@ intel_disable_sagv(struct drm_i915_private *dev_priv)
>  	return 0;
>  }
>  
> -bool intel_can_enable_sagv(struct intel_atomic_state *state)
> +static void skl_compute_sagv_mask(struct intel_atomic_state *state)
>  {
>  	struct drm_device *dev = state->base.dev;
>  	struct drm_i915_private *dev_priv = to_i915(dev);
> @@ -3780,29 +3780,15 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
>  	enum pipe pipe;
>  	int level, latency;
>  
> -	if (!intel_has_sagv(dev_priv))
> -		return false;
> -
> -	/*
> -	 * If there are no active CRTCs, no additional checks need be performed
> -	 */
> -	if (hweight8(state->active_pipes) == 0)
> -		return true;
> -
> -	/*
> -	 * SKL+ workaround: bspec recommends we disable SAGV when we have
> -	 * more then one pipe enabled
> -	 */
> -	if (hweight8(state->active_pipes) > 1)
> -		return false;
> -
>  	/* Since we're now guaranteed to only have one active CRTC... */
>  	pipe = ffs(state->active_pipes) - 1;
>  	crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
>  	crtc_state = to_intel_crtc_state(crtc->base.state);
> +	state->crtc_sagv_mask &= ~BIT(crtc->pipe);
>  
> -	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
> -		return false;
> +	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE) {
> +		return;
> +	}
>  
>  	for_each_intel_plane_on_crtc(dev, crtc, plane) {
>  		struct skl_plane_wm *wm =
> @@ -3830,6 +3816,136 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
>  		 * can't enable SAGV.
>  		 */
>  		if (latency < dev_priv->sagv_block_time_us)
> +			return;
> +	}
> +
> +	state->crtc_sagv_mask |= BIT(crtc->pipe);
> +}

Looks like we can keep this as a pure function and leave it up to the
caller to update the mask. Much easier on the brain to deal with pure
functions.
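E.g. something along these lines (untested sketch, the function name is
made up), and then let the caller flip the bit in crtc_sagv_mask:

static bool skl_crtc_can_enable_sagv(const struct intel_crtc_state *crtc_state)
{
        struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
        struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
        struct intel_plane *plane;

        if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
                return false;

        for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
                const struct skl_plane_wm *wm =
                        &crtc_state->wm.skl.optimal.planes[plane->id];
                int level;

                /* Skip this plane if it's not enabled */
                if (!wm->wm[0].plane_en)
                        continue;

                /* Find the highest enabled wm level for this plane */
                for (level = ilk_wm_max_level(dev_priv);
                     !wm->wm[level].plane_en; --level)
                        ;

                /*
                 * If this plane doesn't enable a wm level with a latency
                 * of at least sagv_block_time_us we can't enable SAGV.
                 */
                if (dev_priv->wm.skl_latency[level] < dev_priv->sagv_block_time_us)
                        return false;
        }

        return true;
}

and in the caller:

        if (skl_crtc_can_enable_sagv(crtc_state))
                state->crtc_sagv_mask |= BIT(crtc->pipe);
        else
                state->crtc_sagv_mask &= ~BIT(crtc->pipe);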

> +
> +static void tgl_compute_sagv_mask(struct intel_atomic_state *state);
> +
> +static void icl_compute_sagv_mask(struct intel_atomic_state *state)
> +{
> +	struct drm_device *dev = state->base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dev);
> +	struct intel_crtc *crtc;
> +	struct intel_crtc_state *new_crtc_state;
> +	int level, latency;
> +	int i;
> +	int plane_id;
> +
> +	for_each_new_intel_crtc_in_state(state, crtc,
> +					 new_crtc_state, i) {
> +		unsigned int flags = crtc->base.state->adjusted_mode.flags;
> +		bool can_sagv;
> +
> +		if (flags & DRM_MODE_FLAG_INTERLACE)
> +			continue;
> +
> +		if (!new_crtc_state->hw.active)
> +			continue;
> +
> +		can_sagv = true;
> +		for_each_plane_id_on_crtc(crtc, plane_id) {
> +			struct skl_plane_wm *wm =
> +				&new_crtc_state->wm.skl.optimal.planes[plane_id];
> +
> +			/* Skip this plane if it's not enabled */
> +			if (!wm->wm[0].plane_en)
> +				continue;
> +
> +			/* Find the highest enabled wm level for this plane */
> +			for (level = ilk_wm_max_level(dev_priv);
> +			     !wm->wm[level].plane_en; --level) {
> +			}
> +
> +			latency = dev_priv->wm.skl_latency[level];
> +
> +			/*
> +			 * If any of the planes on this pipe don't enable
> +			 * wm levels that incur memory latencies higher than
> +			 * sagv_block_time_us we can't enable SAGV.
> +			 */
> +			if (latency < dev_priv->sagv_block_time_us) {
> +				can_sagv = false;
> +				break;
> +			}

Hmm. What's the difference between this and the skl version?

> +		}
> +		if (can_sagv)
> +			state->crtc_sagv_mask |= BIT(crtc->pipe);
> +		else
> +			state->crtc_sagv_mask &= ~BIT(crtc->pipe);
> +	}
> +}
> +
> +bool intel_can_enable_sagv(struct intel_atomic_state *state)

This seems to do double duty: compute the mask, and return whether we
can enable sagv. Those two things should be separate functions IMO.
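Roughly something like this perhaps (untested sketch, the
intel_compute_sagv_mask() name is made up): compute and serialize the
mask somewhere in the wm/ddb computation, and keep intel_can_enable_sagv()
as a pure query with no side effects:

static int intel_compute_sagv_mask(struct intel_atomic_state *state)
{
        struct drm_i915_private *dev_priv = to_i915(state->base.dev);

        state->crtc_sagv_mask = dev_priv->crtc_sagv_mask;

        if (INTEL_GEN(dev_priv) >= 12)
                tgl_compute_sagv_mask(state);
        else if (INTEL_GEN(dev_priv) == 11)
                icl_compute_sagv_mask(state);
        else
                skl_compute_sagv_mask(state);

        /* grab all the crtc locks only if we actually change the mask */
        if (state->crtc_sagv_mask != dev_priv->crtc_sagv_mask)
                return intel_atomic_serialize_global_state(state);

        return 0;
}

bool intel_can_enable_sagv(struct intel_atomic_state *state)
{
        struct drm_i915_private *dev_priv = to_i915(state->base.dev);

        if (!intel_has_sagv(dev_priv))
                return false;

        if (!state->active_pipes)
                return false;

        /* every active pipe must be able to tolerate SAGV */
        return (state->active_pipes & ~state->crtc_sagv_mask) == 0;
}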

> +{
> +	struct drm_device *dev = state->base.dev;
> +	struct drm_i915_private *dev_priv = to_i915(dev);
> +	int ret, i;
> +	struct intel_crtc *crtc;
> +	struct intel_crtc_state *new_crtc_state;
> +
> +	if (!intel_has_sagv(dev_priv))
> +		return false;
> +
> +	/*
> +	 * Check if we had already calculated the mask.
> +	 * if we had - then we already have global state,
> +	 * serialized and thus protected from changes from
> +	 * other commits and able to use cached version here.
> +	 */
> +	if (!state->crtc_sagv_mask) {
> +		/*
> +		 * If there are no active CRTCs, no additional
> +		 * checks need be performed
> +		 */
> +		if (hweight8(state->active_pipes) == 0)
> +			return false;
> +
> +		/*
> +		 * Make sure we always pick global state first,
> +		 * there shouldn't be any issue as we hold only locks
> +		 * to correspondent crtcs in state, however once
> +		 * we detect that we need to change SAGV mask
> +		 * in global state, we will grab all the crtc locks
> +		 * in order to get this serialized, thus other
> +		 * racing commits having other crtc locks, will have
> +		 * to start over again, as stated by Wound-Wait
> +		 * algorithm.
> +		 */
> +		state->crtc_sagv_mask = dev_priv->crtc_sagv_mask;
> +
> +		if (INTEL_GEN(dev_priv) >= 12)
> +			tgl_compute_sagv_mask(state);
> +		else if (INTEL_GEN(dev_priv) == 11)
> +			icl_compute_sagv_mask(state);
> +		else
> +			skl_compute_sagv_mask(state);
> +
> +		/*
> +		 * For SAGV we need to account all the pipes,
> +		 * not only the ones which are in state currently.
> +		 * Grab all locks if we detect that we are actually
> +		 * going to do something.
> +		 */
> +		if (state->crtc_sagv_mask != dev_priv->crtc_sagv_mask) {
> +			ret = intel_atomic_serialize_global_state(state);
> +			if (ret) {
> +				DRM_DEBUG_KMS("Could not serialize global state\n");
> +				return false;
> +			}
> +		}
> +	}
> +
> +	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
> +		u32 mask = BIT(crtc->pipe);
> +		bool state_sagv_masked = (mask & state->crtc_sagv_mask) == 0;
> +
> +		if (!new_crtc_state->hw.active)
> +			continue;
> +
> +		if (state_sagv_masked)
>  			return false;
>  	}
>  
> @@ -3955,6 +4071,7 @@ static int skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
>  				 int color_plane);
>  static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
>  				 int level,
> +				 u32 latency,

Passing in the latency could be split into a separate trivial patch.

>  				 const struct skl_wm_params *wp,
>  				 const struct skl_wm_level *result_prev,
>  				 struct skl_wm_level *result /* out */);
> @@ -3977,7 +4094,10 @@ skl_cursor_allocation(const struct intel_crtc_state *crtc_state,
>  	WARN_ON(ret);
>  
>  	for (level = 0; level <= max_level; level++) {
> -		skl_compute_plane_wm(crtc_state, level, &wp, &wm, &wm);
> +		u32 latency = dev_priv->wm.skl_latency[level];
> +
> +		skl_compute_plane_wm(crtc_state, level, latency, &wp, &wm, &wm);
> +
>  		if (wm.min_ddb_alloc == U16_MAX)
>  			break;
>  
> @@ -4242,6 +4362,98 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
>  	return total_data_rate;
>  }
>  
> +static int
> +tgl_check_pipe_fits_sagv_wm(struct intel_crtc_state *crtc_state,
> +			    struct skl_ddb_allocation *ddb /* out */)
> +{
> +	struct drm_crtc *crtc = crtc_state->uapi.crtc;
> +	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
> +	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> +	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
> +	u16 alloc_size;
> +	u16 total[I915_MAX_PLANES] = {};
> +	u64 total_data_rate;
> +	enum plane_id plane_id;
> +	int num_active;
> +	u64 plane_data_rate[I915_MAX_PLANES] = {};
> +	u32 blocks;
> +
> +	/*
> +	 * No need to check gen here, we call this only for gen12
> +	 */
> +	total_data_rate =
> +		icl_get_total_relative_data_rate(crtc_state,
> +						 plane_data_rate);
> +
> +	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
> +					   total_data_rate,
> +					   ddb, alloc, &num_active);
> +	alloc_size = skl_ddb_entry_size(alloc);
> +	if (alloc_size == 0)
> +		return -ENOSPC;
> +
> +	/* Allocate fixed number of blocks for cursor. */
> +	total[PLANE_CURSOR] = skl_cursor_allocation(crtc_state, num_active);
> +	alloc_size -= total[PLANE_CURSOR];
> +	crtc_state->wm.skl.plane_ddb_y[PLANE_CURSOR].start =
> +		alloc->end - total[PLANE_CURSOR];
> +	crtc_state->wm.skl.plane_ddb_y[PLANE_CURSOR].end = alloc->end;

Why are we recomputing the cursor ddb here?

> +
> +	/*
> +	 * Do check if we can fit L0 + sagv_block_time and
> +	 * disable SAGV if we can't.
> +	 */
> +	blocks = 0;
> +	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +		const struct skl_plane_wm *wm =
> +			&crtc_state->wm.skl.optimal.planes[plane_id];
> +
> +		if (plane_id == PLANE_CURSOR) {
> +			if (WARN_ON(wm->sagv_wm0.min_ddb_alloc >
> +				    total[PLANE_CURSOR])) {
> +				blocks = U32_MAX;
> +				break;
> +			}
> +			continue;
> +		}
> +
> +		blocks += wm->sagv_wm0.min_ddb_alloc;
> +		if (blocks > alloc_size)
> +			return -ENOSPC;
> +	}

Looks like lots of copy paste. Please refactor if you think the
current code is useful for this.

But I don't really see why this is here. AFAICS we should just
include the sagv wm as part of the current ddb allocation loop, iff we
have a chance of enabling sagv. And if we don't have enough room for
the sagv wms we should probably still try to allocate based on the
normal level 0 wm and mark sagv as a no-go.

Not quite sure what we should do with the cursor ddb since we don't
want to change that needlessly. I guess we could just have it also
compute the sagv wm and, if that succeeds, bump the ddb allocation
if needed.
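Ie. roughly like this (sketch only; skl_ddb_alloc_wm_level() is an
invented helper and can_sagv would be whatever tells us SAGV is still a
possibility for this pipe), and if the level 0 pass doesn't fit with the
sagv wm, clear can_sagv and redo just that pass with the normal wm0:

static const struct skl_wm_level *
skl_ddb_alloc_wm_level(const struct skl_plane_wm *wm, int level, bool can_sagv)
{
        /*
         * Size the level 0 allocation for the sagv variant of the wm
         * while SAGV is still on the table, otherwise use the normal one.
         */
        if (level == 0 && can_sagv)
                return &wm->sagv_wm0;

        return &wm->wm[level];
}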

> +	return 0;
> +}
> +
> +static const struct skl_wm_level *
> +skl_plane_wm_level(struct intel_plane *plane,
> +		   const struct intel_crtc_state *crtc_state,
> +		   int level,
> +		   bool yuv)

s/bool yuv/int color_plane/ to match existing code.

> +{
> +	struct drm_atomic_state *state = crtc_state->uapi.state;
> +	enum plane_id plane_id = plane->id;
> +	const struct skl_plane_wm *wm =
> +		&crtc_state->wm.skl.optimal.planes[plane_id];
> +
> +	/*
> +	 * Looks ridicilous but need to check if state is not
> +	 * NULL here as it might be as some cursor plane manipulations
> +	 * seem to happen when no atomic state is actually present,
> +	 * despite crtc_state is allocated. Removing state check
> +	 * from here will result in kernel panic on boot.
> +	 * However we now need to check whether should be use SAGV
> +	 * wm levels here.
> +	 */

Should really find out what is happening instead of papering over it.

> +	if (state) {
> +		struct intel_atomic_state *intel_state =
> +			to_intel_atomic_state(state);
> +		if (intel_can_enable_sagv(intel_state) && !level)
> +			return &wm->sagv_wm0;
> +	}
> +
> +	return yuv ? &wm->uv_wm[level] : &wm->wm[level];
> +}
> +
>  static int
>  skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  		      struct skl_ddb_allocation *ddb /* out */)
> @@ -4256,6 +4468,9 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  	u16 uv_total[I915_MAX_PLANES] = {};
>  	u64 total_data_rate;
>  	enum plane_id plane_id;
> +	struct intel_plane *plane;
> +	const struct skl_wm_level *wm_level;
> +	const struct skl_wm_level *wm_uv_level;

Needlessly wide scope.

>  	int num_active;
>  	u64 plane_data_rate[I915_MAX_PLANES] = {};
>  	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
> @@ -4307,12 +4522,15 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  	 */
>  	for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
>  		blocks = 0;
> -		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> -			const struct skl_plane_wm *wm =
> -				&crtc_state->wm.skl.optimal.planes[plane_id];
> +		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
> +			plane_id = plane->id;
> +			wm_level = skl_plane_wm_level(plane, crtc_state,
> +						      level, false);
> +			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
> +							 level, true);
>  
>  			if (plane_id == PLANE_CURSOR) {
> -				if (WARN_ON(wm->wm[level].min_ddb_alloc >
> +				if (WARN_ON(wm_level->min_ddb_alloc >
>  					    total[PLANE_CURSOR])) {
>  					blocks = U32_MAX;
>  					break;
> @@ -4320,8 +4538,8 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  				continue;
>  			}
>  
> -			blocks += wm->wm[level].min_ddb_alloc;
> -			blocks += wm->uv_wm[level].min_ddb_alloc;
> +			blocks += wm_level->min_ddb_alloc;
> +			blocks += wm_uv_level->min_ddb_alloc;
>  		}
>  
>  		if (blocks <= alloc_size) {
> @@ -4342,12 +4560,16 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  	 * watermark level, plus an extra share of the leftover blocks
>  	 * proportional to its relative data rate.
>  	 */
> -	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> -		const struct skl_plane_wm *wm =
> -			&crtc_state->wm.skl.optimal.planes[plane_id];
> +	for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
>  		u64 rate;
>  		u16 extra;
>  
> +		plane_id = plane->id;

These s/plane_id/plane/ changes seem a bit pointless. Just churn with
no functional difference AFAICS.

> +		wm_level = skl_plane_wm_level(plane, crtc_state,
> +					      level, false);
> +		wm_uv_level = skl_plane_wm_level(plane, crtc_state,
> +						 level, true);

The introduction of skl_plane_wm_level() could be a separate patch.

> +
>  		if (plane_id == PLANE_CURSOR)
>  			continue;
>  
> @@ -4362,7 +4584,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  		extra = min_t(u16, alloc_size,
>  			      DIV64_U64_ROUND_UP(alloc_size * rate,
>  						 total_data_rate));
> -		total[plane_id] = wm->wm[level].min_ddb_alloc + extra;
> +		total[plane_id] = wm_level->min_ddb_alloc + extra;
>  		alloc_size -= extra;
>  		total_data_rate -= rate;
>  
> @@ -4373,7 +4595,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  		extra = min_t(u16, alloc_size,
>  			      DIV64_U64_ROUND_UP(alloc_size * rate,
>  						 total_data_rate));
> -		uv_total[plane_id] = wm->uv_wm[level].min_ddb_alloc + extra;
> +		uv_total[plane_id] = wm_uv_level->min_ddb_alloc + extra;
>  		alloc_size -= extra;
>  		total_data_rate -= rate;
>  	}
> @@ -4414,9 +4636,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  	 * that aren't actually possible.
>  	 */
>  	for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
> -		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
>  			struct skl_plane_wm *wm =
> -				&crtc_state->wm.skl.optimal.planes[plane_id];
> +				&crtc_state->wm.skl.optimal.planes[plane->id];
> +
> +			wm_level = skl_plane_wm_level(plane, crtc_state,
> +						      level, false);
> +			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
> +							 level, true);
>  
>  			/*
>  			 * We only disable the watermarks for each plane if
> @@ -4430,9 +4657,10 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  			 *  planes must be enabled before the level will be used."
>  			 * So this is actually safe to do.
>  			 */
> -			if (wm->wm[level].min_ddb_alloc > total[plane_id] ||
> -			    wm->uv_wm[level].min_ddb_alloc > uv_total[plane_id])
> -				memset(&wm->wm[level], 0, sizeof(wm->wm[level]));
> +			if (wm_level->min_ddb_alloc > total[plane->id] ||
> +			    wm_uv_level->min_ddb_alloc > uv_total[plane->id])
> +				memset(&wm->wm[level], 0,
> +				       sizeof(struct skl_wm_level));
>  
>  			/*
>  			 * Wa_1408961008:icl, ehl
> @@ -4440,9 +4668,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
>  			 */
>  			if (IS_GEN(dev_priv, 11) &&
>  			    level == 1 && wm->wm[0].plane_en) {
> -				wm->wm[level].plane_res_b = wm->wm[0].plane_res_b;
> -				wm->wm[level].plane_res_l = wm->wm[0].plane_res_l;
> -				wm->wm[level].ignore_lines = wm->wm[0].ignore_lines;
> +				wm_level = skl_plane_wm_level(plane, crtc_state,
> +							      0, false);
> +				wm->wm[level].plane_res_b =
> +					wm_level->plane_res_b;
> +				wm->wm[level].plane_res_l =
> +					wm_level->plane_res_l;
> +				wm->wm[level].ignore_lines =
> +					wm_level->ignore_lines;
>  			}
>  		}
>  	}
> @@ -4671,12 +4904,12 @@ static bool skl_wm_has_lines(struct drm_i915_private *dev_priv, int level)
>  
>  static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
>  				 int level,
> +				 u32 latency,
>  				 const struct skl_wm_params *wp,
>  				 const struct skl_wm_level *result_prev,
>  				 struct skl_wm_level *result /* out */)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
> -	u32 latency = dev_priv->wm.skl_latency[level];
>  	uint_fixed_16_16_t method1, method2;
>  	uint_fixed_16_16_t selected_result;
>  	u32 res_blocks, res_lines, min_ddb_alloc = 0;
> @@ -4797,20 +5030,46 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
>  static void
>  skl_compute_wm_levels(const struct intel_crtc_state *crtc_state,
>  		      const struct skl_wm_params *wm_params,
> -		      struct skl_wm_level *levels)
> +		      struct skl_plane_wm *plane_wm,
> +		      bool yuv)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
>  	int level, max_level = ilk_wm_max_level(dev_priv);
> +	/*
> +	 * Check which kind of plane it is and based on that pick
> +	 * the corresponding WM levels.
> +	 */
> +	struct skl_wm_level *levels = yuv ? plane_wm->uv_wm : plane_wm->wm;
>  	struct skl_wm_level *result_prev = &levels[0];
>  
>  	for (level = 0; level <= max_level; level++) {
>  		struct skl_wm_level *result = &levels[level];
> +		u32 latency = dev_priv->wm.skl_latency[level];
>  
> -		skl_compute_plane_wm(crtc_state, level, wm_params,
> -				     result_prev, result);
> +		skl_compute_plane_wm(crtc_state, level, latency,
> +				     wm_params, result_prev, result);
>  
>  		result_prev = result;
>  	}
> +	/*
> +	 * For Gen12 we also need to consider
> +	 * sagv_block_time when calculating the L0
> +	 * watermark - we will need it when deciding
> +	 * whether to enable SAGV or not.
> +	 * For older gens we agreed to copy the L0
> +	 * value for compatibility.
> +	 */
> +	if ((INTEL_GEN(dev_priv) >= 12)) {
> +		u32 latency = dev_priv->wm.skl_latency[0];
> +
> +		latency += dev_priv->sagv_block_time_us;
> +		skl_compute_plane_wm(crtc_state, 0, latency,
> +				     wm_params, &levels[0],
> +				     &plane_wm->sagv_wm0);
> +	} else {
> +		memcpy(&plane_wm->sagv_wm0, &levels[0],
> +		       sizeof(struct skl_wm_level));

Simple assignments should do.
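
i.e. since skl_wm_level is a plain data struct, plane_wm->sagv_wm0 = levels[0];
copies all the members just like the memcpy() does. Trivial standalone
illustration (stand-in struct fields, not the real ones):

#include <stdio.h>

struct wm_level {
	unsigned int plane_res_b;
	unsigned int plane_res_l;
	int plane_en;
};

int main(void)
{
	struct wm_level l0 = { .plane_res_b = 31, .plane_res_l = 4, .plane_en = 1 };
	struct wm_level sagv_wm0;

	/* equivalent to memcpy(&sagv_wm0, &l0, sizeof(l0)); */
	sagv_wm0 = l0;

	printf("res_b=%u res_l=%u en=%d\n",
	       sagv_wm0.plane_res_b, sagv_wm0.plane_res_l, sagv_wm0.plane_en);

	return 0;
}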

> +	}
>  }
>  
>  static u32
> @@ -4903,7 +5162,7 @@ static int skl_build_plane_wm_single(struct intel_crtc_state *crtc_state,
>  	if (ret)
>  		return ret;
>  
> -	skl_compute_wm_levels(crtc_state, &wm_params, wm->wm);
> +	skl_compute_wm_levels(crtc_state, &wm_params, wm, false);
>  	skl_compute_transition_wm(crtc_state, &wm_params, wm);
>  
>  	return 0;
> @@ -4925,7 +5184,7 @@ static int skl_build_plane_wm_uv(struct intel_crtc_state *crtc_state,
>  	if (ret)
>  		return ret;
>  
> -	skl_compute_wm_levels(crtc_state, &wm_params, wm->uv_wm);
> +	skl_compute_wm_levels(crtc_state, &wm_params, wm, true);
>  
>  	return 0;
>  }
> @@ -5062,10 +5321,13 @@ void skl_write_plane_wm(struct intel_plane *plane,
>  		&crtc_state->wm.skl.plane_ddb_y[plane_id];
>  	const struct skl_ddb_entry *ddb_uv =
>  		&crtc_state->wm.skl.plane_ddb_uv[plane_id];
> +	const struct skl_wm_level *wm_level;
>  
>  	for (level = 0; level <= max_level; level++) {
> +		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
> +
>  		skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
> -				   &wm->wm[level]);
> +				   wm_level);
>  	}
>  	skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
>  			   &wm->trans_wm);
> @@ -5096,10 +5358,13 @@ void skl_write_cursor_wm(struct intel_plane *plane,
>  		&crtc_state->wm.skl.optimal.planes[plane_id];
>  	const struct skl_ddb_entry *ddb =
>  		&crtc_state->wm.skl.plane_ddb_y[plane_id];
> +	const struct skl_wm_level *wm_level;
>  
>  	for (level = 0; level <= max_level; level++) {
> +		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
> +
>  		skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
> -				   &wm->wm[level]);
> +				   wm_level);
>  	}
>  	skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe), &wm->trans_wm);
>  
> @@ -5473,18 +5738,68 @@ static int skl_wm_add_affected_planes(struct intel_atomic_state *state,
>  	return 0;
>  }
>  
> +static void tgl_compute_sagv_mask(struct intel_atomic_state *state)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> +	struct intel_crtc *crtc;
> +	struct intel_crtc_state *new_crtc_state;
> +	struct intel_crtc_state *old_crtc_state;
> +	struct skl_ddb_allocation *ddb = &state->wm_results.ddb;
> +	int ret;
> +	int i;
> +	struct intel_plane *plane;
> +
> +	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
> +					    new_crtc_state, i) {
> +		int pipe_bit = BIT(crtc->pipe);
> +		bool skip = true;
> +
> +		/*
> +		 * If we have already set this mask once for this state,
> +		 * there is no need to waste CPU cycles doing it again.
> +		 */
> +		for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
> +			enum plane_id plane_id = plane->id;
> +
> +			if (!skl_plane_wm_equals(dev_priv,
> +						 &old_crtc_state->wm.skl.optimal.planes[plane_id],
> +						 &new_crtc_state->wm.skl.optimal.planes[plane_id])) {
> +				skip = false;
> +				break;
> +			}
> +		}
> +
> +		/*
> +		 * Check if the wm levels are actually the same as in the
> +		 * previous state, which means we can skip this long check
> +		 * and just copy the corresponding bit from the previous state.
> +		 */
> +		if (skip)
> +			continue;
> +
> +		ret = tgl_check_pipe_fits_sagv_wm(new_crtc_state, ddb);
> +		if (!ret)
> +			state->crtc_sagv_mask |= pipe_bit;
> +		else
> +			state->crtc_sagv_mask &= ~pipe_bit;
> +	}
> +}
> +
>  static int
>  skl_compute_wm(struct intel_atomic_state *state)
>  {
>  	struct intel_crtc *crtc;
>  	struct intel_crtc_state *new_crtc_state;
>  	struct intel_crtc_state *old_crtc_state;
> -	struct skl_ddb_values *results = &state->wm_results;
>  	int ret, i;
> +	struct skl_ddb_values *results = &state->wm_results;
>  
>  	/* Clear all dirty flags */
>  	results->dirty_pipes = 0;
>  
> +	/* No SAGV until we check if it's possible */
> +	state->crtc_sagv_mask = 0;
> +
>  	ret = skl_ddb_add_affected_pipes(state);
>  	if (ret)
>  		return ret;
> @@ -5664,6 +5979,9 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
>  				val = I915_READ(CUR_WM(pipe, level));
>  
>  			skl_wm_level_from_reg_val(val, &wm->wm[level]);
> +			if (level == 0)
> +				memcpy(&wm->sagv_wm0, &wm->wm[level],
> +				       sizeof(struct skl_wm_level));
>  		}
>  
>  		if (plane_id != PLANE_CURSOR)
> diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
> index c06c6a846d9a..4136d4508e63 100644
> --- a/drivers/gpu/drm/i915/intel_pm.h
> +++ b/drivers/gpu/drm/i915/intel_pm.h
> @@ -43,6 +43,7 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
>  void g4x_wm_sanitize(struct drm_i915_private *dev_priv);
>  void vlv_wm_sanitize(struct drm_i915_private *dev_priv);
>  bool intel_can_enable_sagv(struct intel_atomic_state *state);
> +bool intel_has_sagv(struct drm_i915_private *dev_priv);
>  int intel_enable_sagv(struct drm_i915_private *dev_priv);
>  int intel_disable_sagv(struct drm_i915_private *dev_priv);
>  bool skl_wm_level_equals(const struct skl_wm_level *l1,
> -- 
> 2.17.1

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv
  2019-12-13 14:12 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
@ 2019-12-13 14:12 ` Stanislav Lisovskiy
  2019-12-17 17:04   ` Ville Syrjälä
  0 siblings, 1 reply; 10+ messages in thread
From: Stanislav Lisovskiy @ 2019-12-13 14:12 UTC (permalink / raw)
  To: intel-gfx

Currently intel_can_enable_sagv function contains
a mix of workarounds for different platforms
some of them are not valid for gens >= 11 already,
so lets split it into separate functions.

v2:
    - Rework watermark calculation algorithm to
      attempt to calculate Level 0 watermark
      with added sagv block time latency and
      check if it fits in DBuf in order to
      determine if SAGV can be enabled already
      at this stage, just as BSpec 49325 states.
      if that fails rollback to usual Level 0
      latency and disable SAGV.
    - Remove unneeded tabs(James Ausmus)

v3: Rebased the patch

v4: - Added back interlaced check for Gen12 and
      added separate function for TGL SAGV check
      (thanks to James Ausmus for spotting)
    - Removed unneeded gen check
    - Extracted Gen12 SAGV decision making code
      to a separate function from skl_compute_wm

v5: - Added SAGV global state to dev_priv, because
      we need to track all pipes, not only those
      in atomic state. Each pipe has now correspondent
      bit mask reflecting, whether it can tolerate
      SAGV or not(thanks to Ville Syrjala for suggestions).
    - Now using active flag instead of enable in crc
      usage check.

v6: - Fixed rebase conflicts

v7: - kms_cursor_legacy got broken because of the multiple memcpy
      calls made when copying level 0 watermarks for enabled SAGV.
      To fix this, use that field directly without copying; for that
      a new wm_level accessor was introduced which decides which
      wm_level to return based on the SAGV state.

v8: - Protect crtc_sagv_mask the same way as we do for other global
      state changes: i.e. check if changes are needed, then grab all
      crtc locks to serialize the changes (Ville Syrjälä)
    - Add crtc_sagv_mask caching in order to avoid needless recalculations
      (Matthew Roper)
    - Put back the Gen12 SAGV switch in order to get it enabled in a
      separate patch (Matthew Roper)
    - Rename *_set_sagv_mask to *_compute_sagv_mask (Matthew Roper)
    - Check if there are no active pipes in intel_can_enable_sagv
      instead of in platform-specific functions (Matthew Roper), same
      for the intel_has_sagv check.

Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Cc: Ville Syrjälä <ville.syrjala@intel.com>
Cc: James Ausmus <james.ausmus@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  12 +-
 .../drm/i915/display/intel_display_types.h    |   9 +
 drivers/gpu/drm/i915/i915_drv.h               |   6 +
 drivers/gpu/drm/i915/intel_pm.c               | 416 +++++++++++++++---
 drivers/gpu/drm/i915/intel_pm.h               |   1 +
 5 files changed, 393 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 0f37f1d2026d..d58c70fbc08e 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -13379,7 +13379,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
 		/* Watermarks */
 		for (level = 0; level <= max_level; level++) {
 			if (skl_wm_level_equals(&hw_plane_wm->wm[level],
-						&sw_plane_wm->wm[level]))
+						&sw_plane_wm->wm[level]) ||
+			   (skl_wm_level_equals(&hw_plane_wm->wm[level],
+						&sw_plane_wm->sagv_wm0) &&
+			   (level == 0)))
 				continue;
 
 			DRM_ERROR("mismatch in WM pipe %c plane %d level %d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
@@ -13431,7 +13434,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
 		/* Watermarks */
 		for (level = 0; level <= max_level; level++) {
 			if (skl_wm_level_equals(&hw_plane_wm->wm[level],
-						&sw_plane_wm->wm[level]))
+						&sw_plane_wm->wm[level]) ||
+			   (skl_wm_level_equals(&hw_plane_wm->wm[level],
+						&sw_plane_wm->sagv_wm0) &&
+			   (level == 0)))
 				continue;
 
 			DRM_ERROR("mismatch in WM pipe %c cursor level %d (expected e=%d b=%u l=%u, got e=%d b=%u l=%u)\n",
@@ -14808,6 +14814,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 			dev_priv->display.optimize_watermarks(state, crtc);
 	}
 
+	dev_priv->crtc_sagv_mask = state->crtc_sagv_mask;
+
 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
 		intel_post_plane_update(state, crtc);
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 83ea04149b77..5301e1042b40 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -490,6 +490,14 @@ struct intel_atomic_state {
 	 */
 	u8 active_pipe_changes;
 
+	/*
+	 * Contains a mask which reflects whether the corresponding pipe
+	 * can tolerate SAGV or not, so that we can decide at the
+	 * atomic_commit_tail stage whether to enable it or not,
+	 * based on the global state in dev_priv.
+	 */
+	u32 crtc_sagv_mask;
+
 	u8 active_pipes;
 	/* minimum acceptable cdclk for each pipe */
 	int min_cdclk[I915_MAX_PIPES];
@@ -670,6 +678,7 @@ struct skl_plane_wm {
 	struct skl_wm_level wm[8];
 	struct skl_wm_level uv_wm[8];
 	struct skl_wm_level trans_wm;
+	struct skl_wm_level sagv_wm0;
 	bool is_planar;
 };
 
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0781b6326b8c..b877c42213c4 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1171,6 +1171,12 @@ struct drm_i915_private {
 
 	u32 sagv_block_time_us;
 
+	/*
+	 * Contains a bit mask indicating whether the
+	 * corresponding pipe allows SAGV or not.
+	 */
+	u32 crtc_sagv_mask;
+
 	struct {
 		/*
 		 * Raw watermark latency values:
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index ccbbdf4a6aab..d70c33df0bbf 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3647,7 +3647,7 @@ static bool skl_needs_memory_bw_wa(struct drm_i915_private *dev_priv)
 	return IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv);
 }
 
-static bool
+bool
 intel_has_sagv(struct drm_i915_private *dev_priv)
 {
 	/* HACK! */
@@ -3770,7 +3770,7 @@ intel_disable_sagv(struct drm_i915_private *dev_priv)
 	return 0;
 }
 
-bool intel_can_enable_sagv(struct intel_atomic_state *state)
+static void skl_compute_sagv_mask(struct intel_atomic_state *state)
 {
 	struct drm_device *dev = state->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
@@ -3780,29 +3780,15 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
 	enum pipe pipe;
 	int level, latency;
 
-	if (!intel_has_sagv(dev_priv))
-		return false;
-
-	/*
-	 * If there are no active CRTCs, no additional checks need be performed
-	 */
-	if (hweight8(state->active_pipes) == 0)
-		return true;
-
-	/*
-	 * SKL+ workaround: bspec recommends we disable SAGV when we have
-	 * more then one pipe enabled
-	 */
-	if (hweight8(state->active_pipes) > 1)
-		return false;
-
 	/* Since we're now guaranteed to only have one active CRTC... */
 	pipe = ffs(state->active_pipes) - 1;
 	crtc = intel_get_crtc_for_pipe(dev_priv, pipe);
 	crtc_state = to_intel_crtc_state(crtc->base.state);
+	state->crtc_sagv_mask &= ~BIT(crtc->pipe);
 
-	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE)
-		return false;
+	if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE) {
+		return;
+	}
 
 	for_each_intel_plane_on_crtc(dev, crtc, plane) {
 		struct skl_plane_wm *wm =
@@ -3830,6 +3816,136 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
 		 * can't enable SAGV.
 		 */
 		if (latency < dev_priv->sagv_block_time_us)
+			return;
+	}
+
+	state->crtc_sagv_mask |= BIT(crtc->pipe);
+}
+
+static void tgl_compute_sagv_mask(struct intel_atomic_state *state);
+
+static void icl_compute_sagv_mask(struct intel_atomic_state *state)
+{
+	struct drm_device *dev = state->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *new_crtc_state;
+	int level, latency;
+	int i;
+	int plane_id;
+
+	for_each_new_intel_crtc_in_state(state, crtc,
+					 new_crtc_state, i) {
+		unsigned int flags = crtc->base.state->adjusted_mode.flags;
+		bool can_sagv;
+
+		if (flags & DRM_MODE_FLAG_INTERLACE)
+			continue;
+
+		if (!new_crtc_state->hw.active)
+			continue;
+
+		can_sagv = true;
+		for_each_plane_id_on_crtc(crtc, plane_id) {
+			struct skl_plane_wm *wm =
+				&new_crtc_state->wm.skl.optimal.planes[plane_id];
+
+			/* Skip this plane if it's not enabled */
+			if (!wm->wm[0].plane_en)
+				continue;
+
+			/* Find the highest enabled wm level for this plane */
+			for (level = ilk_wm_max_level(dev_priv);
+			     !wm->wm[level].plane_en; --level) {
+			}
+
+			latency = dev_priv->wm.skl_latency[level];
+
+			/*
+			 * If any of the planes on this pipe don't enable
+			 * wm levels that incur memory latencies higher than
+			 * sagv_block_time_us we can't enable SAGV.
+			 */
+			if (latency < dev_priv->sagv_block_time_us) {
+				can_sagv = false;
+				break;
+			}
+		}
+		if (can_sagv)
+			state->crtc_sagv_mask |= BIT(crtc->pipe);
+		else
+			state->crtc_sagv_mask &= ~BIT(crtc->pipe);
+	}
+}
+
+bool intel_can_enable_sagv(struct intel_atomic_state *state)
+{
+	struct drm_device *dev = state->base.dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	int ret, i;
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *new_crtc_state;
+
+	if (!intel_has_sagv(dev_priv))
+		return false;
+
+	/*
+	 * Check if we have already calculated the mask.
+	 * If we have, the global state is already serialized
+	 * and thus protected from changes from other commits,
+	 * so we can use the cached version here.
+	 */
+	if (!state->crtc_sagv_mask) {
+		/*
+		 * If there are no active CRTCs, no additional
+		 * checks need be performed
+		 */
+		if (hweight8(state->active_pipes) == 0)
+			return false;
+
+		/*
+		 * Make sure we always pick up the global state first.
+		 * There shouldn't be any issue, as we hold only the
+		 * locks of the crtcs in this state; however, once we
+		 * detect that we need to change the SAGV mask in the
+		 * global state, we grab all the crtc locks in order
+		 * to get this serialized. Other racing commits that
+		 * hold other crtc locks will then have to start over
+		 * again, as dictated by the Wound-Wait
+		 * algorithm.
+		 */
+		state->crtc_sagv_mask = dev_priv->crtc_sagv_mask;
+
+		if (INTEL_GEN(dev_priv) >= 12)
+			tgl_compute_sagv_mask(state);
+		else if (INTEL_GEN(dev_priv) == 11)
+			icl_compute_sagv_mask(state);
+		else
+			skl_compute_sagv_mask(state);
+
+		/*
+		 * For SAGV we need to account for all the pipes,
+		 * not only the ones currently in the state.
+		 * Grab all locks if we detect that we are actually
+		 * going to do something.
+		 */
+		if (state->crtc_sagv_mask != dev_priv->crtc_sagv_mask) {
+			ret = intel_atomic_serialize_global_state(state);
+			if (ret) {
+				DRM_DEBUG_KMS("Could not serialize global state\n");
+				return false;
+			}
+		}
+	}
+
+	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
+		u32 mask = BIT(crtc->pipe);
+		bool state_sagv_masked = (mask & state->crtc_sagv_mask) == 0;
+
+		if (!new_crtc_state->hw.active)
+			continue;
+
+		if (state_sagv_masked)
 			return false;
 	}
 
@@ -3955,6 +4071,7 @@ static int skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
 				 int color_plane);
 static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 				 int level,
+				 u32 latency,
 				 const struct skl_wm_params *wp,
 				 const struct skl_wm_level *result_prev,
 				 struct skl_wm_level *result /* out */);
@@ -3977,7 +4094,10 @@ skl_cursor_allocation(const struct intel_crtc_state *crtc_state,
 	WARN_ON(ret);
 
 	for (level = 0; level <= max_level; level++) {
-		skl_compute_plane_wm(crtc_state, level, &wp, &wm, &wm);
+		u32 latency = dev_priv->wm.skl_latency[level];
+
+		skl_compute_plane_wm(crtc_state, level, latency, &wp, &wm, &wm);
+
 		if (wm.min_ddb_alloc == U16_MAX)
 			break;
 
@@ -4242,6 +4362,98 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 	return total_data_rate;
 }
 
+static int
+tgl_check_pipe_fits_sagv_wm(struct intel_crtc_state *crtc_state,
+			    struct skl_ddb_allocation *ddb /* out */)
+{
+	struct drm_crtc *crtc = crtc_state->uapi.crtc;
+	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
+	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
+	u16 alloc_size;
+	u16 total[I915_MAX_PLANES] = {};
+	u64 total_data_rate;
+	enum plane_id plane_id;
+	int num_active;
+	u64 plane_data_rate[I915_MAX_PLANES] = {};
+	u32 blocks;
+
+	/*
+	 * No need to check gen here, we call this only for gen12
+	 */
+	total_data_rate =
+		icl_get_total_relative_data_rate(crtc_state,
+						 plane_data_rate);
+
+	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
+					   total_data_rate,
+					   ddb, alloc, &num_active);
+	alloc_size = skl_ddb_entry_size(alloc);
+	if (alloc_size == 0)
+		return -ENOSPC;
+
+	/* Allocate fixed number of blocks for cursor. */
+	total[PLANE_CURSOR] = skl_cursor_allocation(crtc_state, num_active);
+	alloc_size -= total[PLANE_CURSOR];
+	crtc_state->wm.skl.plane_ddb_y[PLANE_CURSOR].start =
+		alloc->end - total[PLANE_CURSOR];
+	crtc_state->wm.skl.plane_ddb_y[PLANE_CURSOR].end = alloc->end;
+
+	/*
+	 * Check whether L0 + sagv_block_time fits and
+	 * disable SAGV if it doesn't.
+	 */
+	blocks = 0;
+	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		const struct skl_plane_wm *wm =
+			&crtc_state->wm.skl.optimal.planes[plane_id];
+
+		if (plane_id == PLANE_CURSOR) {
+			if (WARN_ON(wm->sagv_wm0.min_ddb_alloc >
+				    total[PLANE_CURSOR])) {
+				blocks = U32_MAX;
+				break;
+			}
+			continue;
+		}
+
+		blocks += wm->sagv_wm0.min_ddb_alloc;
+		if (blocks > alloc_size)
+			return -ENOSPC;
+	}
+	return 0;
+}
+
+static const struct skl_wm_level *
+skl_plane_wm_level(struct intel_plane *plane,
+		   const struct intel_crtc_state *crtc_state,
+		   int level,
+		   bool yuv)
+{
+	struct drm_atomic_state *state = crtc_state->uapi.state;
+	enum plane_id plane_id = plane->id;
+	const struct skl_plane_wm *wm =
+		&crtc_state->wm.skl.optimal.planes[plane_id];
+
+	/*
+	 * Looks ridiculous, but we need to check whether state is
+	 * NULL here, as some cursor plane manipulations seem to
+	 * happen when no atomic state is actually present, even
+	 * though crtc_state is allocated. Removing the state check
+	 * from here results in a kernel panic on boot.
+	 * However, we do need to check here whether the SAGV
+	 * wm levels should be used.
+	 */
+	if (state) {
+		struct intel_atomic_state *intel_state =
+			to_intel_atomic_state(state);
+		if (intel_can_enable_sagv(intel_state) && !level)
+			return &wm->sagv_wm0;
+	}
+
+	return yuv ? &wm->uv_wm[level] : &wm->wm[level];
+}
+
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 		      struct skl_ddb_allocation *ddb /* out */)
@@ -4256,6 +4468,9 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	u16 uv_total[I915_MAX_PLANES] = {};
 	u64 total_data_rate;
 	enum plane_id plane_id;
+	struct intel_plane *plane;
+	const struct skl_wm_level *wm_level;
+	const struct skl_wm_level *wm_uv_level;
 	int num_active;
 	u64 plane_data_rate[I915_MAX_PLANES] = {};
 	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
@@ -4307,12 +4522,15 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	 */
 	for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
 		blocks = 0;
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-			const struct skl_plane_wm *wm =
-				&crtc_state->wm.skl.optimal.planes[plane_id];
+		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
+			plane_id = plane->id;
+			wm_level = skl_plane_wm_level(plane, crtc_state,
+						      level, false);
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+							 level, true);
 
 			if (plane_id == PLANE_CURSOR) {
-				if (WARN_ON(wm->wm[level].min_ddb_alloc >
+				if (WARN_ON(wm_level->min_ddb_alloc >
 					    total[PLANE_CURSOR])) {
 					blocks = U32_MAX;
 					break;
@@ -4320,8 +4538,8 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 				continue;
 			}
 
-			blocks += wm->wm[level].min_ddb_alloc;
-			blocks += wm->uv_wm[level].min_ddb_alloc;
+			blocks += wm_level->min_ddb_alloc;
+			blocks += wm_uv_level->min_ddb_alloc;
 		}
 
 		if (blocks <= alloc_size) {
@@ -4342,12 +4560,16 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	 * watermark level, plus an extra share of the leftover blocks
 	 * proportional to its relative data rate.
 	 */
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
-		const struct skl_plane_wm *wm =
-			&crtc_state->wm.skl.optimal.planes[plane_id];
+	for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
 		u64 rate;
 		u16 extra;
 
+		plane_id = plane->id;
+		wm_level = skl_plane_wm_level(plane, crtc_state,
+					      level, false);
+		wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+						 level, true);
+
 		if (plane_id == PLANE_CURSOR)
 			continue;
 
@@ -4362,7 +4584,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 		extra = min_t(u16, alloc_size,
 			      DIV64_U64_ROUND_UP(alloc_size * rate,
 						 total_data_rate));
-		total[plane_id] = wm->wm[level].min_ddb_alloc + extra;
+		total[plane_id] = wm_level->min_ddb_alloc + extra;
 		alloc_size -= extra;
 		total_data_rate -= rate;
 
@@ -4373,7 +4595,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 		extra = min_t(u16, alloc_size,
 			      DIV64_U64_ROUND_UP(alloc_size * rate,
 						 total_data_rate));
-		uv_total[plane_id] = wm->uv_wm[level].min_ddb_alloc + extra;
+		uv_total[plane_id] = wm_uv_level->min_ddb_alloc + extra;
 		alloc_size -= extra;
 		total_data_rate -= rate;
 	}
@@ -4414,9 +4636,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 	 * that aren't actually possible.
 	 */
 	for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		for_each_intel_plane_on_crtc(&dev_priv->drm, intel_crtc, plane) {
 			struct skl_plane_wm *wm =
-				&crtc_state->wm.skl.optimal.planes[plane_id];
+				&crtc_state->wm.skl.optimal.planes[plane->id];
+
+			wm_level = skl_plane_wm_level(plane, crtc_state,
+						      level, false);
+			wm_uv_level = skl_plane_wm_level(plane, crtc_state,
+							 level, true);
 
 			/*
 			 * We only disable the watermarks for each plane if
@@ -4430,9 +4657,10 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 			 *  planes must be enabled before the level will be used."
 			 * So this is actually safe to do.
 			 */
-			if (wm->wm[level].min_ddb_alloc > total[plane_id] ||
-			    wm->uv_wm[level].min_ddb_alloc > uv_total[plane_id])
-				memset(&wm->wm[level], 0, sizeof(wm->wm[level]));
+			if (wm_level->min_ddb_alloc > total[plane->id] ||
+			    wm_uv_level->min_ddb_alloc > uv_total[plane->id])
+				memset(&wm->wm[level], 0,
+				       sizeof(struct skl_wm_level));
 
 			/*
 			 * Wa_1408961008:icl, ehl
@@ -4440,9 +4668,14 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state,
 			 */
 			if (IS_GEN(dev_priv, 11) &&
 			    level == 1 && wm->wm[0].plane_en) {
-				wm->wm[level].plane_res_b = wm->wm[0].plane_res_b;
-				wm->wm[level].plane_res_l = wm->wm[0].plane_res_l;
-				wm->wm[level].ignore_lines = wm->wm[0].ignore_lines;
+				wm_level = skl_plane_wm_level(plane, crtc_state,
+							      0, false);
+				wm->wm[level].plane_res_b =
+					wm_level->plane_res_b;
+				wm->wm[level].plane_res_l =
+					wm_level->plane_res_l;
+				wm->wm[level].ignore_lines =
+					wm_level->ignore_lines;
 			}
 		}
 	}
@@ -4671,12 +4904,12 @@ static bool skl_wm_has_lines(struct drm_i915_private *dev_priv, int level)
 
 static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 				 int level,
+				 u32 latency,
 				 const struct skl_wm_params *wp,
 				 const struct skl_wm_level *result_prev,
 				 struct skl_wm_level *result /* out */)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
-	u32 latency = dev_priv->wm.skl_latency[level];
 	uint_fixed_16_16_t method1, method2;
 	uint_fixed_16_16_t selected_result;
 	u32 res_blocks, res_lines, min_ddb_alloc = 0;
@@ -4797,20 +5030,46 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
 static void
 skl_compute_wm_levels(const struct intel_crtc_state *crtc_state,
 		      const struct skl_wm_params *wm_params,
-		      struct skl_wm_level *levels)
+		      struct skl_plane_wm *plane_wm,
+		      bool yuv)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
 	int level, max_level = ilk_wm_max_level(dev_priv);
+	/*
+	 * Check which kind of plane it is and based on that pick
+	 * the corresponding WM levels.
+	 */
+	struct skl_wm_level *levels = yuv ? plane_wm->uv_wm : plane_wm->wm;
 	struct skl_wm_level *result_prev = &levels[0];
 
 	for (level = 0; level <= max_level; level++) {
 		struct skl_wm_level *result = &levels[level];
+		u32 latency = dev_priv->wm.skl_latency[level];
 
-		skl_compute_plane_wm(crtc_state, level, wm_params,
-				     result_prev, result);
+		skl_compute_plane_wm(crtc_state, level, latency,
+				     wm_params, result_prev, result);
 
 		result_prev = result;
 	}
+	/*
+	 * For Gen12 we also need to consider
+	 * sagv_block_time when calculating the L0
+	 * watermark - we will need it when deciding
+	 * whether to enable SAGV or not.
+	 * For older gens we agreed to copy the L0
+	 * value for compatibility.
+	 */
+	if ((INTEL_GEN(dev_priv) >= 12)) {
+		u32 latency = dev_priv->wm.skl_latency[0];
+
+		latency += dev_priv->sagv_block_time_us;
+		skl_compute_plane_wm(crtc_state, 0, latency,
+				     wm_params, &levels[0],
+				     &plane_wm->sagv_wm0);
+	} else {
+		memcpy(&plane_wm->sagv_wm0, &levels[0],
+		       sizeof(struct skl_wm_level));
+	}
 }
 
 static u32
@@ -4903,7 +5162,7 @@ static int skl_build_plane_wm_single(struct intel_crtc_state *crtc_state,
 	if (ret)
 		return ret;
 
-	skl_compute_wm_levels(crtc_state, &wm_params, wm->wm);
+	skl_compute_wm_levels(crtc_state, &wm_params, wm, false);
 	skl_compute_transition_wm(crtc_state, &wm_params, wm);
 
 	return 0;
@@ -4925,7 +5184,7 @@ static int skl_build_plane_wm_uv(struct intel_crtc_state *crtc_state,
 	if (ret)
 		return ret;
 
-	skl_compute_wm_levels(crtc_state, &wm_params, wm->uv_wm);
+	skl_compute_wm_levels(crtc_state, &wm_params, wm, true);
 
 	return 0;
 }
@@ -5062,10 +5321,13 @@ void skl_write_plane_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
 	const struct skl_ddb_entry *ddb_uv =
 		&crtc_state->wm.skl.plane_ddb_uv[plane_id];
+	const struct skl_wm_level *wm_level;
 
 	for (level = 0; level <= max_level; level++) {
+		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
 		skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
 			   &wm->trans_wm);
@@ -5096,10 +5358,13 @@ void skl_write_cursor_wm(struct intel_plane *plane,
 		&crtc_state->wm.skl.optimal.planes[plane_id];
 	const struct skl_ddb_entry *ddb =
 		&crtc_state->wm.skl.plane_ddb_y[plane_id];
+	const struct skl_wm_level *wm_level;
 
 	for (level = 0; level <= max_level; level++) {
+		wm_level = skl_plane_wm_level(plane, crtc_state, level, false);
+
 		skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
-				   &wm->wm[level]);
+				   wm_level);
 	}
 	skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe), &wm->trans_wm);
 
@@ -5473,18 +5738,68 @@ static int skl_wm_add_affected_planes(struct intel_atomic_state *state,
 	return 0;
 }
 
+static void tgl_compute_sagv_mask(struct intel_atomic_state *state)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	struct intel_crtc *crtc;
+	struct intel_crtc_state *new_crtc_state;
+	struct intel_crtc_state *old_crtc_state;
+	struct skl_ddb_allocation *ddb = &state->wm_results.ddb;
+	int ret;
+	int i;
+	struct intel_plane *plane;
+
+	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
+					    new_crtc_state, i) {
+		int pipe_bit = BIT(crtc->pipe);
+		bool skip = true;
+
+		/*
+		 * If we have already set this mask once for this state,
+		 * there is no need to waste CPU cycles doing it again.
+		 */
+		for_each_intel_plane_on_crtc(&dev_priv->drm, crtc, plane) {
+			enum plane_id plane_id = plane->id;
+
+			if (!skl_plane_wm_equals(dev_priv,
+						 &old_crtc_state->wm.skl.optimal.planes[plane_id],
+						 &new_crtc_state->wm.skl.optimal.planes[plane_id])) {
+				skip = false;
+				break;
+			}
+		}
+
+		/*
+		 * Check if the wm levels are actually the same as in the
+		 * previous state, which means we can skip this long check
+		 * and just copy the corresponding bit from the previous state.
+		 */
+		if (skip)
+			continue;
+
+		ret = tgl_check_pipe_fits_sagv_wm(new_crtc_state, ddb);
+		if (!ret)
+			state->crtc_sagv_mask |= pipe_bit;
+		else
+			state->crtc_sagv_mask &= ~pipe_bit;
+	}
+}
+
 static int
 skl_compute_wm(struct intel_atomic_state *state)
 {
 	struct intel_crtc *crtc;
 	struct intel_crtc_state *new_crtc_state;
 	struct intel_crtc_state *old_crtc_state;
-	struct skl_ddb_values *results = &state->wm_results;
 	int ret, i;
+	struct skl_ddb_values *results = &state->wm_results;
 
 	/* Clear all dirty flags */
 	results->dirty_pipes = 0;
 
+	/* No SAGV until we check if it's possible */
+	state->crtc_sagv_mask = 0;
+
 	ret = skl_ddb_add_affected_pipes(state);
 	if (ret)
 		return ret;
@@ -5664,6 +5979,9 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 				val = I915_READ(CUR_WM(pipe, level));
 
 			skl_wm_level_from_reg_val(val, &wm->wm[level]);
+			if (level == 0)
+				memcpy(&wm->sagv_wm0, &wm->wm[level],
+				       sizeof(struct skl_wm_level));
 		}
 
 		if (plane_id != PLANE_CURSOR)
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index c06c6a846d9a..4136d4508e63 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -43,6 +43,7 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 void g4x_wm_sanitize(struct drm_i915_private *dev_priv);
 void vlv_wm_sanitize(struct drm_i915_private *dev_priv);
 bool intel_can_enable_sagv(struct intel_atomic_state *state);
+bool intel_has_sagv(struct drm_i915_private *dev_priv);
 int intel_enable_sagv(struct drm_i915_private *dev_priv);
 int intel_disable_sagv(struct drm_i915_private *dev_priv);
 bool skl_wm_level_equals(const struct skl_wm_level *l1,
-- 
2.17.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2019-12-17 17:04 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-12 12:40 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv Stanislav Lisovskiy
2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 2/3] drm/i915: Restrict qgv points which don't have enough bandwidth Stanislav Lisovskiy
2019-12-12 12:40 ` [Intel-gfx] [PATCH v12 3/3] drm/i915: Enable SAGV support for Gen12 Stanislav Lisovskiy
2019-12-12 17:20 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Refactor Gen11+ SAGV support (rev13) Patchwork
2019-12-12 17:23 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2019-12-12 17:43 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2019-12-13 10:32 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2019-12-13 14:12 [Intel-gfx] [PATCH v12 0/3] Refactor Gen11+ SAGV support Stanislav Lisovskiy
2019-12-13 14:12 ` [Intel-gfx] [PATCH v12 1/3] drm/i915: Refactor intel_can_enable_sagv Stanislav Lisovskiy
2019-12-17 17:04   ` Ville Syrjälä
