intel-gfx.lists.freedesktop.org archive mirror
* [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state
@ 2020-02-25 17:11 Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/ Ville Syrjala
                   ` (23 more replies)
  0 siblings, 24 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

More complete version of intel_dbuf_state. We finally get rid of
distrust_bios_wm and all the ugliness surrounding it. And we no
longer have to know ahead of time whether the dbuf allocation
might change or not, and thus don't need to pull all the crtcs
into the state up front. Now we just compute the new dbuf
state, and if it changes the affected crtcs get added to the
state naturally.
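
Roughly, the new check flow boils down to something like this
(simplified sketch using the helpers added later in the series,
not the literal code):

	struct intel_dbuf_state *new_dbuf_state;
	const struct intel_dbuf_state *old_dbuf_state;
	int ret;

	new_dbuf_state = intel_atomic_get_dbuf_state(state);
	if (IS_ERR(new_dbuf_state))
		return PTR_ERR(new_dbuf_state);

	old_dbuf_state = intel_atomic_get_old_dbuf_state(state);

	/* only an actual change serializes the commit */
	if (new_dbuf_state->enabled_slices != old_dbuf_state->enabled_slices) {
		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
		if (ret)
			return ret;
	}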

+ a bunch of cleanups.

Entire series available here:
git://github.com/vsyrjala/linux.git dbuf_state_2

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

Ville Syrjälä (20):
  drm/i915: Handle some leftover s/intel_crtc/crtc/
  drm/i915: Remove garbage WARNs
  drm/i915: Add missing commas to dbuf tables
  drm/i915: Use a sentinel to terminate the dbuf slice arrays
  drm/i915: Make skl_compute_dbuf_slices() behave consistently for all
    platforms
  drm/i915: Polish some dbuf debugs
  drm/i915: Unify the low level dbuf code
  drm/i915: Introduce proper dbuf state
  drm/i915: Nuke skl_ddb_get_hw_state()
  drm/i915: Move the dbuf pre/post plane update
  drm/i915: Clean up dbuf debugs during .atomic_check()
  drm/i915: Extract intel_crtc_ddb_weight()
  drm/i915: Pass the crtc to skl_compute_dbuf_slices()
  drm/i915: Introduce intel_dbuf_slice_size()
  drm/i915: Introduce skl_ddb_entry_for_slices()
  drm/i915: Move pipe ddb entries into the dbuf state
  drm/i915: Extract intel_crtc_dbuf_weights()
  drm/i915: Encapsulate dbuf state handling harder
  drm/i915: Do a bit more initial readout for dbuf
  drm/i915: Check slice mask for holes

 drivers/gpu/drm/i915/display/intel_display.c  |  95 +--
 .../drm/i915/display/intel_display_debugfs.c  |   1 -
 .../drm/i915/display/intel_display_power.c    |  80 +-
 .../drm/i915/display/intel_display_power.h    |   6 +-
 .../drm/i915/display/intel_display_types.h    |  14 -
 drivers/gpu/drm/i915/i915_drv.h               |  16 +-
 drivers/gpu/drm/i915/intel_pm.c               | 741 ++++++++++--------
 drivers/gpu/drm/i915/intel_pm.h               |  31 +-
 8 files changed, 521 insertions(+), 463 deletions(-)

-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26  9:29   ` Jani Nikula
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 02/20] drm/i915: Remove garbage WARNs Ville Syrjala
                   ` (22 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Switch to the preferred 'crtc' name for our crtc variables.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 22aa205793e5..543634d3e10c 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -2776,7 +2776,7 @@ static bool ilk_validate_wm_level(int level,
 }
 
 static void ilk_compute_wm_level(const struct drm_i915_private *dev_priv,
-				 const struct intel_crtc *intel_crtc,
+				 const struct intel_crtc *crtc,
 				 int level,
 				 struct intel_crtc_state *crtc_state,
 				 const struct intel_plane_state *pristate,
@@ -3107,7 +3107,7 @@ static bool ilk_validate_pipe_wm(const struct drm_i915_private *dev_priv,
 static int ilk_compute_pipe_wm(struct intel_crtc_state *crtc_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct intel_pipe_wm *pipe_wm;
 	struct intel_plane *plane;
 	const struct intel_plane_state *plane_state;
@@ -3147,7 +3147,7 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *crtc_state)
 		usable_level = 0;
 
 	memset(&pipe_wm->wm, 0, sizeof(pipe_wm->wm));
-	ilk_compute_wm_level(dev_priv, intel_crtc, 0, crtc_state,
+	ilk_compute_wm_level(dev_priv, crtc, 0, crtc_state,
 			     pristate, sprstate, curstate, &pipe_wm->wm[0]);
 
 	if (!ilk_validate_pipe_wm(dev_priv, pipe_wm))
@@ -3158,7 +3158,7 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *crtc_state)
 	for (level = 1; level <= usable_level; level++) {
 		struct intel_wm_level *wm = &pipe_wm->wm[level];
 
-		ilk_compute_wm_level(dev_priv, intel_crtc, level, crtc_state,
+		ilk_compute_wm_level(dev_priv, crtc, level, crtc_state,
 				     pristate, sprstate, curstate, wm);
 
 		/*
@@ -4549,9 +4549,8 @@ static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
 	struct drm_atomic_state *state = crtc_state->uapi.state;
-	struct drm_crtc *crtc = crtc_state->uapi.crtc;
-	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
 	u16 alloc_size, start = 0;
 	u16 total[I915_MAX_PLANES] = {};
@@ -4609,7 +4608,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 */
 	for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
 		blocks = 0;
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		for_each_plane_id_on_crtc(crtc, plane_id) {
 			const struct skl_plane_wm *wm =
 				&crtc_state->wm.skl.optimal.planes[plane_id];
 
@@ -4646,7 +4645,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 * watermark level, plus an extra share of the leftover blocks
 	 * proportional to its relative data rate.
 	 */
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+	for_each_plane_id_on_crtc(crtc, plane_id) {
 		const struct skl_plane_wm *wm =
 			&crtc_state->wm.skl.optimal.planes[plane_id];
 		u64 rate;
@@ -4685,7 +4684,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 
 	/* Set the actual DDB start/end points for each plane */
 	start = alloc->start;
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+	for_each_plane_id_on_crtc(crtc, plane_id) {
 		struct skl_ddb_entry *plane_alloc =
 			&crtc_state->wm.skl.plane_ddb_y[plane_id];
 		struct skl_ddb_entry *uv_plane_alloc =
@@ -4719,7 +4718,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 * that aren't actually possible.
 	 */
 	for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
-		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+		for_each_plane_id_on_crtc(crtc, plane_id) {
 			struct skl_plane_wm *wm =
 				&crtc_state->wm.skl.optimal.planes[plane_id];
 
@@ -4756,7 +4755,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	 * Go back and disable the transition watermark if it turns out we
 	 * don't have enough DDB blocks for it.
 	 */
-	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
+	for_each_plane_id_on_crtc(crtc, plane_id) {
 		struct skl_plane_wm *wm =
 			&crtc_state->wm.skl.optimal.planes[plane_id];
 
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 02/20] drm/i915: Remove garbage WARNs
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/ Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26  9:30   ` Jani Nikula
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 03/20] drm/i915: Add missing commas to dbuf tables Ville Syrjala
                   ` (21 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

These things can never happen, and probably we'd have oopsed long ago
if they did. Just get rid of this pointless noise in the code.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 543634d3e10c..59fc461bc454 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4470,14 +4470,10 @@ skl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 				 u64 *plane_data_rate,
 				 u64 *uv_plane_data_rate)
 {
-	struct drm_atomic_state *state = crtc_state->uapi.state;
 	struct intel_plane *plane;
 	const struct intel_plane_state *plane_state;
 	u64 total_data_rate = 0;
 
-	if (WARN_ON(!state))
-		return 0;
-
 	/* Calculate and cache data rate for each plane */
 	intel_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) {
 		enum plane_id plane_id = plane->id;
@@ -4505,9 +4501,6 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 	const struct intel_plane_state *plane_state;
 	u64 total_data_rate = 0;
 
-	if (WARN_ON(!crtc_state->uapi.state))
-		return 0;
-
 	/* Calculate and cache data rate for each plane */
 	intel_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) {
 		enum plane_id plane_id = plane->id;
@@ -4548,7 +4541,6 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 static int
 skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
-	struct drm_atomic_state *state = crtc_state->uapi.state;
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
@@ -4567,9 +4559,6 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y));
 	memset(crtc_state->wm.skl.plane_ddb_uv, 0, sizeof(crtc_state->wm.skl.plane_ddb_uv));
 
-	if (drm_WARN_ON(&dev_priv->drm, !state))
-		return 0;
-
 	if (!crtc_state->hw.active) {
 		alloc->start = alloc->end = 0;
 		return 0;
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 03/20] drm/i915: Add missing commas to dbuf tables
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/ Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 02/20] drm/i915: Remove garbage WARNs Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26  9:30   ` Jani Nikula
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 04/20] drm/i915: Use a sentinel to terminate the dbuf slice arrays Ville Syrjala
                   ` (20 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

The preferred style is to put a comma after every element/member
in array and structure initializers, whether or not it happens to
be the last one (the only exception being sentinel entries, which
never have anything after them). This leads to much prettier diffs
if/when new elements/members get added to the end of the
initializer. We're not bound by some ancient silly mandate to omit
the final comma.
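
To illustrate with a made up example (not actual driver code):

	struct foo_entry {
		int bar;
		int baz;
	};

	static const struct foo_entry foo_table[] = {
		{
			.bar = 1,
			.baz = 2,	/* comma even after the last member */
		},			/* and after the last element ... */
		{}			/* ... except for a sentinel */
	};

Appending a new member/element then only adds lines to the diff
instead of also having to touch the previously last line.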

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 88 ++++++++++++++++-----------------
 1 file changed, 44 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 59fc461bc454..abeb4b19071f 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4184,49 +4184,49 @@ static const struct dbuf_slice_conf_entry icl_allowed_dbufs[] =
 	{
 		.active_pipes = BIT(PIPE_A),
 		.dbuf_mask = {
-			[PIPE_A] = BIT(DBUF_S1)
-		}
+			[PIPE_A] = BIT(DBUF_S1),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_B),
 		.dbuf_mask = {
-			[PIPE_B] = BIT(DBUF_S1)
-		}
+			[PIPE_B] = BIT(DBUF_S1),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
-			[PIPE_B] = BIT(DBUF_S2)
-		}
+			[PIPE_B] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_C),
 		.dbuf_mask = {
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C),
 		.dbuf_mask = {
 			[PIPE_B] = BIT(DBUF_S1),
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
 			[PIPE_B] = BIT(DBUF_S1),
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 };
 
@@ -4246,100 +4246,100 @@ static const struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
 	{
 		.active_pipes = BIT(PIPE_A),
 		.dbuf_mask = {
-			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2)
-		}
+			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_B),
 		.dbuf_mask = {
-			[PIPE_B] = BIT(DBUF_S1) | BIT(DBUF_S2)
-		}
+			[PIPE_B] = BIT(DBUF_S1) | BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S2),
-			[PIPE_B] = BIT(DBUF_S1)
-		}
+			[PIPE_B] = BIT(DBUF_S1),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_C),
 		.dbuf_mask = {
-			[PIPE_C] = BIT(DBUF_S2) | BIT(DBUF_S1)
-		}
+			[PIPE_C] = BIT(DBUF_S2) | BIT(DBUF_S1),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C),
 		.dbuf_mask = {
 			[PIPE_B] = BIT(DBUF_S1),
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
 			[PIPE_B] = BIT(DBUF_S1),
-			[PIPE_C] = BIT(DBUF_S2)
-		}
+			[PIPE_C] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_D),
 		.dbuf_mask = {
-			[PIPE_D] = BIT(DBUF_S2) | BIT(DBUF_S1)
-		}
+			[PIPE_D] = BIT(DBUF_S2) | BIT(DBUF_S1),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_D),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_B) | BIT(PIPE_D),
 		.dbuf_mask = {
 			[PIPE_B] = BIT(DBUF_S1),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_D),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
 			[PIPE_B] = BIT(DBUF_S1),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_C) | BIT(PIPE_D),
 		.dbuf_mask = {
 			[PIPE_C] = BIT(DBUF_S1),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C) | BIT(PIPE_D),
 		.dbuf_mask = {
 			[PIPE_A] = BIT(DBUF_S1),
 			[PIPE_C] = BIT(DBUF_S2),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
 		.dbuf_mask = {
 			[PIPE_B] = BIT(DBUF_S1),
 			[PIPE_C] = BIT(DBUF_S2),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 	{
 		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
@@ -4347,8 +4347,8 @@ static const struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
 			[PIPE_A] = BIT(DBUF_S1),
 			[PIPE_B] = BIT(DBUF_S1),
 			[PIPE_C] = BIT(DBUF_S2),
-			[PIPE_D] = BIT(DBUF_S2)
-		}
+			[PIPE_D] = BIT(DBUF_S2),
+		},
 	},
 };
 
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 04/20] drm/i915: Use a sentinel to terminate the dbuf slice arrays
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (2 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 03/20] drm/i915: Add missing commas to dbuf tables Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26  9:32   ` Jani Nikula
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms Ville Syrjala
                   ` (19 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Make life a bit simpler by sticking a sentinel at the end of
the dbuf slice arrays. This way we don't need to pass in the
size. Also unify the types (u8 vs. u32) for active_pipes.
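
The idea in a nutshell (made up names, just a sketch):

	struct conf_entry {
		u8 active_pipes;	/* 0 == sentinel */
		u8 mask;
	};

	static const struct conf_entry table[] = {
		{ .active_pipes = BIT(PIPE_A), .mask = BIT(DBUF_S1), },
		{ .active_pipes = BIT(PIPE_B), .mask = BIT(DBUF_S2), },
		{}
	};

	static u8 lookup(u8 active_pipes)
	{
		int i;

		for (i = 0; table[i].active_pipes != 0; i++) {
			if (table[i].active_pipes == active_pipes)
				return table[i].mask;
		}

		return 0;
	}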

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 34 +++++++++++++--------------------
 1 file changed, 13 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index abeb4b19071f..a2e78969c0df 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3843,7 +3843,7 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv)
 }
 
 static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
-				  u32 active_pipes);
+				  u8 active_pipes);
 
 static void
 skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
@@ -4228,6 +4228,7 @@ static const struct dbuf_slice_conf_entry icl_allowed_dbufs[] =
 			[PIPE_C] = BIT(DBUF_S2),
 		},
 	},
+	{}
 };
 
 /*
@@ -4350,16 +4351,15 @@ static const struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
 			[PIPE_D] = BIT(DBUF_S2),
 		},
 	},
+	{}
 };
 
-static u8 compute_dbuf_slices(enum pipe pipe,
-			      u32 active_pipes,
-			      const struct dbuf_slice_conf_entry *dbuf_slices,
-			      int size)
+static u8 compute_dbuf_slices(enum pipe pipe, u8 active_pipes,
+			      const struct dbuf_slice_conf_entry *dbuf_slices)
 {
 	int i;
 
-	for (i = 0; i < size; i++) {
+	for (i = 0; dbuf_slices[i].active_pipes != 0; i++) {
 		if (dbuf_slices[i].active_pipes == active_pipes)
 			return dbuf_slices[i].dbuf_mask[pipe];
 	}
@@ -4371,8 +4371,7 @@ static u8 compute_dbuf_slices(enum pipe pipe,
  * returns correspondent DBuf slice mask as stated in BSpec for particular
  * platform.
  */
-static u32 icl_compute_dbuf_slices(enum pipe pipe,
-				   u32 active_pipes)
+static u8 icl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
 {
 	/*
 	 * FIXME: For ICL this is still a bit unclear as prev BSpec revision
@@ -4386,32 +4385,25 @@ static u32 icl_compute_dbuf_slices(enum pipe pipe,
 	 * still here - we will need it once those additional constraints
 	 * pop up.
 	 */
-	return compute_dbuf_slices(pipe, active_pipes,
-				   icl_allowed_dbufs,
-				   ARRAY_SIZE(icl_allowed_dbufs));
+	return compute_dbuf_slices(pipe, active_pipes, icl_allowed_dbufs);
 }
 
-static u32 tgl_compute_dbuf_slices(enum pipe pipe,
-				   u32 active_pipes)
+static u8 tgl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
 {
-	return compute_dbuf_slices(pipe, active_pipes,
-				   tgl_allowed_dbufs,
-				   ARRAY_SIZE(tgl_allowed_dbufs));
+	return compute_dbuf_slices(pipe, active_pipes, tgl_allowed_dbufs);
 }
 
 static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
-				  u32 active_pipes)
+				  u8 active_pipes)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pipe = crtc->pipe;
 
 	if (IS_GEN(dev_priv, 12))
-		return tgl_compute_dbuf_slices(pipe,
-					       active_pipes);
+		return tgl_compute_dbuf_slices(pipe, active_pipes);
 	else if (IS_GEN(dev_priv, 11))
-		return icl_compute_dbuf_slices(pipe,
-					       active_pipes);
+		return icl_compute_dbuf_slices(pipe, active_pipes);
 	/*
 	 * For anything else just return one slice yet.
 	 * Should be extended for other platforms.
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (3 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 04/20] drm/i915: Use a sentinel to terminate the dbuf slice arrays Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:30   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs Ville Syrjala
                   ` (18 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Currently skl_compute_dbuf_slices() returns 0 for any inactive pipe on
icl+, but returns BIT(DBUF_S1) on pre-icl for any pipe (whether it's
active or not). Let's make the behaviour consistent and always return 0
for any inactive pipe.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index a2e78969c0df..640f4c4fd508 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4408,7 +4408,7 @@ static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
 	 * For anything else just return one slice yet.
 	 * Should be extended for other platforms.
 	 */
-	return BIT(DBUF_S1);
+	return active_pipes & BIT(pipe) ? BIT(DBUF_S1) : 0;
 }
 
 static u64
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (4 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-03-04 16:29   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code Ville Syrjala
                   ` (17 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Polish some of the dbuf code to give more meaningful debug
messages and whatnot. Also we can switch over to the per-device
debugs/warns at the same time.
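
I.e. the usual conversion pattern, for reference:

	/* before: not tied to any particular device */
	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);

	/* after: per-device variant */
	drm_dbg_kms(&dev_priv->drm,
		    "Updating dbuf slices to 0x%x\n", req_slices);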

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 .../drm/i915/display/intel_display_power.c    | 40 +++++++++----------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index 6e25a1317161..e81e561e8ac0 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -4433,11 +4433,12 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
 	mutex_unlock(&power_domains->lock);
 }
 
-static inline
-bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
-			  i915_reg_t reg, bool enable)
+static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
+				 enum dbuf_slice slice, bool enable)
 {
-	u32 val, status;
+	i915_reg_t reg = DBUF_CTL_S(slice);
+	bool state;
+	u32 val;
 
 	val = intel_de_read(dev_priv, reg);
 	val = enable ? (val | DBUF_POWER_REQUEST) : (val & ~DBUF_POWER_REQUEST);
@@ -4445,13 +4446,10 @@ bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
 	intel_de_posting_read(dev_priv, reg);
 	udelay(10);
 
-	status = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
-	if ((enable && !status) || (!enable && status)) {
-		drm_err(&dev_priv->drm, "DBus power %s timeout!\n",
-			enable ? "enable" : "disable");
-		return false;
-	}
-	return true;
+	state = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
+	drm_WARN(&dev_priv->drm, enable != state,
+		 "DBuf slice %d power %s timeout!\n",
+		 slice, enable ? "enable" : "disable");
 }
 
 static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
@@ -4467,14 +4465,16 @@ static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
 void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
 			    u8 req_slices)
 {
-	int i;
-	int max_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
+	int num_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
+	enum dbuf_slice slice;
 
-	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
-		 "Invalid number of dbuf slices requested\n");
+	drm_WARN(&dev_priv->drm, req_slices & ~(BIT(num_slices) - 1),
+		 "Invalid set of dbuf slices (0x%x) requested (num dbuf slices %d)\n",
+		 req_slices, num_slices);
 
-	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
+	drm_dbg_kms(&dev_priv->drm,
+		    "Updating dbuf slices to 0x%x\n", req_slices);
 
 	/*
 	 * Might be running this in parallel to gen9_dc_off_power_well_enable
@@ -4485,11 +4485,9 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
 	 */
 	mutex_lock(&power_domains->lock);
 
-	for (i = 0; i < max_slices; i++) {
-		intel_dbuf_slice_set(dev_priv,
-				     DBUF_CTL_S(i),
-				     (req_slices & BIT(i)) != 0);
-	}
+	for (slice = DBUF_S1; slice < num_slices; slice++)
+		intel_dbuf_slice_set(dev_priv, slice,
+				     req_slices & BIT(slice));
 
 	dev_priv->enabled_dbuf_slices_mask = req_slices;
 
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (5 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-03-04 17:14   ` Lisovskiy, Stanislav
                     ` (2 more replies)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state Ville Syrjala
                   ` (16 subsequent siblings)
  23 siblings, 3 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

The low level dbuf slice code is rather inconsistent in its
function naming and organization. Make it more consistent.

Also share the enable/disable functions between all platforms
since the same code works just fine for all of them.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
 .../drm/i915/display/intel_display_power.c    | 44 ++++++++-----------
 .../drm/i915/display/intel_display_power.h    |  6 +--
 3 files changed, 24 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 3031e64ee518..6952c398cc43 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15296,9 +15296,8 @@ static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
 	u8 required_slices = state->enabled_dbuf_slices_mask;
 	u8 slices_union = hw_enabled_slices | required_slices;
 
-	/* If 2nd DBuf slice required, enable it here */
 	if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
-		icl_dbuf_slices_update(dev_priv, slices_union);
+		gen9_dbuf_slices_update(dev_priv, slices_union);
 }
 
 static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
@@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
 	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
 	u8 required_slices = state->enabled_dbuf_slices_mask;
 
-	/* If 2nd DBuf slice is no more required disable it */
 	if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
-		icl_dbuf_slices_update(dev_priv, required_slices);
+		gen9_dbuf_slices_update(dev_priv, required_slices);
 }
 
 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index e81e561e8ac0..ce3bbc4c7a27 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -4433,15 +4433,18 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
 	mutex_unlock(&power_domains->lock);
 }
 
-static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
-				 enum dbuf_slice slice, bool enable)
+static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
+				enum dbuf_slice slice, bool enable)
 {
 	i915_reg_t reg = DBUF_CTL_S(slice);
 	bool state;
 	u32 val;
 
 	val = intel_de_read(dev_priv, reg);
-	val = enable ? (val | DBUF_POWER_REQUEST) : (val & ~DBUF_POWER_REQUEST);
+	if (enable)
+		val |= DBUF_POWER_REQUEST;
+	else
+		val &= ~DBUF_POWER_REQUEST;
 	intel_de_write(dev_priv, reg, val);
 	intel_de_posting_read(dev_priv, reg);
 	udelay(10);
@@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
 		 slice, enable ? "enable" : "disable");
 }
 
-static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
-{
-	icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
-}
-
-static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
-{
-	icl_dbuf_slices_update(dev_priv, 0);
-}
-
-void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
-			    u8 req_slices)
+void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+			     u8 req_slices)
 {
 	int num_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
@@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
 	mutex_lock(&power_domains->lock);
 
 	for (slice = DBUF_S1; slice < num_slices; slice++)
-		intel_dbuf_slice_set(dev_priv, slice,
-				     req_slices & BIT(slice));
+		gen9_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));
 
 	dev_priv->enabled_dbuf_slices_mask = req_slices;
 
 	mutex_unlock(&power_domains->lock);
 }
 
-static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
+static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
 {
-	skl_ddb_get_hw_state(dev_priv);
+	dev_priv->enabled_dbuf_slices_mask =
+		intel_enabled_dbuf_slices_mask(dev_priv);
+
 	/*
 	 * Just power up at least 1 slice, we will
 	 * figure out later which slices we have and what we need.
 	 */
-	icl_dbuf_slices_update(dev_priv, dev_priv->enabled_dbuf_slices_mask |
-			       BIT(DBUF_S1));
+	gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
+				dev_priv->enabled_dbuf_slices_mask);
 }
 
-static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
+static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
 {
-	icl_dbuf_slices_update(dev_priv, 0);
+	gen9_dbuf_slices_update(dev_priv, 0);
 }
 
 static void icl_mbus_init(struct drm_i915_private *dev_priv)
@@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
 	intel_cdclk_init_hw(dev_priv);
 
 	/* 5. Enable DBUF. */
-	icl_dbuf_enable(dev_priv);
+	gen9_dbuf_enable(dev_priv);
 
 	/* 6. Setup MBUS. */
 	icl_mbus_init(dev_priv);
@@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct drm_i915_private *dev_priv)
 	/* 1. Disable all display engine functions -> aready done */
 
 	/* 2. Disable DBUF */
-	icl_dbuf_disable(dev_priv);
+	gen9_dbuf_disable(dev_priv);
 
 	/* 3. Disable CD clock */
 	intel_cdclk_uninit_hw(dev_priv);
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h b/drivers/gpu/drm/i915/display/intel_display_power.h
index 601e000ffd0d..1a275611241e 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.h
+++ b/drivers/gpu/drm/i915/display/intel_display_power.h
@@ -312,13 +312,13 @@ enum dbuf_slice {
 	DBUF_S2,
 };
 
+void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+			     u8 req_slices);
+
 #define with_intel_display_power(i915, domain, wf) \
 	for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
 	     intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)
 
-void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
-			    u8 req_slices);
-
 void chv_phy_powergate_lanes(struct intel_encoder *encoder,
 			     bool override, unsigned int mask);
 bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

* [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (6 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:43   ` Lisovskiy, Stanislav
  2020-04-01  8:13   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 09/20] drm/i915: Nuke skl_ddb_get_hw_state() Ville Syrjala
                   ` (15 subsequent siblings)
  23 siblings, 2 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Add a global state to track the dbuf slices. Gets rid of all the nasty
coupling between state->modeset and dbuf recomputation. Also we can now
totally nuke state->active_pipe_changes.

dev_priv->wm.distrust_bios_wm still remains, but that too will get
nuked soon.
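
As an example of how the commit side now uses the old vs. new state,
the dbuf slices get updated in two steps around the plane update
(roughly what the pre/post hooks below end up doing):

	/* pre plane update: enable old|new so nothing loses its slice */
	gen9_dbuf_slices_update(dev_priv,
				old_dbuf_state->enabled_slices |
				new_dbuf_state->enabled_slices);

	/* ... pipe/plane updates ... */

	/* post plane update: turn off the slices we no longer need */
	gen9_dbuf_slices_update(dev_priv,
				new_dbuf_state->enabled_slices);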

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  67 +++++--
 .../drm/i915/display/intel_display_power.c    |   8 +-
 .../drm/i915/display/intel_display_types.h    |  13 --
 drivers/gpu/drm/i915/i915_drv.h               |  11 +-
 drivers/gpu/drm/i915/intel_pm.c               | 189 ++++++++++++------
 drivers/gpu/drm/i915/intel_pm.h               |  22 ++
 6 files changed, 209 insertions(+), 101 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 6952c398cc43..659b952c8e2f 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -7581,6 +7581,8 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
 		to_intel_bw_state(dev_priv->bw_obj.state);
 	struct intel_cdclk_state *cdclk_state =
 		to_intel_cdclk_state(dev_priv->cdclk.obj.state);
+	struct intel_dbuf_state *dbuf_state =
+		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
 	struct intel_crtc_state *crtc_state =
 		to_intel_crtc_state(crtc->base.state);
 	enum intel_display_power_domain domain;
@@ -7654,6 +7656,8 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
 	cdclk_state->min_voltage_level[pipe] = 0;
 	cdclk_state->active_pipes &= ~BIT(pipe);
 
+	dbuf_state->active_pipes &= ~BIT(pipe);
+
 	bw_state->data_rate[pipe] = 0;
 	bw_state->num_active_planes[pipe] = 0;
 }
@@ -13991,10 +13995,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
 	hw_enabled_slices = intel_enabled_dbuf_slices_mask(dev_priv);
 
 	if (INTEL_GEN(dev_priv) >= 11 &&
-	    hw_enabled_slices != dev_priv->enabled_dbuf_slices_mask)
+	    hw_enabled_slices != dev_priv->dbuf.enabled_slices)
 		drm_err(&dev_priv->drm,
 			"mismatch in DBUF Slices (expected 0x%x, got 0x%x)\n",
-			dev_priv->enabled_dbuf_slices_mask,
+			dev_priv->dbuf.enabled_slices,
 			hw_enabled_slices);
 
 	/* planes */
@@ -14529,9 +14533,7 @@ static int intel_modeset_checks(struct intel_atomic_state *state)
 	state->modeset = true;
 	state->active_pipes = intel_calc_active_pipes(state, dev_priv->active_pipes);
 
-	state->active_pipe_changes = state->active_pipes ^ dev_priv->active_pipes;
-
-	if (state->active_pipe_changes) {
+	if (state->active_pipes != dev_priv->active_pipes) {
 		ret = _intel_atomic_lock_global_state(state);
 		if (ret)
 			return ret;
@@ -15292,22 +15294,38 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
 static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
-	u8 required_slices = state->enabled_dbuf_slices_mask;
-	u8 slices_union = hw_enabled_slices | required_slices;
+	const struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
 
-	if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
-		gen9_dbuf_slices_update(dev_priv, slices_union);
+	if (!new_dbuf_state ||
+	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
+		return;
+
+	WARN_ON(!new_dbuf_state->base.changed);
+
+	gen9_dbuf_slices_update(dev_priv,
+				old_dbuf_state->enabled_slices |
+				new_dbuf_state->enabled_slices);
 }
 
 static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
-	u8 required_slices = state->enabled_dbuf_slices_mask;
+	const struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
 
-	if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
-		gen9_dbuf_slices_update(dev_priv, required_slices);
+	if (!new_dbuf_state ||
+	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
+		return;
+
+	WARN_ON(!new_dbuf_state->base.changed);
+
+	gen9_dbuf_slices_update(dev_priv,
+				new_dbuf_state->enabled_slices);
 }
 
 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
@@ -15562,9 +15580,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 	if (state->modeset)
 		intel_encoders_update_prepare(state);
 
-	/* Enable all new slices, we might need */
-	if (state->modeset)
-		icl_dbuf_slice_pre_update(state);
+	icl_dbuf_slice_pre_update(state);
 
 	/* Now enable the clocks, plane, pipe, and connectors that we set up. */
 	dev_priv->display.commit_modeset_enables(state);
@@ -15619,9 +15635,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 			dev_priv->display.optimize_watermarks(state, crtc);
 	}
 
-	/* Disable all slices, we don't need */
-	if (state->modeset)
-		icl_dbuf_slice_post_update(state);
+	icl_dbuf_slice_post_update(state);
 
 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
 		intel_post_plane_update(state, crtc);
@@ -17507,10 +17521,14 @@ void intel_modeset_init_hw(struct drm_i915_private *i915)
 {
 	struct intel_cdclk_state *cdclk_state =
 		to_intel_cdclk_state(i915->cdclk.obj.state);
+	struct intel_dbuf_state *dbuf_state =
+		to_intel_dbuf_state(i915->dbuf.obj.state);
 
 	intel_update_cdclk(i915);
 	intel_dump_cdclk_config(&i915->cdclk.hw, "Current CDCLK");
 	cdclk_state->logical = cdclk_state->actual = i915->cdclk.hw;
+
+	dbuf_state->enabled_slices = i915->dbuf.enabled_slices;
 }
 
 static int sanitize_watermarks_add_affected(struct drm_atomic_state *state)
@@ -17800,6 +17818,10 @@ int intel_modeset_init(struct drm_i915_private *i915)
 	if (ret)
 		return ret;
 
+	ret = intel_dbuf_init(i915);
+	if (ret)
+		return ret;
+
 	ret = intel_bw_init(i915);
 	if (ret)
 		return ret;
@@ -18303,6 +18325,8 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_cdclk_state *cdclk_state =
 		to_intel_cdclk_state(dev_priv->cdclk.obj.state);
+	struct intel_dbuf_state *dbuf_state =
+		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
 	enum pipe pipe;
 	struct intel_crtc *crtc;
 	struct intel_encoder *encoder;
@@ -18334,7 +18358,8 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
 			    enableddisabled(crtc_state->hw.active));
 	}
 
-	dev_priv->active_pipes = cdclk_state->active_pipes = active_pipes;
+	dev_priv->active_pipes = cdclk_state->active_pipes =
+		dbuf_state->active_pipes = active_pipes;
 
 	readout_plane_state(dev_priv);
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index ce3bbc4c7a27..dc0c9694b714 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -1062,7 +1062,7 @@ static bool gen9_dc_off_power_well_enabled(struct drm_i915_private *dev_priv,
 static void gen9_assert_dbuf_enabled(struct drm_i915_private *dev_priv)
 {
 	u8 hw_enabled_dbuf_slices = intel_enabled_dbuf_slices_mask(dev_priv);
-	u8 enabled_dbuf_slices = dev_priv->enabled_dbuf_slices_mask;
+	u8 enabled_dbuf_slices = dev_priv->dbuf.enabled_slices;
 
 	drm_WARN(&dev_priv->drm,
 		 hw_enabled_dbuf_slices != enabled_dbuf_slices,
@@ -4481,14 +4481,14 @@ void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
 	for (slice = DBUF_S1; slice < num_slices; slice++)
 		gen9_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));
 
-	dev_priv->enabled_dbuf_slices_mask = req_slices;
+	dev_priv->dbuf.enabled_slices = req_slices;
 
 	mutex_unlock(&power_domains->lock);
 }
 
 static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
 {
-	dev_priv->enabled_dbuf_slices_mask =
+	dev_priv->dbuf.enabled_slices =
 		intel_enabled_dbuf_slices_mask(dev_priv);
 
 	/*
@@ -4496,7 +4496,7 @@ static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
 	 * figure out later which slices we have and what we need.
 	 */
 	gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
-				dev_priv->enabled_dbuf_slices_mask);
+				dev_priv->dbuf.enabled_slices);
 }
 
 static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 0d8a64305464..165efa00d88b 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -471,16 +471,6 @@ struct intel_atomic_state {
 
 	bool dpll_set, modeset;
 
-	/*
-	 * Does this transaction change the pipes that are active?  This mask
-	 * tracks which CRTC's have changed their active state at the end of
-	 * the transaction (not counting the temporary disable during modesets).
-	 * This mask should only be non-zero when intel_state->modeset is true,
-	 * but the converse is not necessarily true; simply changing a mode may
-	 * not flip the final active status of any CRTC's
-	 */
-	u8 active_pipe_changes;
-
 	u8 active_pipes;
 
 	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
@@ -498,9 +488,6 @@ struct intel_atomic_state {
 	 */
 	bool global_state_changed;
 
-	/* Number of enabled DBuf slices */
-	u8 enabled_dbuf_slices_mask;
-
 	struct i915_sw_fence commit_ready;
 
 	struct llist_node freed;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 88e4fb8ac739..d03c84f373e6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1006,6 +1006,13 @@ struct drm_i915_private {
 		struct intel_global_obj obj;
 	} cdclk;
 
+	struct {
+		/* The current hardware dbuf configuration */
+		u8 enabled_slices;
+
+		struct intel_global_obj obj;
+	} dbuf;
+
 	/**
 	 * wq - Driver workqueue for GEM.
 	 *
@@ -1181,12 +1188,12 @@ struct drm_i915_private {
 		 * Set during HW readout of watermarks/DDB.  Some platforms
 		 * need to know when we're still using BIOS-provided values
 		 * (which we don't fully trust).
+		 *
+		 * FIXME get rid of this.
 		 */
 		bool distrust_bios_wm;
 	} wm;
 
-	u8 enabled_dbuf_slices_mask; /* GEN11 has configurable 2 slices */
-
 	struct dram_info {
 		bool valid;
 		bool is_16gb_dimm;
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 640f4c4fd508..d4730d9b4e1b 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3845,7 +3845,7 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv)
 static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
 				  u8 active_pipes);
 
-static void
+static int
 skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 				   const struct intel_crtc_state *crtc_state,
 				   const u64 total_data_rate,
@@ -3858,30 +3858,29 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	const struct intel_crtc *crtc;
 	u32 pipe_width = 0, total_width_in_range = 0, width_before_pipe_in_range = 0;
 	enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe;
+	struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(intel_state);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(intel_state);
+	u8 active_pipes = new_dbuf_state->active_pipes;
 	u16 ddb_size;
 	u32 ddb_range_size;
 	u32 i;
 	u32 dbuf_slice_mask;
-	u32 active_pipes;
 	u32 offset;
 	u32 slice_size;
 	u32 total_slice_mask;
 	u32 start, end;
+	int ret;
 
-	if (drm_WARN_ON(&dev_priv->drm, !state) || !crtc_state->hw.active) {
+	*num_active = hweight8(active_pipes);
+
+	if (!crtc_state->hw.active) {
 		alloc->start = 0;
 		alloc->end = 0;
-		*num_active = hweight8(dev_priv->active_pipes);
-		return;
+		return 0;
 	}
 
-	if (intel_state->active_pipe_changes)
-		active_pipes = intel_state->active_pipes;
-	else
-		active_pipes = dev_priv->active_pipes;
-
-	*num_active = hweight8(active_pipes);
-
 	ddb_size = intel_get_ddb_size(dev_priv);
 
 	slice_size = ddb_size / INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
@@ -3894,13 +3893,16 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 * that changes the active CRTC list or do modeset would need to
 	 * grab _all_ crtc locks, including the one we currently hold.
 	 */
-	if (!intel_state->active_pipe_changes && !intel_state->modeset) {
+	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
+	    !dev_priv->wm.distrust_bios_wm) {
 		/*
 		 * alloc may be cleared by clear_intel_crtc_state,
 		 * copy from old state to be sure
+		 *
+		 * FIXME get rid of this mess
 		 */
 		*alloc = to_intel_crtc_state(for_crtc->state)->wm.skl.ddb;
-		return;
+		return 0;
 	}
 
 	/*
@@ -3979,7 +3981,13 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 * FIXME: For now we always enable slice S1 as per
 	 * the Bspec display initialization sequence.
 	 */
-	intel_state->enabled_dbuf_slices_mask = total_slice_mask | BIT(DBUF_S1);
+	new_dbuf_state->enabled_slices = total_slice_mask | BIT(DBUF_S1);
+
+	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
+		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
+		if (ret)
+			return ret;
+	}
 
 	start = ddb_range_size * width_before_pipe_in_range / total_width_in_range;
 	end = ddb_range_size *
@@ -3990,9 +3998,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 
 	DRM_DEBUG_KMS("Pipe %d ddb %d-%d\n", for_pipe,
 		      alloc->start, alloc->end);
-	DRM_DEBUG_KMS("Enabled ddb slices mask %x num supported %d\n",
-		      intel_state->enabled_dbuf_slices_mask,
-		      INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
+
+	return 0;
 }
 
 static int skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
@@ -4112,8 +4119,8 @@ void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
 
 void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv)
 {
-	dev_priv->enabled_dbuf_slices_mask =
-				intel_enabled_dbuf_slices_mask(dev_priv);
+	dev_priv->dbuf.enabled_slices =
+		intel_enabled_dbuf_slices_mask(dev_priv);
 }
 
 /*
@@ -4546,6 +4553,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
 	u32 blocks;
 	int level;
+	int ret;
 
 	/* Clear the partitioning for disabled planes. */
 	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y));
@@ -4567,8 +4575,12 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 							 uv_plane_data_rate);
 
 
-	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state, total_data_rate,
-					   alloc, &num_active);
+	ret = skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
+						 total_data_rate,
+						 alloc, &num_active);
+	if (ret)
+		return ret;
+
 	alloc_size = skl_ddb_entry_size(alloc);
 	if (alloc_size == 0)
 		return 0;
@@ -5451,14 +5463,11 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state,
 static int
 skl_compute_ddb(struct intel_atomic_state *state)
 {
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
 	struct intel_crtc_state *old_crtc_state;
 	struct intel_crtc_state *new_crtc_state;
 	struct intel_crtc *crtc;
 	int ret, i;
 
-	state->enabled_dbuf_slices_mask = dev_priv->enabled_dbuf_slices_mask;
-
 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
 					    new_crtc_state, i) {
 		ret = skl_allocate_pipe_ddb(new_crtc_state);
@@ -5598,7 +5607,8 @@ skl_print_wm_changes(struct intel_atomic_state *state)
 	}
 }
 
-static int intel_add_all_pipes(struct intel_atomic_state *state)
+static int intel_add_affected_pipes(struct intel_atomic_state *state,
+				    u8 pipe_mask)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
 	struct intel_crtc *crtc;
@@ -5606,6 +5616,9 @@ static int intel_add_all_pipes(struct intel_atomic_state *state)
 	for_each_intel_crtc(&dev_priv->drm, crtc) {
 		struct intel_crtc_state *crtc_state;
 
+		if ((pipe_mask & BIT(crtc->pipe)) == 0)
+			continue;
+
 		crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
 		if (IS_ERR(crtc_state))
 			return PTR_ERR(crtc_state);
@@ -5618,49 +5631,54 @@ static int
 skl_ddb_add_affected_pipes(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	int ret;
+	struct intel_crtc_state *crtc_state;
+	struct intel_crtc *crtc;
+	int i, ret;
 
-	/*
-	 * If this is our first atomic update following hardware readout,
-	 * we can't trust the DDB that the BIOS programmed for us.  Let's
-	 * pretend that all pipes switched active status so that we'll
-	 * ensure a full DDB recompute.
-	 */
 	if (dev_priv->wm.distrust_bios_wm) {
-		ret = drm_modeset_lock(&dev_priv->drm.mode_config.connection_mutex,
-				       state->base.acquire_ctx);
-		if (ret)
-			return ret;
-
-		state->active_pipe_changes = INTEL_INFO(dev_priv)->pipe_mask;
-
 		/*
-		 * We usually only initialize state->active_pipes if we
-		 * we're doing a modeset; make sure this field is always
-		 * initialized during the sanitization process that happens
-		 * on the first commit too.
+		 * skl_ddb_get_pipe_allocation_limits() currently requires
+		 * all active pipes to be included in the state so that
+		 * it can redistribute the dbuf among them, and it really
+		 * wants to recompute things when distrust_bios_wm is set
+		 * so we add all the pipes to the state.
 		 */
-		if (!state->modeset)
-			state->active_pipes = dev_priv->active_pipes;
+		ret = intel_add_affected_pipes(state, ~0);
+		if (ret)
+			return ret;
 	}
 
-	/*
-	 * If the modeset changes which CRTC's are active, we need to
-	 * recompute the DDB allocation for *all* active pipes, even
-	 * those that weren't otherwise being modified in any way by this
-	 * atomic commit.  Due to the shrinking of the per-pipe allocations
-	 * when new active CRTC's are added, it's possible for a pipe that
-	 * we were already using and aren't changing at all here to suddenly
-	 * become invalid if its DDB needs exceeds its new allocation.
-	 *
-	 * Note that if we wind up doing a full DDB recompute, we can't let
-	 * any other display updates race with this transaction, so we need
-	 * to grab the lock on *all* CRTC's.
-	 */
-	if (state->active_pipe_changes || state->modeset) {
-		ret = intel_add_all_pipes(state);
+	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+		struct intel_dbuf_state *new_dbuf_state;
+		const struct intel_dbuf_state *old_dbuf_state;
+
+		new_dbuf_state = intel_atomic_get_dbuf_state(state);
+		if (IS_ERR(new_dbuf_state))
+			return PTR_ERR(new_dbuf_state);
+
+		old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
+
+		new_dbuf_state->active_pipes =
+			intel_calc_active_pipes(state, old_dbuf_state->active_pipes);
+
+		if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes)
+			break;
+
+		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
+		if (ret)
+			return ret;
+
+		/*
+		 * skl_ddb_get_pipe_allocation_limits() currently requires
+		 * all active pipes to be included in the state so that
+		 * it can redistribute the dbuf among them.
+		 */
+		ret = intel_add_affected_pipes(state,
+					       new_dbuf_state->active_pipes);
 		if (ret)
 			return ret;
+
+		break;
 	}
 
 	return 0;
@@ -7493,3 +7511,52 @@ void intel_pm_setup(struct drm_i915_private *dev_priv)
 	dev_priv->runtime_pm.suspended = false;
 	atomic_set(&dev_priv->runtime_pm.wakeref_count, 0);
 }
+
+static struct intel_global_state *intel_dbuf_duplicate_state(struct intel_global_obj *obj)
+{
+	struct intel_dbuf_state *dbuf_state;
+
+	dbuf_state = kmemdup(obj->state, sizeof(*dbuf_state), GFP_KERNEL);
+	if (!dbuf_state)
+		return NULL;
+
+	return &dbuf_state->base;
+}
+
+static void intel_dbuf_destroy_state(struct intel_global_obj *obj,
+				     struct intel_global_state *state)
+{
+	kfree(state);
+}
+
+static const struct intel_global_state_funcs intel_dbuf_funcs = {
+	.atomic_duplicate_state = intel_dbuf_duplicate_state,
+	.atomic_destroy_state = intel_dbuf_destroy_state,
+};
+
+struct intel_dbuf_state *
+intel_atomic_get_dbuf_state(struct intel_atomic_state *state)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	struct intel_global_state *dbuf_state;
+
+	dbuf_state = intel_atomic_get_global_obj_state(state, &dev_priv->dbuf.obj);
+	if (IS_ERR(dbuf_state))
+		return ERR_CAST(dbuf_state);
+
+	return to_intel_dbuf_state(dbuf_state);
+}
+
+int intel_dbuf_init(struct drm_i915_private *dev_priv)
+{
+	struct intel_dbuf_state *dbuf_state;
+
+	dbuf_state = kzalloc(sizeof(*dbuf_state), GFP_KERNEL);
+	if (!dbuf_state)
+		return -ENOMEM;
+
+	intel_atomic_global_obj_init(dev_priv, &dev_priv->dbuf.obj,
+				     &dbuf_state->base, &intel_dbuf_funcs);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index d60a85421c5a..fadf7cbc44c4 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -8,6 +8,8 @@
 
 #include <linux/types.h>
 
+#include "display/intel_global_state.h"
+
 #include "i915_reg.h"
 
 struct drm_device;
@@ -59,4 +61,24 @@ void intel_enable_ipc(struct drm_i915_private *dev_priv);
 
 bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable);
 
+struct intel_dbuf_state {
+	struct intel_global_state base;
+
+	u8 enabled_slices;
+	u8 active_pipes;
+};
+
+int intel_dbuf_init(struct drm_i915_private *dev_priv);
+
+struct intel_dbuf_state *
+intel_atomic_get_dbuf_state(struct intel_atomic_state *state);
+
+#define to_intel_dbuf_state(x) container_of((x), struct intel_dbuf_state, base)
+#define intel_atomic_get_old_dbuf_state(state) \
+	to_intel_dbuf_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))
+#define intel_atomic_get_new_dbuf_state(state) \
+	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))
+
+int intel_dbuf_init(struct drm_i915_private *dev_priv);
+
 #endif /* __INTEL_PM_H__ */
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 09/20] drm/i915: Nuke skl_ddb_get_hw_state()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (7 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26 11:40   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 10/20] drm/i915: Move the dbuf pre/post plane update Ville Syrjala
                   ` (14 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

skl_ddb_get_hw_state() is redundant and kinda called in the wrong
spot anyway. Just kill it.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 7 -------
 drivers/gpu/drm/i915/intel_pm.h | 1 -
 2 files changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index d4730d9b4e1b..87f88ea6b7ae 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4117,12 +4117,6 @@ void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
 	intel_display_power_put(dev_priv, power_domain, wakeref);
 }
 
-void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv)
-{
-	dev_priv->dbuf.enabled_slices =
-		intel_enabled_dbuf_slices_mask(dev_priv);
-}
-
 /*
  * Determines the downscale amount of a plane for the purposes of watermark calculations.
  * The bspec defines downscale amount as:
@@ -5910,7 +5904,6 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
 	struct intel_crtc *crtc;
 	struct intel_crtc_state *crtc_state;
 
-	skl_ddb_get_hw_state(dev_priv);
 	for_each_intel_crtc(&dev_priv->drm, crtc) {
 		crtc_state = to_intel_crtc_state(crtc->base.state);
 
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index fadf7cbc44c4..1054a0ab1e40 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -38,7 +38,6 @@ u8 intel_enabled_dbuf_slices_mask(struct drm_i915_private *dev_priv);
 void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
 			       struct skl_ddb_entry *ddb_y,
 			       struct skl_ddb_entry *ddb_uv);
-void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv);
 void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 			      struct skl_pipe_wm *out);
 void g4x_wm_sanitize(struct drm_i915_private *dev_priv);
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 10/20] drm/i915: Move the dbuf pre/post plane update
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (8 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 09/20] drm/i915: Nuke skl_ddb_get_hw_state() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26 11:38   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 11/20] drm/i915: Clean up dbuf debugs during .atomic_check() Ville Syrjala
                   ` (13 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Encapsulate the dbuf state more by moving the pre/post
plane functions out from intel_display.c. We stick them
into intel_pm.c since that's where the rest of the code
lives for now.

Eventually we should add a new file for this stuff at which
point we also need to decide if it makes sense to even split
the wm code from the ddb code, or to keep them together.
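
For context, the moved hooks keep the usual two step dance: the
pre-plane hook powers up the union of the old and new slice masks
so that no slice still in use gets turned off mid-update, and the
post-plane hook trims things down to just the new mask. Roughly
(a sketch of the intent, not the literal code being moved):

    /* before the plane updates: old | new slices enabled */
    gen9_dbuf_slices_update(dev_priv,
                            old_dbuf_state->enabled_slices |
                            new_dbuf_state->enabled_slices);

    /* ... pipes/planes get reprogrammed ... */

    /* after the plane updates: only the new slices remain enabled */
    gen9_dbuf_slices_update(dev_priv,
                            new_dbuf_state->enabled_slices);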

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c | 41 +-------------------
 drivers/gpu/drm/i915/intel_pm.c              | 37 ++++++++++++++++++
 drivers/gpu/drm/i915/intel_pm.h              |  2 +
 3 files changed, 41 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 659b952c8e2f..6e96756f9a69 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15291,43 +15291,6 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
 				       state);
 }
 
-static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
-{
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	const struct intel_dbuf_state *new_dbuf_state =
-		intel_atomic_get_new_dbuf_state(state);
-	const struct intel_dbuf_state *old_dbuf_state =
-		intel_atomic_get_old_dbuf_state(state);
-
-	if (!new_dbuf_state ||
-	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
-		return;
-
-	WARN_ON(!new_dbuf_state->base.changed);
-
-	gen9_dbuf_slices_update(dev_priv,
-				old_dbuf_state->enabled_slices |
-				new_dbuf_state->enabled_slices);
-}
-
-static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
-{
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	const struct intel_dbuf_state *new_dbuf_state =
-		intel_atomic_get_new_dbuf_state(state);
-	const struct intel_dbuf_state *old_dbuf_state =
-		intel_atomic_get_old_dbuf_state(state);
-
-	if (!new_dbuf_state ||
-	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
-		return;
-
-	WARN_ON(!new_dbuf_state->base.changed);
-
-	gen9_dbuf_slices_update(dev_priv,
-				new_dbuf_state->enabled_slices);
-}
-
 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
@@ -15580,7 +15543,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 	if (state->modeset)
 		intel_encoders_update_prepare(state);
 
-	icl_dbuf_slice_pre_update(state);
+	intel_dbuf_pre_plane_update(state);
 
 	/* Now enable the clocks, plane, pipe, and connectors that we set up. */
 	dev_priv->display.commit_modeset_enables(state);
@@ -15635,7 +15598,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
 			dev_priv->display.optimize_watermarks(state, crtc);
 	}
 
-	icl_dbuf_slice_post_update(state);
+	intel_dbuf_post_plane_update(state);
 
 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
 		intel_post_plane_update(state, crtc);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 87f88ea6b7ae..de2822e5c62c 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7553,3 +7553,40 @@ int intel_dbuf_init(struct drm_i915_private *dev_priv)
 
 	return 0;
 }
+
+void intel_dbuf_pre_plane_update(struct intel_atomic_state *state)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
+
+	if (!new_dbuf_state ||
+	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
+		return;
+
+	WARN_ON(!new_dbuf_state->base.changed);
+
+	gen9_dbuf_slices_update(dev_priv,
+				old_dbuf_state->enabled_slices |
+				new_dbuf_state->enabled_slices);
+}
+
+void intel_dbuf_post_plane_update(struct intel_atomic_state *state)
+{
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
+
+	if (!new_dbuf_state ||
+	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
+		return;
+
+	WARN_ON(!new_dbuf_state->base.changed);
+
+	gen9_dbuf_slices_update(dev_priv,
+				new_dbuf_state->enabled_slices);
+}
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index 1054a0ab1e40..8204d6a5526c 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -79,5 +79,7 @@ intel_atomic_get_dbuf_state(struct intel_atomic_state *state);
 	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))
 
 int intel_dbuf_init(struct drm_i915_private *dev_priv);
+void intel_dbuf_pre_plane_update(struct intel_atomic_state *state);
+void intel_dbuf_post_plane_update(struct intel_atomic_state *state);
 
 #endif /* __INTEL_PM_H__ */
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 11/20] drm/i915: Clean up dbuf debugs during .atomic_check()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (9 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 10/20] drm/i915: Move the dbuf pre/post plane update Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26 11:32   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 12/20] drm/i915: Extract intel_crtc_ddb_weight() Ville Syrjala
                   ` (12 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Combine the two per-pipe dbuf debugs into one, and use the canonical
[CRTC:%d:%s] style to identify the crtc. Also use the same style as
the plane code uses for the ddb start/end, and prefix bitmasks properly
with 0x to make it clear they are in fact bitmasks.

We move the "how many total slices we are going to use" debug
outside the crtc loop so it gets printed only once at the end.
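
With that the two remaining debugs would look roughly like this
(the values here are made up purely for illustration):

    [CRTC:80:pipe A] dbuf slices 0x1, ddb (0 - 512), active pipes 0x3
    Enabled dbuf slices 0x1 -> 0x3 (out of 2 dbuf slices)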

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index de2822e5c62c..d2edfb820dd9 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3910,10 +3910,6 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 */
 	dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, active_pipes);
 
-	DRM_DEBUG_KMS("DBuf slice mask %x pipe %c active pipes %x\n",
-		      dbuf_slice_mask,
-		      pipe_name(for_pipe), active_pipes);
-
 	/*
 	 * Figure out at which DBuf slice we start, i.e if we start at Dbuf S2
 	 * and slice size is 1024, the offset would be 1024
@@ -3996,8 +3992,10 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	alloc->start = offset + start;
 	alloc->end = offset + end;
 
-	DRM_DEBUG_KMS("Pipe %d ddb %d-%d\n", for_pipe,
-		      alloc->start, alloc->end);
+	drm_dbg_kms(&dev_priv->drm,
+		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
+		    for_crtc->base.id, for_crtc->name,
+		    dbuf_slice_mask, alloc->start, alloc->end, active_pipes);
 
 	return 0;
 }
@@ -5457,7 +5455,10 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state,
 static int
 skl_compute_ddb(struct intel_atomic_state *state)
 {
-	struct intel_crtc_state *old_crtc_state;
+	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_dbuf_state *old_dbuf_state;
+	const struct intel_dbuf_state *new_dbuf_state;
+	const struct intel_crtc_state *old_crtc_state;
 	struct intel_crtc_state *new_crtc_state;
 	struct intel_crtc *crtc;
 	int ret, i;
@@ -5474,6 +5475,17 @@ skl_compute_ddb(struct intel_atomic_state *state)
 			return ret;
 	}
 
+	old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
+	new_dbuf_state = intel_atomic_get_new_dbuf_state(state);
+
+	if (new_dbuf_state &&
+	    new_dbuf_state->enabled_slices != old_dbuf_state->enabled_slices)
+		drm_dbg_kms(&dev_priv->drm,
+			    "Enabled dbuf slices 0x%x -> 0x%x (out of %d dbuf slices)\n",
+			    old_dbuf_state->enabled_slices,
+			    new_dbuf_state->enabled_slices,
+			    INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
+
 	return 0;
 }
 
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 12/20] drm/i915: Extract intel_crtc_ddb_weight()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (10 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 11/20] drm/i915: Clean up dbuf debugs during .atomic_check() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 13/20] drm/i915: Pass the crtc to skl_compute_dbuf_slices() Ville Syrjala
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

skl_ddb_get_pipe_allocation_limits() doesn't care how the weights
for distributing the ddb are calculated for each pipe. Put that
calculation into a separate function so that such mundane details
are hidden from view.
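
As a worked example of the (unchanged) math this weight feeds into:
with two active pipes sharing a 1024 block ddb range, pipe A at
hdisplay=1920 and pipe B at hdisplay=3840 get weights 1920 and 3840,
so (illustrative numbers only):

    pipe A: start = 1024 * 0 / 5760    = 0,   end = 1024 * 1920 / 5760 = 341
    pipe B: start = 1024 * 1920 / 5760 = 341, end = 1024 * 5760 / 5760 = 1024

i.e. pipe B ends up with roughly twice the ddb space of pipe A.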

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 46 ++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index d2edfb820dd9..3f48ce7517e2 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3842,6 +3842,25 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv)
 	return ddb_size;
 }
 
+static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_state)
+{
+	const struct drm_display_mode *adjusted_mode =
+		&crtc_state->hw.adjusted_mode;
+	int hdisplay, vdisplay;
+
+	if (!crtc_state->hw.active)
+		return 0;
+
+	/*
+	 * Watermark/ddb requirement highly depends upon width of the
+	 * framebuffer, So instead of allocating DDB equally among pipes
+	 * distribute DDB based on resolution/width of the display.
+	 */
+	drm_mode_get_hv_timing(adjusted_mode, &hdisplay, &vdisplay);
+
+	return hdisplay;
+}
+
 static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
 				  u8 active_pipes);
 
@@ -3856,7 +3875,7 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
 	struct drm_crtc *for_crtc = crtc_state->uapi.crtc;
 	const struct intel_crtc *crtc;
-	u32 pipe_width = 0, total_width_in_range = 0, width_before_pipe_in_range = 0;
+	unsigned int pipe_weight = 0, total_weight = 0, weight_before_pipe = 0;
 	enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe;
 	struct intel_dbuf_state *new_dbuf_state =
 		intel_atomic_get_new_dbuf_state(intel_state);
@@ -3925,18 +3944,11 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 */
 	ddb_range_size = hweight8(dbuf_slice_mask) * slice_size;
 
-	/*
-	 * Watermark/ddb requirement highly depends upon width of the
-	 * framebuffer, So instead of allocating DDB equally among pipes
-	 * distribute DDB based on resolution/width of the display.
-	 */
 	total_slice_mask = dbuf_slice_mask;
 	for_each_new_intel_crtc_in_state(intel_state, crtc, crtc_state, i) {
-		const struct drm_display_mode *adjusted_mode =
-			&crtc_state->hw.adjusted_mode;
 		enum pipe pipe = crtc->pipe;
-		int hdisplay, vdisplay;
-		u32 pipe_dbuf_slice_mask;
+		unsigned int weight;
+		u8 pipe_dbuf_slice_mask;
 
 		if (!crtc_state->hw.active)
 			continue;
@@ -3963,14 +3975,13 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 		if (dbuf_slice_mask != pipe_dbuf_slice_mask)
 			continue;
 
-		drm_mode_get_hv_timing(adjusted_mode, &hdisplay, &vdisplay);
-
-		total_width_in_range += hdisplay;
+		weight = intel_crtc_ddb_weight(crtc_state);
+		total_weight += weight;
 
 		if (pipe < for_pipe)
-			width_before_pipe_in_range += hdisplay;
+			weight_before_pipe += weight;
 		else if (pipe == for_pipe)
-			pipe_width = hdisplay;
+			pipe_weight = weight;
 	}
 
 	/*
@@ -3985,9 +3996,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 			return ret;
 	}
 
-	start = ddb_range_size * width_before_pipe_in_range / total_width_in_range;
-	end = ddb_range_size *
-		(width_before_pipe_in_range + pipe_width) / total_width_in_range;
+	start = ddb_range_size * weight_before_pipe / total_weight;
+	end = ddb_range_size * (weight_before_pipe + pipe_weight) / total_weight;
 
 	alloc->start = offset + start;
 	alloc->end = offset + end;
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 13/20] drm/i915: Pass the crtc to skl_compute_dbuf_slices()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (11 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 12/20] drm/i915: Extract intel_crtc_ddb_weight() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-26  8:41   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 14/20] drm/i915: Introduce intel_dbuf_slice_size() Ville Syrjala
                   ` (10 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

skl_compute_dbuf_slices() has no use for the crtc state, so
just pass the crtc itself.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 3f48ce7517e2..256622b603cd 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3861,7 +3861,7 @@ static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_st
 	return hdisplay;
 }
 
-static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
+static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc,
 				  u8 active_pipes);
 
 static int
@@ -3873,10 +3873,10 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 {
 	struct drm_atomic_state *state = crtc_state->uapi.state;
 	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
-	struct drm_crtc *for_crtc = crtc_state->uapi.crtc;
-	const struct intel_crtc *crtc;
+	struct intel_crtc *for_crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	struct intel_crtc *crtc;
 	unsigned int pipe_weight = 0, total_weight = 0, weight_before_pipe = 0;
-	enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe;
+	enum pipe for_pipe = for_crtc->pipe;
 	struct intel_dbuf_state *new_dbuf_state =
 		intel_atomic_get_new_dbuf_state(intel_state);
 	const struct intel_dbuf_state *old_dbuf_state =
@@ -3920,14 +3920,14 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 		 *
 		 * FIXME get rid of this mess
 		 */
-		*alloc = to_intel_crtc_state(for_crtc->state)->wm.skl.ddb;
+		*alloc = to_intel_crtc_state(for_crtc->base.state)->wm.skl.ddb;
 		return 0;
 	}
 
 	/*
 	 * Get allowed DBuf slices for correspondent pipe and platform.
 	 */
-	dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, active_pipes);
+	dbuf_slice_mask = skl_compute_dbuf_slices(for_crtc, active_pipes);
 
 	/*
 	 * Figure out at which DBuf slice we start, i.e if we start at Dbuf S2
@@ -3953,8 +3953,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 		if (!crtc_state->hw.active)
 			continue;
 
-		pipe_dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state,
-							       active_pipes);
+		pipe_dbuf_slice_mask =
+			skl_compute_dbuf_slices(crtc, active_pipes);
 
 		/*
 		 * According to BSpec pipe can share one dbuf slice with another
@@ -4004,7 +4004,7 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 
 	drm_dbg_kms(&dev_priv->drm,
 		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
-		    for_crtc->base.id, for_crtc->name,
+		    for_crtc->base.base.id, for_crtc->base.name,
 		    dbuf_slice_mask, alloc->start, alloc->end, active_pipes);
 
 	return 0;
@@ -4402,10 +4402,8 @@ static u8 tgl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
 	return compute_dbuf_slices(pipe, active_pipes, tgl_allowed_dbufs);
 }
 
-static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
-				  u8 active_pipes)
+static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc, u8 active_pipes)
 {
-	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
 	enum pipe pipe = crtc->pipe;
 
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 14/20] drm/i915: Introduce intel_dbuf_slice_size()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (12 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 13/20] drm/i915: Pass the crtc to skl_compute_dbuf_slices() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 15/20] drm/i915: Introduce skl_ddb_entry_for_slices() Ville Syrjala
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Put the code into a function with a descriptive name. Also relocate
the code a bit to help future work.
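
In other words (illustrative numbers only, matching the "slice size
is 1024" example already used in the comments):

    /* e.g. icl-ish numbers: ddb_size = 2048, 2 supported dbuf slices */
    intel_dbuf_size(dev_priv)       /* -> 2048 */
    intel_dbuf_slice_size(dev_priv) /* -> 2048 / 2 = 1024 */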

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 34 +++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 256622b603cd..9baf31e06011 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3809,6 +3809,24 @@ bool intel_can_enable_sagv(struct intel_atomic_state *state)
 	return true;
 }
 
+static int intel_dbuf_size(struct drm_i915_private *dev_priv)
+{
+	int ddb_size = INTEL_INFO(dev_priv)->ddb_size;
+
+	drm_WARN_ON(&dev_priv->drm, ddb_size == 0);
+
+	if (INTEL_GEN(dev_priv) < 11)
+		return ddb_size - 4; /* 4 blocks for bypass path allocation */
+
+	return ddb_size;
+}
+
+static int intel_dbuf_slice_size(struct drm_i915_private *dev_priv)
+{
+	return intel_dbuf_size(dev_priv) /
+		INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
+}
+
 /*
  * Calculate initial DBuf slice offset, based on slice size
  * and mask(i.e if slice size is 1024 and second slice is enabled
@@ -3830,17 +3848,6 @@ icl_get_first_dbuf_slice_offset(u32 dbuf_slice_mask,
 	return offset;
 }
 
-static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv)
-{
-	u16 ddb_size = INTEL_INFO(dev_priv)->ddb_size;
-
-	drm_WARN_ON(&dev_priv->drm, ddb_size == 0);
-
-	if (INTEL_GEN(dev_priv) < 11)
-		return ddb_size - 4; /* 4 blocks for bypass path allocation */
-
-	return ddb_size;
-}
 
 static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_state)
 {
@@ -3900,9 +3907,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 		return 0;
 	}
 
-	ddb_size = intel_get_ddb_size(dev_priv);
-
-	slice_size = ddb_size / INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
+	ddb_size = intel_dbuf_size(dev_priv);
+	slice_size = intel_dbuf_slice_size(dev_priv);
 
 	/*
 	 * If the state doesn't change the active CRTC's or there is no
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 15/20] drm/i915: Introduce skl_ddb_entry_for_slices()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (13 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 14/20] drm/i915: Introduce intel_dbuf_slice_size() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 16/20] drm/i915: Move pipe ddb entries into the dbuf state Ville Syrjala
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Generalize icl_get_first_dbuf_slice_offset() into something that
just gives us the start+end of the dbuf chunk covered by the
specified slices as a standard ddb entry. Initial idea was to use
it during readout as well, but we shall see.
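
A quick sketch of what the new helper returns, assuming a slice
size of 1024 as in the examples above:

    skl_ddb_entry_for_slices(dev_priv, BIT(DBUF_S1), &ddb);                /* ddb = {    0, 1024 } */
    skl_ddb_entry_for_slices(dev_priv, BIT(DBUF_S2), &ddb);                /* ddb = { 1024, 2048 } */
    skl_ddb_entry_for_slices(dev_priv, BIT(DBUF_S1) | BIT(DBUF_S2), &ddb); /* ddb = {    0, 2048 } */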

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 56 +++++++++++----------------------
 1 file changed, 18 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 9baf31e06011..94847225c84f 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3827,28 +3827,25 @@ static int intel_dbuf_slice_size(struct drm_i915_private *dev_priv)
 		INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
 }
 
-/*
- * Calculate initial DBuf slice offset, based on slice size
- * and mask(i.e if slice size is 1024 and second slice is enabled
- * offset would be 1024)
- */
-static unsigned int
-icl_get_first_dbuf_slice_offset(u32 dbuf_slice_mask,
-				u32 slice_size,
-				u32 ddb_size)
+static void
+skl_ddb_entry_for_slices(struct drm_i915_private *dev_priv, u8 slice_mask,
+			 struct skl_ddb_entry *ddb)
 {
-	unsigned int offset = 0;
+	int slice_size = intel_dbuf_slice_size(dev_priv);
 
-	if (!dbuf_slice_mask)
-		return 0;
+	if (!slice_mask) {
+		ddb->start = 0;
+		ddb->end = 0;
+		return;
+	}
 
-	offset = (ffs(dbuf_slice_mask) - 1) * slice_size;
+	ddb->start = (ffs(slice_mask) - 1) * slice_size;
+	ddb->end = fls(slice_mask) * slice_size;
 
-	WARN_ON(offset >= ddb_size);
-	return offset;
+	WARN_ON(ddb->start >= ddb->end);
+	WARN_ON(ddb->end > intel_dbuf_size(dev_priv));
 }
 
-
 static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_state)
 {
 	const struct drm_display_mode *adjusted_mode =
@@ -3889,12 +3886,10 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	const struct intel_dbuf_state *old_dbuf_state =
 		intel_atomic_get_old_dbuf_state(intel_state);
 	u8 active_pipes = new_dbuf_state->active_pipes;
-	u16 ddb_size;
+	struct skl_ddb_entry ddb_slices;
 	u32 ddb_range_size;
 	u32 i;
 	u32 dbuf_slice_mask;
-	u32 offset;
-	u32 slice_size;
 	u32 total_slice_mask;
 	u32 start, end;
 	int ret;
@@ -3907,9 +3902,6 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 		return 0;
 	}
 
-	ddb_size = intel_dbuf_size(dev_priv);
-	slice_size = intel_dbuf_slice_size(dev_priv);
-
 	/*
 	 * If the state doesn't change the active CRTC's or there is no
 	 * modeset request, then there's no need to recalculate;
@@ -3935,20 +3927,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 */
 	dbuf_slice_mask = skl_compute_dbuf_slices(for_crtc, active_pipes);
 
-	/*
-	 * Figure out at which DBuf slice we start, i.e if we start at Dbuf S2
-	 * and slice size is 1024, the offset would be 1024
-	 */
-	offset = icl_get_first_dbuf_slice_offset(dbuf_slice_mask,
-						 slice_size, ddb_size);
-
-	/*
-	 * Figure out total size of allowed DBuf slices, which is basically
-	 * a number of allowed slices for that pipe multiplied by slice size.
-	 * Inside of this
-	 * range ddb entries are still allocated in proportion to display width.
-	 */
-	ddb_range_size = hweight8(dbuf_slice_mask) * slice_size;
+	skl_ddb_entry_for_slices(dev_priv, dbuf_slice_mask, &ddb_slices);
+	ddb_range_size = skl_ddb_entry_size(&ddb_slices);
 
 	total_slice_mask = dbuf_slice_mask;
 	for_each_new_intel_crtc_in_state(intel_state, crtc, crtc_state, i) {
@@ -4005,8 +3985,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	start = ddb_range_size * weight_before_pipe / total_weight;
 	end = ddb_range_size * (weight_before_pipe + pipe_weight) / total_weight;
 
-	alloc->start = offset + start;
-	alloc->end = offset + end;
+	alloc->start = ddb_slices.start + start;
+	alloc->end = ddb_slices.start + end;
 
 	drm_dbg_kms(&dev_priv->drm,
 		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 16/20] drm/i915: Move pipe ddb entries into the dbuf state
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (14 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 15/20] drm/i915: Introduce skl_ddb_entry_for_slices() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-27 16:50   ` Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 17/20] drm/i915: Extract intel_crtc_dbuf_weights() Ville Syrjala
                   ` (7 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

The dbuf state will be where we collect all the inter-pipe dbuf
allocation stuff. Start by moving the actual per-pipe ddb entries
there.
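
Since all the per-pipe entries now live in one global object,
skl_commit_modeset_enables() can look at every pipe's old and new
allocation without digging into each crtc's private state; the
lookups simply become (sketch):

    /* before */
    entries[pipe] = old_crtc_state->wm.skl.ddb;
    /* after */
    entries[pipe] = old_dbuf_state->ddb[pipe];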

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  | 28 +++++++++++--------
 .../drm/i915/display/intel_display_types.h    |  1 -
 drivers/gpu/drm/i915/intel_pm.c               | 16 ++++-------
 drivers/gpu/drm/i915/intel_pm.h               |  4 +++
 4 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 6e96756f9a69..26e4462151a6 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15294,6 +15294,10 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
+	const struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
 	struct intel_crtc *crtc;
 	struct intel_crtc_state *old_crtc_state, *new_crtc_state;
 	struct skl_ddb_entry entries[I915_MAX_PIPES] = {};
@@ -15309,7 +15313,7 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 
 		/* ignore allocations for crtc's that have been turned off. */
 		if (!needs_modeset(new_crtc_state)) {
-			entries[pipe] = old_crtc_state->wm.skl.ddb;
+			entries[pipe] = old_dbuf_state->ddb[pipe];
 			update_pipes |= BIT(pipe);
 		} else {
 			modeset_pipes |= BIT(pipe);
@@ -15333,11 +15337,11 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 			if ((update_pipes & BIT(pipe)) == 0)
 				continue;
 
-			if (skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb,
+			if (skl_ddb_allocation_overlaps(&new_dbuf_state->ddb[pipe],
 							entries, num_pipes, pipe))
 				continue;
 
-			entries[pipe] = new_crtc_state->wm.skl.ddb;
+			entries[pipe] = new_dbuf_state->ddb[pipe];
 			update_pipes &= ~BIT(pipe);
 
 			intel_update_crtc(crtc, state, old_crtc_state,
@@ -15349,8 +15353,8 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 			 * then we need to wait for a vblank to pass for the
 			 * new ddb allocation to take effect.
 			 */
-			if (!skl_ddb_entry_equal(&new_crtc_state->wm.skl.ddb,
-						 &old_crtc_state->wm.skl.ddb) &&
+			if (!skl_ddb_entry_equal(&new_dbuf_state->ddb[pipe],
+						 &old_dbuf_state->ddb[pipe]) &&
 			    (update_pipes | modeset_pipes))
 				intel_wait_for_vblank(dev_priv, pipe);
 		}
@@ -15371,10 +15375,11 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 		    is_trans_port_sync_slave(new_crtc_state))
 			continue;
 
-		drm_WARN_ON(&dev_priv->drm, skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb,
-									entries, num_pipes, pipe));
+		drm_WARN_ON(&dev_priv->drm,
+			    skl_ddb_allocation_overlaps(&new_dbuf_state->ddb[pipe],
+							entries, num_pipes, pipe));
 
-		entries[pipe] = new_crtc_state->wm.skl.ddb;
+		entries[pipe] = new_dbuf_state->ddb[pipe];
 		modeset_pipes &= ~BIT(pipe);
 
 		if (is_trans_port_sync_mode(new_crtc_state)) {
@@ -15406,10 +15411,11 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 		if ((modeset_pipes & BIT(pipe)) == 0)
 			continue;
 
-		drm_WARN_ON(&dev_priv->drm, skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb,
-									entries, num_pipes, pipe));
+		drm_WARN_ON(&dev_priv->drm,
+			    skl_ddb_allocation_overlaps(&new_dbuf_state->ddb[pipe],
+							entries, num_pipes, pipe));
 
-		entries[pipe] = new_crtc_state->wm.skl.ddb;
+		entries[pipe] = new_dbuf_state->ddb[pipe];
 		modeset_pipes &= ~BIT(pipe);
 
 		intel_update_crtc(crtc, state, old_crtc_state, new_crtc_state);
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 165efa00d88b..0029d4c0d563 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -704,7 +704,6 @@ struct intel_crtc_wm_state {
 		struct {
 			/* gen9+ only needs 1-step wm programming */
 			struct skl_pipe_wm optimal;
-			struct skl_ddb_entry ddb;
 			struct skl_ddb_entry plane_ddb_y[I915_MAX_PLANES];
 			struct skl_ddb_entry plane_ddb_uv[I915_MAX_PLANES];
 		} skl;
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 94847225c84f..b33d99a30116 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3911,16 +3911,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 * grab _all_ crtc locks, including the one we currently hold.
 	 */
 	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
-	    !dev_priv->wm.distrust_bios_wm) {
-		/*
-		 * alloc may be cleared by clear_intel_crtc_state,
-		 * copy from old state to be sure
-		 *
-		 * FIXME get rid of this mess
-		 */
-		*alloc = to_intel_crtc_state(for_crtc->base.state)->wm.skl.ddb;
+	    !dev_priv->wm.distrust_bios_wm)
 		return 0;
-	}
 
 	/*
 	 * Get allowed DBuf slices for correspondent pipe and platform.
@@ -4528,7 +4520,11 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
+	struct intel_atomic_state *state =
+		to_intel_atomic_state(crtc_state->uapi.state);
+	struct intel_dbuf_state *dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe];
 	u16 alloc_size, start = 0;
 	u16 total[I915_MAX_PLANES] = {};
 	u16 uv_total[I915_MAX_PLANES] = {};
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index 8204d6a5526c..d9f84d93280d 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -8,8 +8,10 @@
 
 #include <linux/types.h>
 
+#include "display/intel_display.h"
 #include "display/intel_global_state.h"
 
+#include "i915_drv.h"
 #include "i915_reg.h"
 
 struct drm_device;
@@ -63,6 +65,8 @@ bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable);
 struct intel_dbuf_state {
 	struct intel_global_state base;
 
+	struct skl_ddb_entry ddb[I915_MAX_PIPES];
+
 	u8 enabled_slices;
 	u8 active_pipes;
 };
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 17/20] drm/i915: Extract intel_crtc_dbuf_weights()
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (15 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 16/20] drm/i915: Move pipe ddb entries into the dbuf state Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 18/20] drm/i915: Encapsulate dbuf state handling harder Ville Syrjala
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Extract the code to calculate the weights used to chunk up the dbuf
between pipes. There's still extra stuff in there that shouldn't be
there and must be moved out, but that requires a bit more state to
be tracked in the dbuf state.
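
Informally the three outputs of the new helper are:

    weight_total = sum of the weights of all pipes sharing this pipe's slices
    weight_start = sum of the weights of such pipes with pipe < for_pipe
    weight_end   = weight_start + the weight of for_pipe

so the pipe's chunk of the shared ddb range is simply

    [range * weight_start / weight_total, range * weight_end / weight_total)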

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 145 ++++++++++++++++++++------------
 1 file changed, 89 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index b33d99a30116..085043528f80 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3868,62 +3868,35 @@ static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_st
 static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc,
 				  u8 active_pipes);
 
-static int
-skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
-				   const struct intel_crtc_state *crtc_state,
-				   const u64 total_data_rate,
-				   struct skl_ddb_entry *alloc, /* out */
-				   int *num_active /* out */)
+static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
+				   struct intel_crtc *for_crtc,
+				   unsigned int *weight_start,
+				   unsigned int *weight_end,
+				   unsigned int *weight_total)
 {
-	struct drm_atomic_state *state = crtc_state->uapi.state;
-	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
-	struct intel_crtc *for_crtc = to_intel_crtc(crtc_state->uapi.crtc);
-	struct intel_crtc *crtc;
-	unsigned int pipe_weight = 0, total_weight = 0, weight_before_pipe = 0;
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
+	struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	u8 active_pipes = new_dbuf_state->active_pipes;
 	enum pipe for_pipe = for_crtc->pipe;
-	struct intel_dbuf_state *new_dbuf_state =
-		intel_atomic_get_new_dbuf_state(intel_state);
-	const struct intel_dbuf_state *old_dbuf_state =
-		intel_atomic_get_old_dbuf_state(intel_state);
-	u8 active_pipes = new_dbuf_state->active_pipes;
-	struct skl_ddb_entry ddb_slices;
-	u32 ddb_range_size;
-	u32 i;
-	u32 dbuf_slice_mask;
-	u32 total_slice_mask;
-	u32 start, end;
-	int ret;
-
-	*num_active = hweight8(active_pipes);
-
-	if (!crtc_state->hw.active) {
-		alloc->start = 0;
-		alloc->end = 0;
-		return 0;
-	}
-
-	/*
-	 * If the state doesn't change the active CRTC's or there is no
-	 * modeset request, then there's no need to recalculate;
-	 * the existing pipe allocation limits should remain unchanged.
-	 * Note that we're safe from racing commits since any racing commit
-	 * that changes the active CRTC list or do modeset would need to
-	 * grab _all_ crtc locks, including the one we currently hold.
-	 */
-	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
-	    !dev_priv->wm.distrust_bios_wm)
-		return 0;
+	const struct intel_crtc_state *crtc_state;
+	struct intel_crtc *crtc;
+	u8 dbuf_slice_mask;
+	u8 total_slice_mask;
+	int i, ret;
 
 	/*
 	 * Get allowed DBuf slices for correspondent pipe and platform.
 	 */
 	dbuf_slice_mask = skl_compute_dbuf_slices(for_crtc, active_pipes);
-
-	skl_ddb_entry_for_slices(dev_priv, dbuf_slice_mask, &ddb_slices);
-	ddb_range_size = skl_ddb_entry_size(&ddb_slices);
-
 	total_slice_mask = dbuf_slice_mask;
-	for_each_new_intel_crtc_in_state(intel_state, crtc, crtc_state, i) {
+
+	*weight_start = 0;
+	*weight_end = 0;
+	*weight_total = 0;
+
+	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
 		enum pipe pipe = crtc->pipe;
 		unsigned int weight;
 		u8 pipe_dbuf_slice_mask;
@@ -3954,12 +3927,14 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 			continue;
 
 		weight = intel_crtc_ddb_weight(crtc_state);
-		total_weight += weight;
+		*weight_total += weight;
 
-		if (pipe < for_pipe)
-			weight_before_pipe += weight;
-		else if (pipe == for_pipe)
-			pipe_weight = weight;
+		if (pipe < for_pipe) {
+			*weight_start += weight;
+			*weight_end += weight;
+		} else if (pipe == for_pipe) {
+			*weight_end += weight;
+		}
 	}
 
 	/*
@@ -3974,15 +3949,73 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 			return ret;
 	}
 
-	start = ddb_range_size * weight_before_pipe / total_weight;
-	end = ddb_range_size * (weight_before_pipe + pipe_weight) / total_weight;
+	return 0;
+}
+
+static int
+skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
+				   const struct intel_crtc_state *crtc_state,
+				   const u64 total_data_rate,
+				   struct skl_ddb_entry *alloc, /* out */
+				   int *num_active /* out */)
+{
+	struct intel_atomic_state *state =
+		to_intel_atomic_state(crtc_state->uapi.state);
+	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+	unsigned int weight_start, weight_end, weight_total;
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
+	struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	u8 active_pipes = new_dbuf_state->active_pipes;
+	struct skl_ddb_entry ddb_slices;
+	u32 ddb_range_size;
+	u32 dbuf_slice_mask;
+	u32 start, end;
+	int ret;
+
+	*num_active = hweight8(active_pipes);
+
+	if (!crtc_state->hw.active) {
+		alloc->start = 0;
+		alloc->end = 0;
+		return 0;
+	}
+
+	/*
+	 * If the state doesn't change the active CRTC's or there is no
+	 * modeset request, then there's no need to recalculate;
+	 * the existing pipe allocation limits should remain unchanged.
+	 * Note that we're safe from racing commits since any racing commit
+	 * that changes the active CRTC list or do modeset would need to
+	 * grab _all_ crtc locks, including the one we currently hold.
+	 */
+	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
+	    !dev_priv->wm.distrust_bios_wm)
+		return 0;
+
+	/*
+	 * Get allowed DBuf slices for correspondent pipe and platform.
+	 */
+	dbuf_slice_mask = skl_compute_dbuf_slices(crtc, active_pipes);
+
+	skl_ddb_entry_for_slices(dev_priv, dbuf_slice_mask, &ddb_slices);
+	ddb_range_size = skl_ddb_entry_size(&ddb_slices);
+
+	ret = intel_crtc_dbuf_weights(state, crtc,
+				      &weight_start, &weight_end, &weight_total);
+	if (ret)
+		return ret;
+
+	start = ddb_range_size * weight_start / weight_total;
+	end = ddb_range_size * weight_end / weight_total;
 
 	alloc->start = ddb_slices.start + start;
 	alloc->end = ddb_slices.start + end;
 
 	drm_dbg_kms(&dev_priv->drm,
 		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
-		    for_crtc->base.base.id, for_crtc->base.name,
+		    crtc->base.base.id, crtc->base.name,
 		    dbuf_slice_mask, alloc->start, alloc->end, active_pipes);
 
 	return 0;
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 18/20] drm/i915: Encapsulate dbuf state handling harder
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (16 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 17/20] drm/i915: Extract intel_crtc_dbuf_weights() Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2021-01-21 12:55   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 19/20] drm/i915: Do a bit more initial readout for dbuf Ville Syrjala
                   ` (5 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

In order to make the dbuf state computation less fragile
let's make it stand on its own feet by not requiring someone
to peek into a crystal ball ahead of time to figure out
which pipes need to be added to the state under which potential
future conditions. Instead we compute each piece of the state
as we go along, and if any fallout occurs that affects more than
the current set of pipes we add the affected pipes to the state
naturally.

That requires that we track a few extra things in the global
dbuf state: dbuf slices for each pipe, and the weight each
pipe has when distributing the same set of slice(s) between
multiple pipes. Easy enough.

We do need to follow a somewhat careful sequence of computations
though as there are several steps involved in cooking up the dbuf
state. Though we could avoid some of that by computing more things
on demand instead of relying on an earlier step of the algorithm to
have filled it out. I think the end result is still reasonable
as the entire sequence is pretty much consolidated into a single
function instead of being spread around all over.

The rough sequence is this:
1. calculate active_pipes
2. calculate dbuf slices for every pipe
3. calculate total enabled slices
4. calculate new dbuf weights for any crtc in the state
5. calculate new ddb entry for every pipe based on the sets of
   slices and weights, and add any affected crtc to the state
6. calculate new plane ddb entries for all crtcs in the state,
   and add any affected plane to the state so that we'll perform
   the requisite hw reprogramming

And as a nice bonus we get to throw dev_priv->wm.distrust_bios_wm
out the window.
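
Expressed as pseudocode the .atomic_check() side of this becomes
roughly the following (just a sketch of the sequence above, not
the exact code):

    /* 1-3: global inputs */
    new_dbuf_state->active_pipes = intel_calc_active_pipes(...);
    for each pipe:
        new_dbuf_state->slices[pipe] = skl_compute_dbuf_slices(crtc, active_pipes);
    new_dbuf_state->enabled_slices = union of all slices[pipe];

    /* 4-5: per-pipe ddb entries, adding any crtc whose entry changed */
    for each crtc in the state:
        new_dbuf_state->weight[pipe] = intel_crtc_ddb_weight(crtc_state);
    for each pipe:
        skl_crtc_allocate_ddb(state, crtc); /* adds the crtc if ddb[pipe] changed */

    /* 6: per-plane ddb, adding affected planes for hw reprogramming */
    for each crtc in the state:
        skl_allocate_plane_ddb(state, crtc);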

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  15 -
 .../drm/i915/display/intel_display_debugfs.c  |   1 -
 drivers/gpu/drm/i915/i915_drv.h               |   9 -
 drivers/gpu/drm/i915/intel_pm.c               | 356 +++++++-----------
 drivers/gpu/drm/i915/intel_pm.h               |   2 +
 5 files changed, 138 insertions(+), 245 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 26e4462151a6..e3df43f3932d 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -14856,20 +14856,6 @@ static int intel_atomic_check(struct drm_device *dev,
 	if (new_cdclk_state && new_cdclk_state->force_min_cdclk_changed)
 		any_ms = true;
 
-	/*
-	 * distrust_bios_wm will force a full dbuf recomputation
-	 * but the hardware state will only get updated accordingly
-	 * if state->modeset==true. Hence distrust_bios_wm==true &&
-	 * state->modeset==false is an invalid combination which
-	 * would cause the hardware and software dbuf state to get
-	 * out of sync. We must prevent that.
-	 *
-	 * FIXME clean up this mess and introduce better
-	 * state tracking for dbuf.
-	 */
-	if (dev_priv->wm.distrust_bios_wm)
-		any_ms = true;
-
 	if (any_ms) {
 		ret = intel_modeset_checks(state);
 		if (ret)
@@ -15769,7 +15755,6 @@ static int intel_atomic_commit(struct drm_device *dev,
 		intel_runtime_pm_put(&dev_priv->runtime_pm, state->wakeref);
 		return ret;
 	}
-	dev_priv->wm.distrust_bios_wm = false;
 	intel_shared_dpll_swap_state(state);
 	intel_atomic_track_fbs(state);
 
diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
index 46954cc7b6c0..b505de6287e6 100644
--- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
+++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
@@ -998,7 +998,6 @@ static ssize_t i915_ipc_status_write(struct file *file, const char __user *ubuf,
 		if (!dev_priv->ipc_enabled && enable)
 			drm_info(&dev_priv->drm,
 				 "Enabling IPC: WM will be proper only after next commit\n");
-		dev_priv->wm.distrust_bios_wm = true;
 		dev_priv->ipc_enabled = enable;
 		intel_enable_ipc(dev_priv);
 	}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d03c84f373e6..317e6a468e2e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1183,15 +1183,6 @@ struct drm_i915_private {
 		 * crtc_state->wm.need_postvbl_update.
 		 */
 		struct mutex wm_mutex;
-
-		/*
-		 * Set during HW readout of watermarks/DDB.  Some platforms
-		 * need to know when we're still using BIOS-provided values
-		 * (which we don't fully trust).
-		 *
-		 * FIXME get rid of this.
-		 */
-		bool distrust_bios_wm;
 	} wm;
 
 	struct dram_info {
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 085043528f80..c11508fb3fac 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3865,56 +3865,22 @@ static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_st
 	return hdisplay;
 }
 
-static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc,
-				  u8 active_pipes);
-
-static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
-				   struct intel_crtc *for_crtc,
-				   unsigned int *weight_start,
-				   unsigned int *weight_end,
-				   unsigned int *weight_total)
+static void intel_crtc_dbuf_weights(const struct intel_dbuf_state *dbuf_state,
+				    enum pipe for_pipe,
+				    unsigned int *weight_start,
+				    unsigned int *weight_end,
+				    unsigned int *weight_total)
 {
-	const struct intel_dbuf_state *old_dbuf_state =
-		intel_atomic_get_old_dbuf_state(state);
-	struct intel_dbuf_state *new_dbuf_state =
-		intel_atomic_get_new_dbuf_state(state);
-	u8 active_pipes = new_dbuf_state->active_pipes;
-	enum pipe for_pipe = for_crtc->pipe;
-	const struct intel_crtc_state *crtc_state;
-	struct intel_crtc *crtc;
-	u8 dbuf_slice_mask;
-	u8 total_slice_mask;
-	int i, ret;
-
-	/*
-	 * Get allowed DBuf slices for correspondent pipe and platform.
-	 */
-	dbuf_slice_mask = skl_compute_dbuf_slices(for_crtc, active_pipes);
-	total_slice_mask = dbuf_slice_mask;
+	struct drm_i915_private *dev_priv =
+		to_i915(dbuf_state->base.state->base.dev);
+	enum pipe pipe;
 
 	*weight_start = 0;
 	*weight_end = 0;
 	*weight_total = 0;
 
-	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
-		enum pipe pipe = crtc->pipe;
-		unsigned int weight;
-		u8 pipe_dbuf_slice_mask;
-
-		if (!crtc_state->hw.active)
-			continue;
-
-		pipe_dbuf_slice_mask =
-			skl_compute_dbuf_slices(crtc, active_pipes);
-
-		/*
-		 * According to BSpec pipe can share one dbuf slice with another
-		 * pipes or pipe can use multiple dbufs, in both cases we
-		 * account for other pipes only if they have exactly same mask.
-		 * However we need to account how many slices we should enable
-		 * in total.
-		 */
-		total_slice_mask |= pipe_dbuf_slice_mask;
+	for_each_pipe(dev_priv, pipe) {
+		int weight = dbuf_state->weight[pipe];
 
 		/*
 		 * Do not account pipes using other slice sets
@@ -3923,12 +3889,10 @@ static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
 		 * i.e no partial intersection), so it is enough to check for
 		 * equality for now.
 		 */
-		if (dbuf_slice_mask != pipe_dbuf_slice_mask)
+		if (dbuf_state->slices[pipe] != dbuf_state->slices[for_pipe])
 			continue;
 
-		weight = intel_crtc_ddb_weight(crtc_state);
 		*weight_total += weight;
-
 		if (pipe < for_pipe) {
 			*weight_start += weight;
 			*weight_end += weight;
@@ -3936,87 +3900,65 @@ static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
 			*weight_end += weight;
 		}
 	}
-
-	/*
-	 * FIXME: For now we always enable slice S1 as per
-	 * the Bspec display initialization sequence.
-	 */
-	new_dbuf_state->enabled_slices = total_slice_mask | BIT(DBUF_S1);
-
-	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
-		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
 }
 
 static int
-skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
-				   const struct intel_crtc_state *crtc_state,
-				   const u64 total_data_rate,
-				   struct skl_ddb_entry *alloc, /* out */
-				   int *num_active /* out */)
+skl_crtc_allocate_ddb(struct intel_atomic_state *state, struct intel_crtc *crtc)
 {
-	struct intel_atomic_state *state =
-		to_intel_atomic_state(crtc_state->uapi.state);
-	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
-	unsigned int weight_start, weight_end, weight_total;
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	unsigned int weight_total, weight_start, weight_end;
 	const struct intel_dbuf_state *old_dbuf_state =
 		intel_atomic_get_old_dbuf_state(state);
 	struct intel_dbuf_state *new_dbuf_state =
 		intel_atomic_get_new_dbuf_state(state);
-	u8 active_pipes = new_dbuf_state->active_pipes;
+	struct intel_crtc_state *crtc_state;
 	struct skl_ddb_entry ddb_slices;
+	enum pipe pipe = crtc->pipe;
 	u32 ddb_range_size;
 	u32 dbuf_slice_mask;
 	u32 start, end;
 	int ret;
 
-	*num_active = hweight8(active_pipes);
-
-	if (!crtc_state->hw.active) {
-		alloc->start = 0;
-		alloc->end = 0;
-		return 0;
+	if (new_dbuf_state->weight[pipe] == 0) {
+		new_dbuf_state->ddb[pipe].start = 0;
+		new_dbuf_state->ddb[pipe].end = 0;
+		goto out;
 	}
 
-	/*
-	 * If the state doesn't change the active CRTC's or there is no
-	 * modeset request, then there's no need to recalculate;
-	 * the existing pipe allocation limits should remain unchanged.
-	 * Note that we're safe from racing commits since any racing commit
-	 * that changes the active CRTC list or do modeset would need to
-	 * grab _all_ crtc locks, including the one we currently hold.
-	 */
-	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
-	    !dev_priv->wm.distrust_bios_wm)
-		return 0;
-
-	/*
-	 * Get allowed DBuf slices for correspondent pipe and platform.
-	 */
-	dbuf_slice_mask = skl_compute_dbuf_slices(crtc, active_pipes);
+	dbuf_slice_mask = new_dbuf_state->slices[pipe];
 
 	skl_ddb_entry_for_slices(dev_priv, dbuf_slice_mask, &ddb_slices);
 	ddb_range_size = skl_ddb_entry_size(&ddb_slices);
 
-	ret = intel_crtc_dbuf_weights(state, crtc,
-				      &weight_start, &weight_end, &weight_total);
-	if (ret)
-		return ret;
+	intel_crtc_dbuf_weights(new_dbuf_state, pipe,
+				&weight_start, &weight_end, &weight_total);
 
 	start = ddb_range_size * weight_start / weight_total;
 	end = ddb_range_size * weight_end / weight_total;
 
-	alloc->start = ddb_slices.start + start;
-	alloc->end = ddb_slices.start + end;
+	new_dbuf_state->ddb[pipe].start = ddb_slices.start + start;
+	new_dbuf_state->ddb[pipe].end = ddb_slices.start + end;
+
+out:
+	if (skl_ddb_entry_equal(&old_dbuf_state->ddb[pipe],
+				&new_dbuf_state->ddb[pipe]))
+		return 0;
+
+	ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
+	if (ret)
+		return ret;
+
+	crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
+	if (IS_ERR(crtc_state))
+		return PTR_ERR(crtc_state);
 
 	drm_dbg_kms(&dev_priv->drm,
-		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
+		    "[CRTC:%d:%s] dbuf slices 0x%x -> 0x%x, ddb (%d - %d) -> (%d - %d), active pipes 0x%x -> 0x%x\n",
 		    crtc->base.base.id, crtc->base.name,
-		    dbuf_slice_mask, alloc->start, alloc->end, active_pipes);
+		    old_dbuf_state->slices[pipe], new_dbuf_state->slices[pipe],
+		    old_dbuf_state->ddb[pipe].start, old_dbuf_state->ddb[pipe].end,
+		    new_dbuf_state->ddb[pipe].start, new_dbuf_state->ddb[pipe].end,
+		    old_dbuf_state->active_pipes, new_dbuf_state->active_pipes);
 
 	return 0;
 }
@@ -4549,35 +4491,32 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
 }
 
 static int
-skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
+skl_allocate_plane_ddb(struct intel_atomic_state *state,
+		       struct intel_crtc *crtc)
 {
-	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	struct intel_atomic_state *state =
-		to_intel_atomic_state(crtc_state->uapi.state);
-	struct intel_dbuf_state *dbuf_state =
+	struct intel_crtc_state *crtc_state =
+		intel_atomic_get_new_crtc_state(state, crtc);
+	const struct intel_dbuf_state *dbuf_state =
 		intel_atomic_get_new_dbuf_state(state);
-	struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe];
+	const struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe];
+	int num_active = hweight8(dbuf_state->active_pipes);
 	u16 alloc_size, start = 0;
 	u16 total[I915_MAX_PLANES] = {};
 	u16 uv_total[I915_MAX_PLANES] = {};
 	u64 total_data_rate;
 	enum plane_id plane_id;
-	int num_active;
 	u64 plane_data_rate[I915_MAX_PLANES] = {};
 	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
 	u32 blocks;
 	int level;
-	int ret;
 
 	/* Clear the partitioning for disabled planes. */
 	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y));
 	memset(crtc_state->wm.skl.plane_ddb_uv, 0, sizeof(crtc_state->wm.skl.plane_ddb_uv));
 
-	if (!crtc_state->hw.active) {
-		alloc->start = alloc->end = 0;
+	if (!crtc_state->hw.active)
 		return 0;
-	}
 
 	if (INTEL_GEN(dev_priv) >= 11)
 		total_data_rate =
@@ -4589,13 +4528,6 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 							 plane_data_rate,
 							 uv_plane_data_rate);
 
-
-	ret = skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
-						 total_data_rate,
-						 alloc, &num_active);
-	if (ret)
-		return ret;
-
 	alloc_size = skl_ddb_entry_size(alloc);
 	if (alloc_size == 0)
 		return 0;
@@ -5475,39 +5407,114 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state,
 	return 0;
 }
 
+static u8 intel_dbuf_enabled_slices(const struct intel_dbuf_state *dbuf_state)
+{
+	struct drm_i915_private *dev_priv = to_i915(dbuf_state->base.state->base.dev);
+	u8 enabled_slices;
+	enum pipe pipe;
+
+	/*
+	 * FIXME: For now we always enable slice S1 as per
+	 * the Bspec display initialization sequence.
+	 */
+	enabled_slices = BIT(DBUF_S1);
+
+	for_each_pipe(dev_priv, pipe)
+		enabled_slices |= dbuf_state->slices[pipe];
+
+	return enabled_slices;
+}
+
 static int
 skl_compute_ddb(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
 	const struct intel_dbuf_state *old_dbuf_state;
-	const struct intel_dbuf_state *new_dbuf_state;
+	struct intel_dbuf_state *new_dbuf_state = NULL;
 	const struct intel_crtc_state *old_crtc_state;
 	struct intel_crtc_state *new_crtc_state;
 	struct intel_crtc *crtc;
 	int ret, i;
 
-	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
-					    new_crtc_state, i) {
-		ret = skl_allocate_pipe_ddb(new_crtc_state);
+	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
+		new_dbuf_state = intel_atomic_get_dbuf_state(state);
+		if (IS_ERR(new_dbuf_state))
+			return PTR_ERR(new_dbuf_state);
+
+		old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
+		break;
+	}
+
+	if (!new_dbuf_state)
+		return 0;
+
+	new_dbuf_state->active_pipes =
+		intel_calc_active_pipes(state, old_dbuf_state->active_pipes);
+
+	if (old_dbuf_state->active_pipes != new_dbuf_state->active_pipes) {
+		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
 		if (ret)
 			return ret;
+	}
 
-		ret = skl_ddb_add_affected_planes(old_crtc_state,
-						  new_crtc_state);
+	for_each_intel_crtc(&dev_priv->drm, crtc) {
+		enum pipe pipe = crtc->pipe;
+
+		new_dbuf_state->slices[pipe] =
+			skl_compute_dbuf_slices(crtc, new_dbuf_state->active_pipes);
+
+		if (old_dbuf_state->slices[pipe] == new_dbuf_state->slices[pipe])
+			continue;
+
+		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
 		if (ret)
 			return ret;
 	}
 
-	old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
-	new_dbuf_state = intel_atomic_get_new_dbuf_state(state);
+	new_dbuf_state->enabled_slices = intel_dbuf_enabled_slices(new_dbuf_state);
+
+	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
+		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
+		if (ret)
+			return ret;
 
-	if (new_dbuf_state &&
-	    new_dbuf_state->enabled_slices != old_dbuf_state->enabled_slices)
 		drm_dbg_kms(&dev_priv->drm,
 			    "Enabled dbuf slices 0x%x -> 0x%x (out of %d dbuf slices)\n",
 			    old_dbuf_state->enabled_slices,
 			    new_dbuf_state->enabled_slices,
 			    INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
+	}
+
+	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
+		enum pipe pipe = crtc->pipe;
+
+		new_dbuf_state->weight[crtc->pipe] = intel_crtc_ddb_weight(new_crtc_state);
+
+		if (old_dbuf_state->weight[pipe] == new_dbuf_state->weight[pipe])
+			continue;
+
+		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
+		if (ret)
+			return ret;
+	}
+
+	for_each_intel_crtc(&dev_priv->drm, crtc) {
+		ret = skl_crtc_allocate_ddb(state, crtc);
+		if (ret)
+			return ret;
+	}
+
+	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
+					    new_crtc_state, i) {
+		ret = skl_allocate_plane_ddb(state, crtc);
+		if (ret)
+			return ret;
+
+		ret = skl_ddb_add_affected_planes(old_crtc_state,
+						  new_crtc_state);
+		if (ret)
+			return ret;
+	}
 
 	return 0;
 }
@@ -5636,83 +5643,6 @@ skl_print_wm_changes(struct intel_atomic_state *state)
 	}
 }
 
-static int intel_add_affected_pipes(struct intel_atomic_state *state,
-				    u8 pipe_mask)
-{
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	struct intel_crtc *crtc;
-
-	for_each_intel_crtc(&dev_priv->drm, crtc) {
-		struct intel_crtc_state *crtc_state;
-
-		if ((pipe_mask & BIT(crtc->pipe)) == 0)
-			continue;
-
-		crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
-		if (IS_ERR(crtc_state))
-			return PTR_ERR(crtc_state);
-	}
-
-	return 0;
-}
-
-static int
-skl_ddb_add_affected_pipes(struct intel_atomic_state *state)
-{
-	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
-	struct intel_crtc_state *crtc_state;
-	struct intel_crtc *crtc;
-	int i, ret;
-
-	if (dev_priv->wm.distrust_bios_wm) {
-		/*
-		 * skl_ddb_get_pipe_allocation_limits() currently requires
-		 * all active pipes to be included in the state so that
-		 * it can redistribute the dbuf among them, and it really
-		 * wants to recompute things when distrust_bios_wm is set
-		 * so we add all the pipes to the state.
-		 */
-		ret = intel_add_affected_pipes(state, ~0);
-		if (ret)
-			return ret;
-	}
-
-	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
-		struct intel_dbuf_state *new_dbuf_state;
-		const struct intel_dbuf_state *old_dbuf_state;
-
-		new_dbuf_state = intel_atomic_get_dbuf_state(state);
-		if (IS_ERR(new_dbuf_state))
-			return ret;
-
-		old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
-
-		new_dbuf_state->active_pipes =
-			intel_calc_active_pipes(state, old_dbuf_state->active_pipes);
-
-		if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes)
-			break;
-
-		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
-		if (ret)
-			return ret;
-
-		/*
-		 * skl_ddb_get_pipe_allocation_limits() currently requires
-		 * all active pipes to be included in the state so that
-		 * it can redistribute the dbuf among them.
-		 */
-		ret = intel_add_affected_pipes(state,
-					       new_dbuf_state->active_pipes);
-		if (ret)
-			return ret;
-
-		break;
-	}
-
-	return 0;
-}
-
 /*
  * To make sure the cursor watermark registers are always consistent
  * with our computed state the following scenario needs special
@@ -5781,15 +5711,6 @@ skl_compute_wm(struct intel_atomic_state *state)
 	struct intel_crtc_state *old_crtc_state;
 	int ret, i;
 
-	ret = skl_ddb_add_affected_pipes(state);
-	if (ret)
-		return ret;
-
-	/*
-	 * Calculate WM's for all pipes that are part of this transaction.
-	 * Note that skl_ddb_add_affected_pipes may have added more CRTC's that
-	 * weren't otherwise being modified if pipe allocations had to change.
-	 */
 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
 					    new_crtc_state, i) {
 		ret = skl_build_pipe_wm(new_crtc_state);
@@ -5944,11 +5865,6 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
 
 		skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
 	}
-
-	if (dev_priv->active_pipes) {
-		/* Fully recompute DDB on first atomic commit */
-		dev_priv->wm.distrust_bios_wm = true;
-	}
 }
 
 static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc)
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index d9f84d93280d..3a82b8046f10 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -66,6 +66,8 @@ struct intel_dbuf_state {
 	struct intel_global_state base;
 
 	struct skl_ddb_entry ddb[I915_MAX_PIPES];
+	unsigned int weight[I915_MAX_PIPES];
+	u8 slices[I915_MAX_PIPES];
 
 	u8 enabled_slices;
 	u8 active_pipes;
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread
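
For reference, the weight-based split done by skl_crtc_allocate_ddb()
above can be illustrated in isolation. This is a minimal userspace
sketch, not driver code; the ddb size and the pipe weights (hdisplay
values) below are made-up examples:

#include <stdio.h>

struct ddb_entry { unsigned int start, end; };

/* same integer math as the patch: the range scales with the weights */
static struct ddb_entry split_ddb(unsigned int ddb_size,
				  unsigned int weight_start,
				  unsigned int weight_end,
				  unsigned int weight_total)
{
	struct ddb_entry e;

	e.start = ddb_size * weight_start / weight_total;
	e.end = ddb_size * weight_end / weight_total;

	return e;
}

int main(void)
{
	/* two pipes sharing a 1024 block range, weights 1920 and 2560 */
	struct ddb_entry a = split_ddb(1024, 0, 1920, 1920 + 2560);
	struct ddb_entry b = split_ddb(1024, 1920, 1920 + 2560, 1920 + 2560);

	/* prints "pipe A: 0-438, pipe B: 438-1024" */
	printf("pipe A: %u-%u, pipe B: %u-%u\n", a.start, a.end, b.start, b.end);
	return 0;
}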

* [Intel-gfx] [PATCH v2 19/20] drm/i915: Do a bit more initial readout for dbuf
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (17 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 18/20] drm/i915: Encapsulate dbuf state handling harder Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2021-01-21 12:57   ` Lisovskiy, Stanislav
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 20/20] drm/i915: Check slice mask for holes Ville Syrjala
                   ` (4 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Read out the dbuf related hardware state during driver init/resume and
stick it into our dbuf state.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c |  4 --
 drivers/gpu/drm/i915/intel_pm.c              | 48 +++++++++++++++++++-
 2 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index e3df43f3932d..21ad1adcc1eb 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -17475,14 +17475,10 @@ void intel_modeset_init_hw(struct drm_i915_private *i915)
 {
 	struct intel_cdclk_state *cdclk_state =
 		to_intel_cdclk_state(i915->cdclk.obj.state);
-	struct intel_dbuf_state *dbuf_state =
-		to_intel_dbuf_state(i915->dbuf.obj.state);
 
 	intel_update_cdclk(i915);
 	intel_dump_cdclk_config(&i915->cdclk.hw, "Current CDCLK");
 	cdclk_state->logical = cdclk_state->actual = i915->cdclk.hw;
-
-	dbuf_state->enabled_slices = i915->dbuf.enabled_slices;
 }
 
 static int sanitize_watermarks_add_affected(struct drm_atomic_state *state)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index c11508fb3fac..7edac506d343 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5363,6 +5363,18 @@ static inline bool skl_ddb_entries_overlap(const struct skl_ddb_entry *a,
 	return a->start < b->end && b->start < a->end;
 }
 
+static void skl_ddb_entry_union(struct skl_ddb_entry *a,
+				const struct skl_ddb_entry *b)
+{
+	if (a->end && b->end) {
+		a->start = min(a->start, b->start);
+		a->end = max(a->end, b->end);
+	} else if (b->end) {
+		a->start = b->start;
+		a->end = b->end;
+	}
+}
+
 bool skl_ddb_allocation_overlaps(const struct skl_ddb_entry *ddb,
 				 const struct skl_ddb_entry *entries,
 				 int num_entries, int ignore_idx)
@@ -5857,14 +5869,46 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
 
 void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
 {
+	struct intel_dbuf_state *dbuf_state =
+		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
 	struct intel_crtc *crtc;
-	struct intel_crtc_state *crtc_state;
 
 	for_each_intel_crtc(&dev_priv->drm, crtc) {
-		crtc_state = to_intel_crtc_state(crtc->base.state);
+		struct intel_crtc_state *crtc_state =
+			to_intel_crtc_state(crtc->base.state);
+		enum pipe pipe = crtc->pipe;
+		enum plane_id plane_id;
 
 		skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
+
+		memset(&dbuf_state->ddb[pipe], 0, sizeof(dbuf_state->ddb[pipe]));
+
+		for_each_plane_id_on_crtc(crtc, plane_id) {
+			struct skl_ddb_entry *ddb_y =
+				&crtc_state->wm.skl.plane_ddb_y[plane_id];
+			struct skl_ddb_entry *ddb_uv =
+				&crtc_state->wm.skl.plane_ddb_uv[plane_id];
+
+			skl_ddb_get_hw_plane_state(dev_priv, crtc->pipe,
+						   plane_id, ddb_y, ddb_uv);
+
+			skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_y);
+			skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_uv);
+		}
+
+		dbuf_state->slices[pipe] =
+			skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes);
+
+		dbuf_state->weight[pipe] = intel_crtc_ddb_weight(crtc_state);
+
+		drm_dbg_kms(&dev_priv->drm,
+			    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
+			    crtc->base.base.id, crtc->base.name,
+			    dbuf_state->slices[pipe], dbuf_state->ddb[pipe].start,
+			    dbuf_state->ddb[pipe].end, dbuf_state->active_pipes);
 	}
+
+	dbuf_state->enabled_slices = dev_priv->dbuf.enabled_slices;
 }
 
 static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc)
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread
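
As a side note, the skl_ddb_entry_union() helper added above is a plain
interval union where an entry with end == 0 counts as empty. A small
standalone sketch with made-up allocations (not taken from any real
readout):

#include <stdio.h>

struct skl_ddb_entry { unsigned short start, end; };

static void ddb_entry_union(struct skl_ddb_entry *a,
			    const struct skl_ddb_entry *b)
{
	if (a->end && b->end) {
		a->start = a->start < b->start ? a->start : b->start;
		a->end = a->end > b->end ? a->end : b->end;
	} else if (b->end) {
		*a = *b;		/* a was empty, just take b */
	}
}

int main(void)
{
	struct skl_ddb_entry pipe = { 0, 0 };	  /* starts out empty */
	struct skl_ddb_entry y = { 0, 512 };	  /* example plane Y ddb */
	struct skl_ddb_entry uv = { 512, 1024 };  /* example plane UV ddb */

	ddb_entry_union(&pipe, &y);
	ddb_entry_union(&pipe, &uv);

	/* prints "pipe ddb: 0-1024" */
	printf("pipe ddb: %u-%u\n", pipe.start, pipe.end);
	return 0;
}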

* [Intel-gfx] [PATCH v2 20/20] drm/i915: Check slice mask for holes
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (18 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 19/20] drm/i915: Do a bit more initial readout for dbuf Ville Syrjala
@ 2020-02-25 17:11 ` Ville Syrjala
  2020-02-25 17:47   ` Lisovskiy, Stanislav
  2020-02-26 18:04 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Proper dbuf global state (rev2) Patchwork
                   ` (3 subsequent siblings)
  23 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjala @ 2020-02-25 17:11 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Make sure the dbuf slice mask for any individual pipe has no
holes between the slices.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/intel_pm.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 7edac506d343..fa39ab0b1223 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3827,6 +3827,14 @@ static int intel_dbuf_slice_size(struct drm_i915_private *dev_priv)
 		INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
 }
 
+static bool bitmask_is_contiguous(unsigned int bitmask)
+{
+	if (bitmask)
+		bitmask >>= ffs(bitmask) - 1;
+
+	return is_power_of_2(bitmask + 1);
+}
+
 static void
 skl_ddb_entry_for_slices(struct drm_i915_private *dev_priv, u8 slice_mask,
 			 struct skl_ddb_entry *ddb)
@@ -3844,6 +3852,7 @@ skl_ddb_entry_for_slices(struct drm_i915_private *dev_priv, u8 slice_mask,
 
 	WARN_ON(ddb->start >= ddb->end);
 	WARN_ON(ddb->end > intel_dbuf_size(dev_priv));
+	WARN_ON(!bitmask_is_contiguous(slice_mask));
 }
 
 static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_state)
-- 
2.24.1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread
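
The contiguity check above can be exercised on its own as well. A quick
userspace sketch, with ffs() taken from <strings.h> and is_power_of_2()
open coded; the example masks are illustrative only:

#include <stdbool.h>
#include <stdio.h>
#include <strings.h>

static bool bitmask_is_contiguous(unsigned int bitmask)
{
	/* shift the lowest set bit down to bit 0 ... */
	if (bitmask)
		bitmask >>= ffs(bitmask) - 1;

	/* ... a contiguous run of ones is then one less than a power of two */
	return ((bitmask + 1) & bitmask) == 0;
}

int main(void)
{
	printf("0x6 -> %d\n", bitmask_is_contiguous(0x6));	/* bits 1-2: 1 */
	printf("0x5 -> %d\n", bitmask_is_contiguous(0x5));	/* hole at bit 1: 0 */
	printf("0x0 -> %d\n", bitmask_is_contiguous(0x0));	/* empty mask: 1 */
	return 0;
}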

* Re: [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms Ville Syrjala
@ 2020-02-25 17:30   ` Lisovskiy, Stanislav
  2020-03-02 14:50     ` Ville Syrjälä
  0 siblings, 1 reply; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-25 17:30 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Currently skl_compute_dbuf_slices() returns 0 for any inactive pipe
> on
> icl+, but returns BIT(S1) on pre-icl for any pipe (whether it's
> active or
> not). Let's make the behaviour consistent and always return 0 for any
> inactive pipe.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index a2e78969c0df..640f4c4fd508 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4408,7 +4408,7 @@ static u8 skl_compute_dbuf_slices(const struct
> intel_crtc_state *crtc_state,
>  	 * For anything else just return one slice yet.
>  	 * Should be extended for other platforms.
>  	 */
> -	return BIT(DBUF_S1);
> +	return active_pipes & BIT(pipe) ? BIT(DBUF_S1) : 0;

I think the initial idea was that this wouldn't even be called if
there are no active pipes at all - skl_ddb_get_pipe_allocation_limits()
would bail out immediately. If there were some active pipes, then we
would have to use slice S1 anyway, because there simply were no other
slices available. Inactive pipes are currently skipped by the
!crtc_state->hw.active check, so I would keep it simple and not call
this function for inactive pipes at all.

Currently we only OR together the slice bitmasks of the active pipes.

Stan

>  }
>  
>  static u64
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread
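
To make the behaviour being discussed concrete, here is a minimal
sketch of the pre-icl path as it looks after the patch; BIT(), the
DBUF_S1 value and the active_pipes mask below are placeholders for the
example, not taken from the driver headers:

#include <stdio.h>

#define BIT(n)	(1u << (n))
#define DBUF_S1	0	/* placeholder slice index for the sketch */

/* only pipes present in active_pipes get slice S1, others get nothing */
static unsigned int pre_icl_dbuf_slices(int pipe, unsigned int active_pipes)
{
	return active_pipes & BIT(pipe) ? BIT(DBUF_S1) : 0;
}

int main(void)
{
	unsigned int active_pipes = BIT(0) | BIT(2);	/* pipes A and C active */
	int pipe;

	/* prints 0x1 for A and C, 0x0 for the inactive pipe B */
	for (pipe = 0; pipe < 3; pipe++)
		printf("pipe %c: slices 0x%x\n", 'A' + pipe,
		       pre_icl_dbuf_slices(pipe, active_pipes));

	return 0;
}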

* Re: [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state Ville Syrjala
@ 2020-02-25 17:43   ` Lisovskiy, Stanislav
  2020-04-01  8:13   ` Lisovskiy, Stanislav
  1 sibling, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-25 17:43 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Add a global state to track the dbuf slices. Gets rid of all the
> nasty
> coupling between state->modeset and dbuf recomputation. Also we can
> now
> totally nuke state->active_pipe_changes.
> 
> dev_priv->wm.distrust_bios_wm still remains, but that too will get
> nuked soon.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  67 +++++--
>  .../drm/i915/display/intel_display_power.c    |   8 +-
>  .../drm/i915/display/intel_display_types.h    |  13 --
>  drivers/gpu/drm/i915/i915_drv.h               |  11 +-
>  drivers/gpu/drm/i915/intel_pm.c               | 189 ++++++++++++--
> ----
>  drivers/gpu/drm/i915/intel_pm.h               |  22 ++
>  6 files changed, 209 insertions(+), 101 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> b/drivers/gpu/drm/i915/display/intel_display.c
> index 6952c398cc43..659b952c8e2f 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -7581,6 +7581,8 @@ static void intel_crtc_disable_noatomic(struct
> intel_crtc *crtc,
>  		to_intel_bw_state(dev_priv->bw_obj.state);
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(dev_priv->cdclk.obj.state);
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
>  	struct intel_crtc_state *crtc_state =
>  		to_intel_crtc_state(crtc->base.state);
>  	enum intel_display_power_domain domain;
> @@ -7654,6 +7656,8 @@ static void intel_crtc_disable_noatomic(struct
> intel_crtc *crtc,
>  	cdclk_state->min_voltage_level[pipe] = 0;
>  	cdclk_state->active_pipes &= ~BIT(pipe);
>  
> +	dbuf_state->active_pipes &= ~BIT(pipe);
> +

I would still vote to encapsulate active_pipes into some other state,
which would then be used by both the CDCLK and DBUF states, so that we
don't duplicate the same field in semantically different states and
thus increase the probability of forgetting to assign it somewhere -
like the issue we had with "active_pipe_changes", which is now
eliminated.

Could it be something like global_crtc_state->active_pipes? The name is
probably not the best here, it's just to reflect the idea.

It would be cool to have all the subsystems encapsulated into such
global objects, with each responsible for its own area.

Stan


>  	bw_state->data_rate[pipe] = 0;
>  	bw_state->num_active_planes[pipe] = 0;
>  }
> @@ -13991,10 +13995,10 @@ static void verify_wm_state(struct
> intel_crtc *crtc,
>  	hw_enabled_slices = intel_enabled_dbuf_slices_mask(dev_priv);
>  
>  	if (INTEL_GEN(dev_priv) >= 11 &&
> -	    hw_enabled_slices != dev_priv->enabled_dbuf_slices_mask)
> +	    hw_enabled_slices != dev_priv->dbuf.enabled_slices)
>  		drm_err(&dev_priv->drm,
>  			"mismatch in DBUF Slices (expected 0x%x, got
> 0x%x)\n",
> -			dev_priv->enabled_dbuf_slices_mask,
> +			dev_priv->dbuf.enabled_slices,
>  			hw_enabled_slices);
>  
>  	/* planes */
> @@ -14529,9 +14533,7 @@ static int intel_modeset_checks(struct
> intel_atomic_state *state)
>  	state->modeset = true;
>  	state->active_pipes = intel_calc_active_pipes(state, dev_priv-
> >active_pipes);
>  
> -	state->active_pipe_changes = state->active_pipes ^ dev_priv-
> >active_pipes;
> -
> -	if (state->active_pipe_changes) {
> +	if (state->active_pipes != dev_priv->active_pipes) {
>  		ret = _intel_atomic_lock_global_state(state);
>  		if (ret)
>  			return ret;
> @@ -15292,22 +15294,38 @@ static void
> intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
>  static void icl_dbuf_slice_pre_update(struct intel_atomic_state
> *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> -	u8 required_slices = state->enabled_dbuf_slices_mask;
> -	u8 slices_union = hw_enabled_slices | required_slices;
> +	const struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(state);
>  
> -	if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> hw_enabled_slices)
> -		gen9_dbuf_slices_update(dev_priv, slices_union);
> +	if (!new_dbuf_state ||
> +	    new_dbuf_state->enabled_slices == old_dbuf_state-
> >enabled_slices)
> +		return;
> +
> +	WARN_ON(!new_dbuf_state->base.changed);
> +
> +	gen9_dbuf_slices_update(dev_priv,
> +				old_dbuf_state->enabled_slices |
> +				new_dbuf_state->enabled_slices);
>  }
>  
>  static void icl_dbuf_slice_post_update(struct intel_atomic_state
> *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> -	u8 required_slices = state->enabled_dbuf_slices_mask;
> +	const struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(state);
>  
> -	if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> hw_enabled_slices)
> -		gen9_dbuf_slices_update(dev_priv, required_slices);
> +	if (!new_dbuf_state ||
> +	    new_dbuf_state->enabled_slices == old_dbuf_state-
> >enabled_slices)
> +		return;
> +
> +	WARN_ON(!new_dbuf_state->base.changed);
> +
> +	gen9_dbuf_slices_update(dev_priv,
> +				new_dbuf_state->enabled_slices);
>  }
>  
>  static void skl_commit_modeset_enables(struct intel_atomic_state
> *state)
> @@ -15562,9 +15580,7 @@ static void intel_atomic_commit_tail(struct
> intel_atomic_state *state)
>  	if (state->modeset)
>  		intel_encoders_update_prepare(state);
>  
> -	/* Enable all new slices, we might need */
> -	if (state->modeset)
> -		icl_dbuf_slice_pre_update(state);
> +	icl_dbuf_slice_pre_update(state);
>  
>  	/* Now enable the clocks, plane, pipe, and connectors that we
> set up. */
>  	dev_priv->display.commit_modeset_enables(state);
> @@ -15619,9 +15635,7 @@ static void intel_atomic_commit_tail(struct
> intel_atomic_state *state)
>  			dev_priv->display.optimize_watermarks(state,
> crtc);
>  	}
>  
> -	/* Disable all slices, we don't need */
> -	if (state->modeset)
> -		icl_dbuf_slice_post_update(state);
> +	icl_dbuf_slice_post_update(state);
>  
>  	for_each_oldnew_intel_crtc_in_state(state, crtc,
> old_crtc_state, new_crtc_state, i) {
>  		intel_post_plane_update(state, crtc);
> @@ -17507,10 +17521,14 @@ void intel_modeset_init_hw(struct
> drm_i915_private *i915)
>  {
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(i915->cdclk.obj.state);
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(i915->dbuf.obj.state);
>  
>  	intel_update_cdclk(i915);
>  	intel_dump_cdclk_config(&i915->cdclk.hw, "Current CDCLK");
>  	cdclk_state->logical = cdclk_state->actual = i915->cdclk.hw;
> +
> +	dbuf_state->enabled_slices = i915->dbuf.enabled_slices;
>  }
>  
>  static int sanitize_watermarks_add_affected(struct drm_atomic_state
> *state)
> @@ -17800,6 +17818,10 @@ int intel_modeset_init(struct
> drm_i915_private *i915)
>  	if (ret)
>  		return ret;
>  
> +	ret = intel_dbuf_init(i915);
> +	if (ret)
> +		return ret;
> +
>  	ret = intel_bw_init(i915);
>  	if (ret)
>  		return ret;
> @@ -18303,6 +18325,8 @@ static void
> intel_modeset_readout_hw_state(struct drm_device *dev)
>  	struct drm_i915_private *dev_priv = to_i915(dev);
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(dev_priv->cdclk.obj.state);
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
>  	enum pipe pipe;
>  	struct intel_crtc *crtc;
>  	struct intel_encoder *encoder;
> @@ -18334,7 +18358,8 @@ static void
> intel_modeset_readout_hw_state(struct drm_device *dev)
>  			    enableddisabled(crtc_state->hw.active));
>  	}
>  
> -	dev_priv->active_pipes = cdclk_state->active_pipes =
> active_pipes;
> +	dev_priv->active_pipes = cdclk_state->active_pipes =
> +		dbuf_state->active_pipes = active_pipes;
>  
>  	readout_plane_state(dev_priv);
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> b/drivers/gpu/drm/i915/display/intel_display_power.c
> index ce3bbc4c7a27..dc0c9694b714 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> @@ -1062,7 +1062,7 @@ static bool
> gen9_dc_off_power_well_enabled(struct drm_i915_private *dev_priv,
>  static void gen9_assert_dbuf_enabled(struct drm_i915_private
> *dev_priv)
>  {
>  	u8 hw_enabled_dbuf_slices =
> intel_enabled_dbuf_slices_mask(dev_priv);
> -	u8 enabled_dbuf_slices = dev_priv->enabled_dbuf_slices_mask;
> +	u8 enabled_dbuf_slices = dev_priv->dbuf.enabled_slices;
>  
>  	drm_WARN(&dev_priv->drm,
>  		 hw_enabled_dbuf_slices != enabled_dbuf_slices,
> @@ -4481,14 +4481,14 @@ void gen9_dbuf_slices_update(struct
> drm_i915_private *dev_priv,
>  	for (slice = DBUF_S1; slice < num_slices; slice++)
>  		gen9_dbuf_slice_set(dev_priv, slice, req_slices &
> BIT(slice));
>  
> -	dev_priv->enabled_dbuf_slices_mask = req_slices;
> +	dev_priv->dbuf.enabled_slices = req_slices;
>  
>  	mutex_unlock(&power_domains->lock);
>  }
>  
>  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
>  {
> -	dev_priv->enabled_dbuf_slices_mask =
> +	dev_priv->dbuf.enabled_slices =
>  		intel_enabled_dbuf_slices_mask(dev_priv);
>  
>  	/*
> @@ -4496,7 +4496,7 @@ static void gen9_dbuf_enable(struct
> drm_i915_private *dev_priv)
>  	 * figure out later which slices we have and what we need.
>  	 */
>  	gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> -				dev_priv->enabled_dbuf_slices_mask);
> +				dev_priv->dbuf.enabled_slices);
>  }
>  
>  static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h
> b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 0d8a64305464..165efa00d88b 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -471,16 +471,6 @@ struct intel_atomic_state {
>  
>  	bool dpll_set, modeset;
>  
> -	/*
> -	 * Does this transaction change the pipes that are
> active?  This mask
> -	 * tracks which CRTC's have changed their active state at the
> end of
> -	 * the transaction (not counting the temporary disable during
> modesets).
> -	 * This mask should only be non-zero when intel_state->modeset
> is true,
> -	 * but the converse is not necessarily true; simply changing a
> mode may
> -	 * not flip the final active status of any CRTC's
> -	 */
> -	u8 active_pipe_changes;
> -
>  	u8 active_pipes;
>  
>  	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
> @@ -498,9 +488,6 @@ struct intel_atomic_state {
>  	 */
>  	bool global_state_changed;
>  
> -	/* Number of enabled DBuf slices */
> -	u8 enabled_dbuf_slices_mask;
> -
>  	struct i915_sw_fence commit_ready;
>  
>  	struct llist_node freed;
> diff --git a/drivers/gpu/drm/i915/i915_drv.h
> b/drivers/gpu/drm/i915/i915_drv.h
> index 88e4fb8ac739..d03c84f373e6 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1006,6 +1006,13 @@ struct drm_i915_private {
>  		struct intel_global_obj obj;
>  	} cdclk;
>  
> +	struct {
> +		/* The current hardware dbuf configuration */
> +		u8 enabled_slices;
> +
> +		struct intel_global_obj obj;
> +	} dbuf;
> +
>  	/**
>  	 * wq - Driver workqueue for GEM.
>  	 *
> @@ -1181,12 +1188,12 @@ struct drm_i915_private {
>  		 * Set during HW readout of watermarks/DDB.  Some
> platforms
>  		 * need to know when we're still using BIOS-provided
> values
>  		 * (which we don't fully trust).
> +		 *
> +		 * FIXME get rid of this.
>  		 */
>  		bool distrust_bios_wm;
>  	} wm;
>  
> -	u8 enabled_dbuf_slices_mask; /* GEN11 has configurable 2 slices
> */
> -
>  	struct dram_info {
>  		bool valid;
>  		bool is_16gb_dimm;
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index 640f4c4fd508..d4730d9b4e1b 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3845,7 +3845,7 @@ static u16 intel_get_ddb_size(struct
> drm_i915_private *dev_priv)
>  static u8 skl_compute_dbuf_slices(const struct intel_crtc_state
> *crtc_state,
>  				  u8 active_pipes);
>  
> -static void
> +static int
>  skl_ddb_get_pipe_allocation_limits(struct drm_i915_private
> *dev_priv,
>  				   const struct intel_crtc_state
> *crtc_state,
>  				   const u64 total_data_rate,
> @@ -3858,30 +3858,29 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  	const struct intel_crtc *crtc;
>  	u32 pipe_width = 0, total_width_in_range = 0,
> width_before_pipe_in_range = 0;
>  	enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe;
> +	struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(intel_state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(intel_state);
> +	u8 active_pipes = new_dbuf_state->active_pipes;
>  	u16 ddb_size;
>  	u32 ddb_range_size;
>  	u32 i;
>  	u32 dbuf_slice_mask;
> -	u32 active_pipes;
>  	u32 offset;
>  	u32 slice_size;
>  	u32 total_slice_mask;
>  	u32 start, end;
> +	int ret;
>  
> -	if (drm_WARN_ON(&dev_priv->drm, !state) || !crtc_state-
> >hw.active) {
> +	*num_active = hweight8(active_pipes);
> +
> +	if (!crtc_state->hw.active) {
>  		alloc->start = 0;
>  		alloc->end = 0;
> -		*num_active = hweight8(dev_priv->active_pipes);
> -		return;
> +		return 0;
>  	}
>  
> -	if (intel_state->active_pipe_changes)
> -		active_pipes = intel_state->active_pipes;
> -	else
> -		active_pipes = dev_priv->active_pipes;
> -
> -	*num_active = hweight8(active_pipes);
> -
>  	ddb_size = intel_get_ddb_size(dev_priv);
>  
>  	slice_size = ddb_size / INTEL_INFO(dev_priv)-
> >num_supported_dbuf_slices;
> @@ -3894,13 +3893,16 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  	 * that changes the active CRTC list or do modeset would need
> to
>  	 * grab _all_ crtc locks, including the one we currently hold.
>  	 */
> -	if (!intel_state->active_pipe_changes && !intel_state->modeset) 
> {
> +	if (old_dbuf_state->active_pipes == new_dbuf_state-
> >active_pipes &&
> +	    !dev_priv->wm.distrust_bios_wm) {
>  		/*
>  		 * alloc may be cleared by clear_intel_crtc_state,
>  		 * copy from old state to be sure
> +		 *
> +		 * FIXME get rid of this mess
>  		 */
>  		*alloc = to_intel_crtc_state(for_crtc->state)-
> >wm.skl.ddb;
> -		return;
> +		return 0;
>  	}
>  
>  	/*
> @@ -3979,7 +3981,13 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  	 * FIXME: For now we always enable slice S1 as per
>  	 * the Bspec display initialization sequence.
>  	 */
> -	intel_state->enabled_dbuf_slices_mask = total_slice_mask |
> BIT(DBUF_S1);
> +	new_dbuf_state->enabled_slices = total_slice_mask |
> BIT(DBUF_S1);
> +
> +	if (old_dbuf_state->enabled_slices != new_dbuf_state-
> >enabled_slices) {
> +		ret =
> intel_atomic_serialize_global_state(&new_dbuf_state->base);
> +		if (ret)
> +			return ret;
> +	}
>  
>  	start = ddb_range_size * width_before_pipe_in_range /
> total_width_in_range;
>  	end = ddb_range_size *
> @@ -3990,9 +3998,8 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  
>  	DRM_DEBUG_KMS("Pipe %d ddb %d-%d\n", for_pipe,
>  		      alloc->start, alloc->end);
> -	DRM_DEBUG_KMS("Enabled ddb slices mask %x num supported %d\n",
> -		      intel_state->enabled_dbuf_slices_mask,
> -		      INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
> +
> +	return 0;
>  }
>  
>  static int skl_compute_wm_params(const struct intel_crtc_state
> *crtc_state,
> @@ -4112,8 +4119,8 @@ void skl_pipe_ddb_get_hw_state(struct
> intel_crtc *crtc,
>  
>  void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv)
>  {
> -	dev_priv->enabled_dbuf_slices_mask =
> -				intel_enabled_dbuf_slices_mask(dev_priv
> );
> +	dev_priv->dbuf.enabled_slices =
> +		intel_enabled_dbuf_slices_mask(dev_priv);
>  }
>  
>  /*
> @@ -4546,6 +4553,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state
> *crtc_state)
>  	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
>  	u32 blocks;
>  	int level;
> +	int ret;
>  
>  	/* Clear the partitioning for disabled planes. */
>  	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state-
> >wm.skl.plane_ddb_y));
> @@ -4567,8 +4575,12 @@ skl_allocate_pipe_ddb(struct intel_crtc_state
> *crtc_state)
>  							 uv_plane_data_
> rate);
>  
>  
> -	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
> total_data_rate,
> -					   alloc, &num_active);
> +	ret = skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
> +						 total_data_rate,
> +						 alloc, &num_active);
> +	if (ret)
> +		return ret;
> +
>  	alloc_size = skl_ddb_entry_size(alloc);
>  	if (alloc_size == 0)
>  		return 0;
> @@ -5451,14 +5463,11 @@ skl_ddb_add_affected_planes(const struct
> intel_crtc_state *old_crtc_state,
>  static int
>  skl_compute_ddb(struct intel_atomic_state *state)
>  {
> -	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>  	struct intel_crtc_state *old_crtc_state;
>  	struct intel_crtc_state *new_crtc_state;
>  	struct intel_crtc *crtc;
>  	int ret, i;
>  
> -	state->enabled_dbuf_slices_mask = dev_priv-
> >enabled_dbuf_slices_mask;
> -
>  	for_each_oldnew_intel_crtc_in_state(state, crtc,
> old_crtc_state,
>  					    new_crtc_state, i) {
>  		ret = skl_allocate_pipe_ddb(new_crtc_state);
> @@ -5598,7 +5607,8 @@ skl_print_wm_changes(struct intel_atomic_state
> *state)
>  	}
>  }
>  
> -static int intel_add_all_pipes(struct intel_atomic_state *state)
> +static int intel_add_affected_pipes(struct intel_atomic_state
> *state,
> +				    u8 pipe_mask)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>  	struct intel_crtc *crtc;
> @@ -5606,6 +5616,9 @@ static int intel_add_all_pipes(struct
> intel_atomic_state *state)
>  	for_each_intel_crtc(&dev_priv->drm, crtc) {
>  		struct intel_crtc_state *crtc_state;
>  
> +		if ((pipe_mask & BIT(crtc->pipe)) == 0)
> +			continue;
> +
>  		crtc_state = intel_atomic_get_crtc_state(&state->base,
> crtc);
>  		if (IS_ERR(crtc_state))
>  			return PTR_ERR(crtc_state);
> @@ -5618,49 +5631,54 @@ static int
>  skl_ddb_add_affected_pipes(struct intel_atomic_state *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	int ret;
> +	struct intel_crtc_state *crtc_state;
> +	struct intel_crtc *crtc;
> +	int i, ret;
>  
> -	/*
> -	 * If this is our first atomic update following hardware
> readout,
> -	 * we can't trust the DDB that the BIOS programmed for
> us.  Let's
> -	 * pretend that all pipes switched active status so that we'll
> -	 * ensure a full DDB recompute.
> -	 */
>  	if (dev_priv->wm.distrust_bios_wm) {
> -		ret = drm_modeset_lock(&dev_priv-
> >drm.mode_config.connection_mutex,
> -				       state->base.acquire_ctx);
> -		if (ret)
> -			return ret;
> -
> -		state->active_pipe_changes = INTEL_INFO(dev_priv)-
> >pipe_mask;
> -
>  		/*
> -		 * We usually only initialize state->active_pipes if we
> -		 * we're doing a modeset; make sure this field is
> always
> -		 * initialized during the sanitization process that
> happens
> -		 * on the first commit too.
> +		 * skl_ddb_get_pipe_allocation_limits() currently
> requires
> +		 * all active pipes to be included in the state so that
> +		 * it can redistribute the dbuf among them, and it
> really
> +		 * wants to recompute things when distrust_bios_wm is
> set
> +		 * so we add all the pipes to the state.
>  		 */
> -		if (!state->modeset)
> -			state->active_pipes = dev_priv->active_pipes;
> +		ret = intel_add_affected_pipes(state, ~0);
> +		if (ret)
> +			return ret;
>  	}
>  
> -	/*
> -	 * If the modeset changes which CRTC's are active, we need to
> -	 * recompute the DDB allocation for *all* active pipes, even
> -	 * those that weren't otherwise being modified in any way by
> this
> -	 * atomic commit.  Due to the shrinking of the per-pipe
> allocations
> -	 * when new active CRTC's are added, it's possible for a pipe
> that
> -	 * we were already using and aren't changing at all here to
> suddenly
> -	 * become invalid if its DDB needs exceeds its new allocation.
> -	 *
> -	 * Note that if we wind up doing a full DDB recompute, we can't
> let
> -	 * any other display updates race with this transaction, so we
> need
> -	 * to grab the lock on *all* CRTC's.
> -	 */
> -	if (state->active_pipe_changes || state->modeset) {
> -		ret = intel_add_all_pipes(state);
> +	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
> +		struct intel_dbuf_state *new_dbuf_state;
> +		const struct intel_dbuf_state *old_dbuf_state;
> +
> +		new_dbuf_state = intel_atomic_get_dbuf_state(state);
> +		if (IS_ERR(new_dbuf_state))
> +			return ret;
> +
> +		old_dbuf_state =
> intel_atomic_get_old_dbuf_state(state);
> +
> +		new_dbuf_state->active_pipes =
> +			intel_calc_active_pipes(state, old_dbuf_state-
> >active_pipes);
> +
> +		if (old_dbuf_state->active_pipes == new_dbuf_state-
> >active_pipes)
> +			break;
> +
> +		ret = intel_atomic_lock_global_state(&new_dbuf_state-
> >base);
> +		if (ret)
> +			return ret;
> +
> +		/*
> +		 * skl_ddb_get_pipe_allocation_limits() currently
> requires
> +		 * all active pipes to be included in the state so that
> +		 * it can redistribute the dbuf among them.
> +		 */
> +		ret = intel_add_affected_pipes(state,
> +					       new_dbuf_state-
> >active_pipes);
>  		if (ret)
>  			return ret;
> +
> +		break;
>  	}
>  
>  	return 0;
> @@ -7493,3 +7511,52 @@ void intel_pm_setup(struct drm_i915_private
> *dev_priv)
>  	dev_priv->runtime_pm.suspended = false;
>  	atomic_set(&dev_priv->runtime_pm.wakeref_count, 0);
>  }
> +
> +static struct intel_global_state *intel_dbuf_duplicate_state(struct
> intel_global_obj *obj)
> +{
> +	struct intel_dbuf_state *dbuf_state;
> +
> +	dbuf_state = kmemdup(obj->state, sizeof(*dbuf_state),
> GFP_KERNEL);
> +	if (!dbuf_state)
> +		return NULL;
> +
> +	return &dbuf_state->base;
> +}
> +
> +static void intel_dbuf_destroy_state(struct intel_global_obj *obj,
> +				     struct intel_global_state *state)
> +{
> +	kfree(state);
> +}
> +
> +static const struct intel_global_state_funcs intel_dbuf_funcs = {
> +	.atomic_duplicate_state = intel_dbuf_duplicate_state,
> +	.atomic_destroy_state = intel_dbuf_destroy_state,
> +};
> +
> +struct intel_dbuf_state *
> +intel_atomic_get_dbuf_state(struct intel_atomic_state *state)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> +	struct intel_global_state *dbuf_state;
> +
> +	dbuf_state = intel_atomic_get_global_obj_state(state,
> &dev_priv->dbuf.obj);
> +	if (IS_ERR(dbuf_state))
> +		return ERR_CAST(dbuf_state);
> +
> +	return to_intel_dbuf_state(dbuf_state);
> +}
> +
> +int intel_dbuf_init(struct drm_i915_private *dev_priv)
> +{
> +	struct intel_dbuf_state *dbuf_state;
> +
> +	dbuf_state = kzalloc(sizeof(*dbuf_state), GFP_KERNEL);
> +	if (!dbuf_state)
> +		return -ENOMEM;
> +
> +	intel_atomic_global_obj_init(dev_priv, &dev_priv->dbuf.obj,
> +				     &dbuf_state->base,
> &intel_dbuf_funcs);
> +
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/i915/intel_pm.h
> b/drivers/gpu/drm/i915/intel_pm.h
> index d60a85421c5a..fadf7cbc44c4 100644
> --- a/drivers/gpu/drm/i915/intel_pm.h
> +++ b/drivers/gpu/drm/i915/intel_pm.h
> @@ -8,6 +8,8 @@
>  
>  #include <linux/types.h>
>  
> +#include "display/intel_global_state.h"
> +
>  #include "i915_reg.h"
>  
>  struct drm_device;
> @@ -59,4 +61,24 @@ void intel_enable_ipc(struct drm_i915_private
> *dev_priv);
>  
>  bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool
> enable);
>  
> +struct intel_dbuf_state {
> +	struct intel_global_state base;
> +
> +	u8 enabled_slices;
> +	u8 active_pipes;
> +};
> +
> +int intel_dbuf_init(struct drm_i915_private *dev_priv);
> +
> +struct intel_dbuf_state *
> +intel_atomic_get_dbuf_state(struct intel_atomic_state *state);
> +
> +#define to_intel_dbuf_state(x) container_of((x), struct
> intel_dbuf_state, base)
> +#define intel_atomic_get_old_dbuf_state(state) \
> +	to_intel_dbuf_state(intel_atomic_get_old_global_obj_state(state
> , &to_i915(state->base.dev)->dbuf.obj))
> +#define intel_atomic_get_new_dbuf_state(state) \
> +	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state
> , &to_i915(state->base.dev)->dbuf.obj))
> +
> +int intel_dbuf_init(struct drm_i915_private *dev_priv);
> +
>  #endif /* __INTEL_PM_H__ */
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 20/20] drm/i915: Check slice mask for holes
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 20/20] drm/i915: Check slice mask for holes Ville Syrjala
@ 2020-02-25 17:47   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-25 17:47 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Make sure the dbuf slice mask for any individual pipe has no
> holes between the slices.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index 7edac506d343..fa39ab0b1223 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3827,6 +3827,14 @@ static int intel_dbuf_slice_size(struct
> drm_i915_private *dev_priv)
>  		INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
>  }
>  
> +static bool bitmask_is_contiguous(unsigned int bitmask)
> +{
> +	if (bitmask)
> +		bitmask >>= ffs(bitmask) - 1;
> +
> +	return is_power_of_2(bitmask + 1);
> +}
> +

Well, I guess we just don't trust BSpec tables here :)

Shouldn't this be already taken care of by the actual tables, which we
anyway seem have to encode "as is".

Moreover, I wouldn't even be sure that one day, they won't come up
with that you can have gaps for those, anyway currently
we don't have them according to current tables

Stan

>  static void
>  skl_ddb_entry_for_slices(struct drm_i915_private *dev_priv, u8
> slice_mask,
>  			 struct skl_ddb_entry *ddb)
> @@ -3844,6 +3852,7 @@ skl_ddb_entry_for_slices(struct
> drm_i915_private *dev_priv, u8 slice_mask,
>  
>  	WARN_ON(ddb->start >= ddb->end);
>  	WARN_ON(ddb->end > intel_dbuf_size(dev_priv));
> +	WARN_ON(!bitmask_is_contiguous(slice_mask));
>  }
>  
>  static unsigned int intel_crtc_ddb_weight(const struct
> intel_crtc_state *crtc_state)
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 13/20] drm/i915: Pass the crtc to skl_compute_dbuf_slices()
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 13/20] drm/i915: Pass the crtc to skl_compute_dbuf_slices() Ville Syrjala
@ 2020-02-26  8:41   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-26  8:41 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> skl_compute_dbuf_slices() has no use for the crtc state, so
> just pass the crtc itself.


Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 22 ++++++++++------------
>  1 file changed, 10 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index 3f48ce7517e2..256622b603cd 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3861,7 +3861,7 @@ static unsigned int intel_crtc_ddb_weight(const
> struct intel_crtc_state *crtc_st
>  	return hdisplay;
>  }
>  
> -static u8 skl_compute_dbuf_slices(const struct intel_crtc_state
> *crtc_state,
> +static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc,
>  				  u8 active_pipes);
>  
>  static int
> @@ -3873,10 +3873,10 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  {
>  	struct drm_atomic_state *state = crtc_state->uapi.state;
>  	struct intel_atomic_state *intel_state =
> to_intel_atomic_state(state);
> -	struct drm_crtc *for_crtc = crtc_state->uapi.crtc;
> -	const struct intel_crtc *crtc;
> +	struct intel_crtc *for_crtc = to_intel_crtc(crtc_state-
> >uapi.crtc);
> +	struct intel_crtc *crtc;
>  	unsigned int pipe_weight = 0, total_weight = 0,
> weight_before_pipe = 0;
> -	enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe;
> +	enum pipe for_pipe = for_crtc->pipe;
>  	struct intel_dbuf_state *new_dbuf_state =
>  		intel_atomic_get_new_dbuf_state(intel_state);
>  	const struct intel_dbuf_state *old_dbuf_state =
> @@ -3920,14 +3920,14 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  		 *
>  		 * FIXME get rid of this mess
>  		 */
> -		*alloc = to_intel_crtc_state(for_crtc->state)-
> >wm.skl.ddb;
> +		*alloc = to_intel_crtc_state(for_crtc->base.state)-
> >wm.skl.ddb;
>  		return 0;
>  	}
>  
>  	/*
>  	 * Get allowed DBuf slices for correspondent pipe and platform.
>  	 */
> -	dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state,
> active_pipes);
> +	dbuf_slice_mask = skl_compute_dbuf_slices(for_crtc,
> active_pipes);
>  
>  	/*
>  	 * Figure out at which DBuf slice we start, i.e if we start at
> Dbuf S2
> @@ -3953,8 +3953,8 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  		if (!crtc_state->hw.active)
>  			continue;
>  
> -		pipe_dbuf_slice_mask =
> skl_compute_dbuf_slices(crtc_state,
> -							       active_p
> ipes);
> +		pipe_dbuf_slice_mask =
> +			skl_compute_dbuf_slices(crtc, active_pipes);
>  
>  		/*
>  		 * According to BSpec pipe can share one dbuf slice
> with another
> @@ -4004,7 +4004,7 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  
>  	drm_dbg_kms(&dev_priv->drm,
>  		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d),
> active pipes 0x%x\n",
> -		    for_crtc->base.id, for_crtc->name,
> +		    for_crtc->base.base.id, for_crtc->base.name,
>  		    dbuf_slice_mask, alloc->start, alloc->end,
> active_pipes);
>  
>  	return 0;
> @@ -4402,10 +4402,8 @@ static u8 tgl_compute_dbuf_slices(enum pipe
> pipe, u8 active_pipes)
>  	return compute_dbuf_slices(pipe, active_pipes,
> tgl_allowed_dbufs);
>  }
>  
> -static u8 skl_compute_dbuf_slices(const struct intel_crtc_state
> *crtc_state,
> -				  u8 active_pipes)
> +static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc, u8
> active_pipes)
>  {
> -	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
>  	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
>  	enum pipe pipe = crtc->pipe;
>  
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/ Ville Syrjala
@ 2020-02-26  9:29   ` Jani Nikula
  0 siblings, 0 replies; 55+ messages in thread
From: Jani Nikula @ 2020-02-26  9:29 UTC (permalink / raw)
  To: Ville Syrjala, intel-gfx

On Tue, 25 Feb 2020, Ville Syrjala <ville.syrjala@linux.intel.com> wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> Switch to the preferred 'crtc' name for our crtc variables.
>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Reviewed-by: Jani Nikula <jani.nikula@intel.com>

> ---
>  drivers/gpu/drm/i915/intel_pm.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 22aa205793e5..543634d3e10c 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -2776,7 +2776,7 @@ static bool ilk_validate_wm_level(int level,
>  }
>  
>  static void ilk_compute_wm_level(const struct drm_i915_private *dev_priv,
> -				 const struct intel_crtc *intel_crtc,
> +				 const struct intel_crtc *crtc,
>  				 int level,
>  				 struct intel_crtc_state *crtc_state,
>  				 const struct intel_plane_state *pristate,
> @@ -3107,7 +3107,7 @@ static bool ilk_validate_pipe_wm(const struct drm_i915_private *dev_priv,
>  static int ilk_compute_pipe_wm(struct intel_crtc_state *crtc_state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
> -	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
> +	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
>  	struct intel_pipe_wm *pipe_wm;
>  	struct intel_plane *plane;
>  	const struct intel_plane_state *plane_state;
> @@ -3147,7 +3147,7 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *crtc_state)
>  		usable_level = 0;
>  
>  	memset(&pipe_wm->wm, 0, sizeof(pipe_wm->wm));
> -	ilk_compute_wm_level(dev_priv, intel_crtc, 0, crtc_state,
> +	ilk_compute_wm_level(dev_priv, crtc, 0, crtc_state,
>  			     pristate, sprstate, curstate, &pipe_wm->wm[0]);
>  
>  	if (!ilk_validate_pipe_wm(dev_priv, pipe_wm))
> @@ -3158,7 +3158,7 @@ static int ilk_compute_pipe_wm(struct intel_crtc_state *crtc_state)
>  	for (level = 1; level <= usable_level; level++) {
>  		struct intel_wm_level *wm = &pipe_wm->wm[level];
>  
> -		ilk_compute_wm_level(dev_priv, intel_crtc, level, crtc_state,
> +		ilk_compute_wm_level(dev_priv, crtc, level, crtc_state,
>  				     pristate, sprstate, curstate, wm);
>  
>  		/*
> @@ -4549,9 +4549,8 @@ static int
>  skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  {
>  	struct drm_atomic_state *state = crtc_state->uapi.state;
> -	struct drm_crtc *crtc = crtc_state->uapi.crtc;
> -	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
> -	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> +	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
> +	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
>  	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
>  	u16 alloc_size, start = 0;
>  	u16 total[I915_MAX_PLANES] = {};
> @@ -4609,7 +4608,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  	 */
>  	for (level = ilk_wm_max_level(dev_priv); level >= 0; level--) {
>  		blocks = 0;
> -		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +		for_each_plane_id_on_crtc(crtc, plane_id) {
>  			const struct skl_plane_wm *wm =
>  				&crtc_state->wm.skl.optimal.planes[plane_id];
>  
> @@ -4646,7 +4645,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  	 * watermark level, plus an extra share of the leftover blocks
>  	 * proportional to its relative data rate.
>  	 */
> -	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +	for_each_plane_id_on_crtc(crtc, plane_id) {
>  		const struct skl_plane_wm *wm =
>  			&crtc_state->wm.skl.optimal.planes[plane_id];
>  		u64 rate;
> @@ -4685,7 +4684,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  
>  	/* Set the actual DDB start/end points for each plane */
>  	start = alloc->start;
> -	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +	for_each_plane_id_on_crtc(crtc, plane_id) {
>  		struct skl_ddb_entry *plane_alloc =
>  			&crtc_state->wm.skl.plane_ddb_y[plane_id];
>  		struct skl_ddb_entry *uv_plane_alloc =
> @@ -4719,7 +4718,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  	 * that aren't actually possible.
>  	 */
>  	for (level++; level <= ilk_wm_max_level(dev_priv); level++) {
> -		for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +		for_each_plane_id_on_crtc(crtc, plane_id) {
>  			struct skl_plane_wm *wm =
>  				&crtc_state->wm.skl.optimal.planes[plane_id];
>  
> @@ -4756,7 +4755,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  	 * Go back and disable the transition watermark if it turns out we
>  	 * don't have enough DDB blocks for it.
>  	 */
> -	for_each_plane_id_on_crtc(intel_crtc, plane_id) {
> +	for_each_plane_id_on_crtc(crtc, plane_id) {
>  		struct skl_plane_wm *wm =
>  			&crtc_state->wm.skl.optimal.planes[plane_id];

-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 02/20] drm/i915: Remove garbage WARNs
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 02/20] drm/i915: Remove garbage WARNs Ville Syrjala
@ 2020-02-26  9:30   ` Jani Nikula
  0 siblings, 0 replies; 55+ messages in thread
From: Jani Nikula @ 2020-02-26  9:30 UTC (permalink / raw)
  To: Ville Syrjala, intel-gfx

On Tue, 25 Feb 2020, Ville Syrjala <ville.syrjala@linux.intel.com> wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> These things can never happen, and probably we'd have oopsed long ago
> if they did. Just get rid of this pointless noise in the code.
>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Reviewed-by: Jani Nikula <jani.nikula@intel.com>

> ---
>  drivers/gpu/drm/i915/intel_pm.c | 11 -----------
>  1 file changed, 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 543634d3e10c..59fc461bc454 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4470,14 +4470,10 @@ skl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
>  				 u64 *plane_data_rate,
>  				 u64 *uv_plane_data_rate)
>  {
> -	struct drm_atomic_state *state = crtc_state->uapi.state;
>  	struct intel_plane *plane;
>  	const struct intel_plane_state *plane_state;
>  	u64 total_data_rate = 0;
>  
> -	if (WARN_ON(!state))
> -		return 0;
> -
>  	/* Calculate and cache data rate for each plane */
>  	intel_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) {
>  		enum plane_id plane_id = plane->id;
> @@ -4505,9 +4501,6 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
>  	const struct intel_plane_state *plane_state;
>  	u64 total_data_rate = 0;
>  
> -	if (WARN_ON(!crtc_state->uapi.state))
> -		return 0;
> -
>  	/* Calculate and cache data rate for each plane */
>  	intel_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) {
>  		enum plane_id plane_id = plane->id;
> @@ -4548,7 +4541,6 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
>  static int
>  skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  {
> -	struct drm_atomic_state *state = crtc_state->uapi.state;
>  	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
>  	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
>  	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
> @@ -4567,9 +4559,6 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y));
>  	memset(crtc_state->wm.skl.plane_ddb_uv, 0, sizeof(crtc_state->wm.skl.plane_ddb_uv));
>  
> -	if (drm_WARN_ON(&dev_priv->drm, !state))
> -		return 0;
> -
>  	if (!crtc_state->hw.active) {
>  		alloc->start = alloc->end = 0;
>  		return 0;

-- 
Jani Nikula, Intel Open Source Graphics Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 03/20] drm/i915: Add missing commas to dbuf tables
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 03/20] drm/i915: Add missing commas to dbuf tables Ville Syrjala
@ 2020-02-26  9:30   ` Jani Nikula
  0 siblings, 0 replies; 55+ messages in thread
From: Jani Nikula @ 2020-02-26  9:30 UTC (permalink / raw)
  To: Ville Syrjala, intel-gfx

On Tue, 25 Feb 2020, Ville Syrjala <ville.syrjala@linux.intel.com> wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> The preferred style is to sprinkle commas after each array and
> structure initialization, whether or not it happens to be the
> last element/member (only exception being sentinel entries which
> never have anything after them). This leads to much prettier
> diffs if/when new elements/members get added to the end of the
> initialization. We're not bound by some ancient silly mandate
> to omit the final comma.
>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Reviewed-by: Jani Nikula <jani.nikula@intel.com>
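
As a quick aside for readers skimming the archive: the convention argued for
above is easiest to see with a tiny made-up initializer (not from the driver).
With a trailing comma on the last entry, appending a new entry later touches
only the added line:

  static const int latency_us[] = {
          100,
          200,    /* appending "400," below is a one-line diff */
  };

Without the final comma, the old last line would also have to change just to
gain a comma, producing a noisier diff.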

> ---
>  drivers/gpu/drm/i915/intel_pm.c | 88 ++++++++++++++++-----------------
>  1 file changed, 44 insertions(+), 44 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 59fc461bc454..abeb4b19071f 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4184,49 +4184,49 @@ static const struct dbuf_slice_conf_entry icl_allowed_dbufs[] =
>  	{
>  		.active_pipes = BIT(PIPE_A),
>  		.dbuf_mask = {
> -			[PIPE_A] = BIT(DBUF_S1)
> -		}
> +			[PIPE_A] = BIT(DBUF_S1),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_B),
>  		.dbuf_mask = {
> -			[PIPE_B] = BIT(DBUF_S1)
> -		}
> +			[PIPE_B] = BIT(DBUF_S1),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
> -			[PIPE_B] = BIT(DBUF_S2)
> -		}
> +			[PIPE_B] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_C),
>  		.dbuf_mask = {
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C),
>  		.dbuf_mask = {
>  			[PIPE_B] = BIT(DBUF_S1),
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
>  			[PIPE_B] = BIT(DBUF_S1),
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  };
>  
> @@ -4246,100 +4246,100 @@ static const struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
>  	{
>  		.active_pipes = BIT(PIPE_A),
>  		.dbuf_mask = {
> -			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2)
> -		}
> +			[PIPE_A] = BIT(DBUF_S1) | BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_B),
>  		.dbuf_mask = {
> -			[PIPE_B] = BIT(DBUF_S1) | BIT(DBUF_S2)
> -		}
> +			[PIPE_B] = BIT(DBUF_S1) | BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S2),
> -			[PIPE_B] = BIT(DBUF_S1)
> -		}
> +			[PIPE_B] = BIT(DBUF_S1),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_C),
>  		.dbuf_mask = {
> -			[PIPE_C] = BIT(DBUF_S2) | BIT(DBUF_S1)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2) | BIT(DBUF_S1),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C),
>  		.dbuf_mask = {
>  			[PIPE_B] = BIT(DBUF_S1),
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
>  			[PIPE_B] = BIT(DBUF_S1),
> -			[PIPE_C] = BIT(DBUF_S2)
> -		}
> +			[PIPE_C] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_D),
>  		.dbuf_mask = {
> -			[PIPE_D] = BIT(DBUF_S2) | BIT(DBUF_S1)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2) | BIT(DBUF_S1),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_D),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_B) | BIT(PIPE_D),
>  		.dbuf_mask = {
>  			[PIPE_B] = BIT(DBUF_S1),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_D),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
>  			[PIPE_B] = BIT(DBUF_S1),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_C) | BIT(PIPE_D),
>  		.dbuf_mask = {
>  			[PIPE_C] = BIT(DBUF_S1),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_C) | BIT(PIPE_D),
>  		.dbuf_mask = {
>  			[PIPE_A] = BIT(DBUF_S1),
>  			[PIPE_C] = BIT(DBUF_S2),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
>  		.dbuf_mask = {
>  			[PIPE_B] = BIT(DBUF_S1),
>  			[PIPE_C] = BIT(DBUF_S2),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  	{
>  		.active_pipes = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
> @@ -4347,8 +4347,8 @@ static const struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
>  			[PIPE_A] = BIT(DBUF_S1),
>  			[PIPE_B] = BIT(DBUF_S1),
>  			[PIPE_C] = BIT(DBUF_S2),
> -			[PIPE_D] = BIT(DBUF_S2)
> -		}
> +			[PIPE_D] = BIT(DBUF_S2),
> +		},
>  	},
>  };

-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 04/20] drm/i915: Use a sentinel to terminate the dbuf slice arrays
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 04/20] drm/i915: Use a sentinel to terminate the dbuf slice arrays Ville Syrjala
@ 2020-02-26  9:32   ` Jani Nikula
  0 siblings, 0 replies; 55+ messages in thread
From: Jani Nikula @ 2020-02-26  9:32 UTC (permalink / raw)
  To: Ville Syrjala, intel-gfx

On Tue, 25 Feb 2020, Ville Syrjala <ville.syrjala@linux.intel.com> wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> Make life a bit simpler by sticking a sentinel at the end of
> the dbuf slice arrays. This way we don't need to pass in the
> size. Also unify the types (u8 vs. u32) for active_pipes.
>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Reviewed-by: Jani Nikula <jani.nikula@intel.com>
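
For readers unfamiliar with the idiom, here is a minimal self-contained sketch
of a sentinel-terminated lookup table of the kind the commit message describes
(illustrative types and names, not the driver's; the termination test is
spelled out explicitly here):

  struct cfg {
          unsigned char active_pipes;
          unsigned char dbuf_mask;
  };

  static const struct cfg table[] = {
          { .active_pipes = 0x1, .dbuf_mask = 0x1 },
          { .active_pipes = 0x3, .dbuf_mask = 0x3 },
          {} /* sentinel: the all-zero entry marks the end */
  };

  static unsigned char lookup(unsigned char active_pipes)
  {
          int i;

          /* no array size needed; stop at the zeroed sentinel */
          for (i = 0; table[i].active_pipes != 0; i++) {
                  if (table[i].active_pipes == active_pipes)
                          return table[i].dbuf_mask;
          }
          return 0;
  }

The upside of the sentinel is exactly what the commit message says: the walker
no longer needs ARRAY_SIZE() passed in alongside the table.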

> ---
>  drivers/gpu/drm/i915/intel_pm.c | 34 +++++++++++++--------------------
>  1 file changed, 13 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index abeb4b19071f..a2e78969c0df 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3843,7 +3843,7 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv)
>  }
>  
>  static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
> -				  u32 active_pipes);
> +				  u8 active_pipes);
>  
>  static void
>  skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
> @@ -4228,6 +4228,7 @@ static const struct dbuf_slice_conf_entry icl_allowed_dbufs[] =
>  			[PIPE_C] = BIT(DBUF_S2),
>  		},
>  	},
> +	{}
>  };
>  
>  /*
> @@ -4350,16 +4351,15 @@ static const struct dbuf_slice_conf_entry tgl_allowed_dbufs[] =
>  			[PIPE_D] = BIT(DBUF_S2),
>  		},
>  	},
> +	{}
>  };
>  
> -static u8 compute_dbuf_slices(enum pipe pipe,
> -			      u32 active_pipes,
> -			      const struct dbuf_slice_conf_entry *dbuf_slices,
> -			      int size)
> +static u8 compute_dbuf_slices(enum pipe pipe, u8 active_pipes,
> +			      const struct dbuf_slice_conf_entry *dbuf_slices)
>  {
>  	int i;
>  
> -	for (i = 0; i < size; i++) {
> +	for (i = 0; i < dbuf_slices[i].active_pipes; i++) {
>  		if (dbuf_slices[i].active_pipes == active_pipes)
>  			return dbuf_slices[i].dbuf_mask[pipe];
>  	}
> @@ -4371,8 +4371,7 @@ static u8 compute_dbuf_slices(enum pipe pipe,
>   * returns correspondent DBuf slice mask as stated in BSpec for particular
>   * platform.
>   */
> -static u32 icl_compute_dbuf_slices(enum pipe pipe,
> -				   u32 active_pipes)
> +static u8 icl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
>  {
>  	/*
>  	 * FIXME: For ICL this is still a bit unclear as prev BSpec revision
> @@ -4386,32 +4385,25 @@ static u32 icl_compute_dbuf_slices(enum pipe pipe,
>  	 * still here - we will need it once those additional constraints
>  	 * pop up.
>  	 */
> -	return compute_dbuf_slices(pipe, active_pipes,
> -				   icl_allowed_dbufs,
> -				   ARRAY_SIZE(icl_allowed_dbufs));
> +	return compute_dbuf_slices(pipe, active_pipes, icl_allowed_dbufs);
>  }
>  
> -static u32 tgl_compute_dbuf_slices(enum pipe pipe,
> -				   u32 active_pipes)
> +static u8 tgl_compute_dbuf_slices(enum pipe pipe, u8 active_pipes)
>  {
> -	return compute_dbuf_slices(pipe, active_pipes,
> -				   tgl_allowed_dbufs,
> -				   ARRAY_SIZE(tgl_allowed_dbufs));
> +	return compute_dbuf_slices(pipe, active_pipes, tgl_allowed_dbufs);
>  }
>  
>  static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
> -				  u32 active_pipes)
> +				  u8 active_pipes)
>  {
>  	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
>  	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
>  	enum pipe pipe = crtc->pipe;
>  
>  	if (IS_GEN(dev_priv, 12))
> -		return tgl_compute_dbuf_slices(pipe,
> -					       active_pipes);
> +		return tgl_compute_dbuf_slices(pipe, active_pipes);
>  	else if (IS_GEN(dev_priv, 11))
> -		return icl_compute_dbuf_slices(pipe,
> -					       active_pipes);
> +		return icl_compute_dbuf_slices(pipe, active_pipes);
>  	/*
>  	 * For anything else just return one slice yet.
>  	 * Should be extended for other platforms.

-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 11/20] drm/i915: Clean up dbuf debugs during .atomic_check()
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 11/20] drm/i915: Clean up dbuf debugs during .atomic_check() Ville Syrjala
@ 2020-02-26 11:32   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-26 11:32 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Combine the two per-pipe dbuf debugs into one, and use the canonical
> [CRTC:%d:%s] style to identify the crtc. Also use the same style as
> the plane code uses for the ddb start/end, and prefix bitmasks
> properly with 0x to make it clear they are in fact bitmasks.
> 
> The "how many total slices we are going to use" debug we move to
> outside the crtc loop so it gets printed only once at the end.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index de2822e5c62c..d2edfb820dd9 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3910,10 +3910,6 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  	 */
>  	dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state,
> active_pipes);
>  
> -	DRM_DEBUG_KMS("DBuf slice mask %x pipe %c active pipes %x\n",
> -		      dbuf_slice_mask,
> -		      pipe_name(for_pipe), active_pipes);
> -
>  	/*
>  	 * Figure out at which DBuf slice we start, i.e if we start at
> Dbuf S2
>  	 * and slice size is 1024, the offset would be 1024
> @@ -3996,8 +3992,10 @@ skl_ddb_get_pipe_allocation_limits(struct
> drm_i915_private *dev_priv,
>  	alloc->start = offset + start;
>  	alloc->end = offset + end;
>  
> -	DRM_DEBUG_KMS("Pipe %d ddb %d-%d\n", for_pipe,
> -		      alloc->start, alloc->end);
> +	drm_dbg_kms(&dev_priv->drm,
> +		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d),
> active pipes 0x%x\n",
> +		    for_crtc->base.id, for_crtc->name,
> +		    dbuf_slice_mask, alloc->start, alloc->end,
> active_pipes);
>  
>  	return 0;
>  }
> @@ -5457,7 +5455,10 @@ skl_ddb_add_affected_planes(const struct
> intel_crtc_state *old_crtc_state,
>  static int
>  skl_compute_ddb(struct intel_atomic_state *state)
>  {
> -	struct intel_crtc_state *old_crtc_state;
> +	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> +	const struct intel_dbuf_state *old_dbuf_state;
> +	const struct intel_dbuf_state *new_dbuf_state;
> +	const struct intel_crtc_state *old_crtc_state;
>  	struct intel_crtc_state *new_crtc_state;
>  	struct intel_crtc *crtc;
>  	int ret, i;
> @@ -5474,6 +5475,17 @@ skl_compute_ddb(struct intel_atomic_state
> *state)
>  			return ret;
>  	}
>  
> +	old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
> +	new_dbuf_state = intel_atomic_get_new_dbuf_state(state);
> +
> +	if (new_dbuf_state &&
> +	    new_dbuf_state->enabled_slices != old_dbuf_state-
> >enabled_slices)
> +		drm_dbg_kms(&dev_priv->drm,
> +			    "Enabled dbuf slices 0x%x -> 0x%x (out of
> %d dbuf slices)\n",
> +			    old_dbuf_state->enabled_slices,
> +			    new_dbuf_state->enabled_slices,
> +			    INTEL_INFO(dev_priv)-
> >num_supported_dbuf_slices);
> +
>  	return 0;
>  }
>  

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 10/20] drm/i915: Move the dbuf pre/post plane update
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 10/20] drm/i915: Move the dbuf pre/post plane update Ville Syrjala
@ 2020-02-26 11:38   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-26 11:38 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Encapsulate the dbuf state more by moving the pre/post
> plane functions out from intel_display.c. We stick them
> into intel_pm.c since that's where the rest of the code
> lives for now.
> 
> Eventually we should add a new file for this stuff at which
> point we also need to decide if it makes sense to even split
> the wm code from the ddb code, or to keep them together.

Yes, that definitely makes sense. Maybe we should one day
add a separate file for wm/ddb/dbuf management, because intel_pm.c
seems to me a bit _overloaded_ with functionality right now.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c | 41 +-----------------
> --
>  drivers/gpu/drm/i915/intel_pm.c              | 37 ++++++++++++++++++
>  drivers/gpu/drm/i915/intel_pm.h              |  2 +
>  3 files changed, 41 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> b/drivers/gpu/drm/i915/display/intel_display.c
> index 659b952c8e2f..6e96756f9a69 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -15291,43 +15291,6 @@ static void
> intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
>  				       state);
>  }
>  
> -static void icl_dbuf_slice_pre_update(struct intel_atomic_state
> *state)
> -{
> -	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	const struct intel_dbuf_state *new_dbuf_state =
> -		intel_atomic_get_new_dbuf_state(state);
> -	const struct intel_dbuf_state *old_dbuf_state =
> -		intel_atomic_get_old_dbuf_state(state);
> -
> -	if (!new_dbuf_state ||
> -	    new_dbuf_state->enabled_slices == old_dbuf_state-
> >enabled_slices)
> -		return;
> -
> -	WARN_ON(!new_dbuf_state->base.changed);
> -
> -	gen9_dbuf_slices_update(dev_priv,
> -				old_dbuf_state->enabled_slices |
> -				new_dbuf_state->enabled_slices);
> -}
> -
> -static void icl_dbuf_slice_post_update(struct intel_atomic_state
> *state)
> -{
> -	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	const struct intel_dbuf_state *new_dbuf_state =
> -		intel_atomic_get_new_dbuf_state(state);
> -	const struct intel_dbuf_state *old_dbuf_state =
> -		intel_atomic_get_old_dbuf_state(state);
> -
> -	if (!new_dbuf_state ||
> -	    new_dbuf_state->enabled_slices == old_dbuf_state-
> >enabled_slices)
> -		return;
> -
> -	WARN_ON(!new_dbuf_state->base.changed);
> -
> -	gen9_dbuf_slices_update(dev_priv,
> -				new_dbuf_state->enabled_slices);
> -}
> -
>  static void skl_commit_modeset_enables(struct intel_atomic_state
> *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> @@ -15580,7 +15543,7 @@ static void intel_atomic_commit_tail(struct
> intel_atomic_state *state)
>  	if (state->modeset)
>  		intel_encoders_update_prepare(state);
>  
> -	icl_dbuf_slice_pre_update(state);
> +	intel_dbuf_pre_plane_update(state);
>  
>  	/* Now enable the clocks, plane, pipe, and connectors that we
> set up. */
>  	dev_priv->display.commit_modeset_enables(state);
> @@ -15635,7 +15598,7 @@ static void intel_atomic_commit_tail(struct
> intel_atomic_state *state)
>  			dev_priv->display.optimize_watermarks(state,
> crtc);
>  	}
>  
> -	icl_dbuf_slice_post_update(state);
> +	intel_dbuf_post_plane_update(state);
>  
>  	for_each_oldnew_intel_crtc_in_state(state, crtc,
> old_crtc_state, new_crtc_state, i) {
>  		intel_post_plane_update(state, crtc);
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index 87f88ea6b7ae..de2822e5c62c 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -7553,3 +7553,40 @@ int intel_dbuf_init(struct drm_i915_private
> *dev_priv)
>  
>  	return 0;
>  }
> +
> +void intel_dbuf_pre_plane_update(struct intel_atomic_state *state)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> +	const struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(state);
> +
> +	if (!new_dbuf_state ||
> +	    new_dbuf_state->enabled_slices == old_dbuf_state-
> >enabled_slices)
> +		return;
> +
> +	WARN_ON(!new_dbuf_state->base.changed);
> +
> +	gen9_dbuf_slices_update(dev_priv,
> +				old_dbuf_state->enabled_slices |
> +				new_dbuf_state->enabled_slices);
> +}
> +
> +void intel_dbuf_post_plane_update(struct intel_atomic_state *state)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> +	const struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(state);
> +
> +	if (!new_dbuf_state ||
> +	    new_dbuf_state->enabled_slices == old_dbuf_state-
> >enabled_slices)
> +		return;
> +
> +	WARN_ON(!new_dbuf_state->base.changed);
> +
> +	gen9_dbuf_slices_update(dev_priv,
> +				new_dbuf_state->enabled_slices);
> +}
> diff --git a/drivers/gpu/drm/i915/intel_pm.h
> b/drivers/gpu/drm/i915/intel_pm.h
> index 1054a0ab1e40..8204d6a5526c 100644
> --- a/drivers/gpu/drm/i915/intel_pm.h
> +++ b/drivers/gpu/drm/i915/intel_pm.h
> @@ -79,5 +79,7 @@ intel_atomic_get_dbuf_state(struct
> intel_atomic_state *state);
>  	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state
> , &to_i915(state->base.dev)->dbuf.obj))
>  
>  int intel_dbuf_init(struct drm_i915_private *dev_priv);
> +void intel_dbuf_pre_plane_update(struct intel_atomic_state *state);
> +void intel_dbuf_post_plane_update(struct intel_atomic_state *state);
>  
>  #endif /* __INTEL_PM_H__ */

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 09/20] drm/i915: Nuke skl_ddb_get_hw_state()
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 09/20] drm/i915: Nuke skl_ddb_get_hw_state() Ville Syrjala
@ 2020-02-26 11:40   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-02-26 11:40 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> skl_ddb_get_hw_state() is redundant and kinda called in the wrong
> spot anyway. Just kill it.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 7 -------
>  drivers/gpu/drm/i915/intel_pm.h | 1 -
>  2 files changed, 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c
> b/drivers/gpu/drm/i915/intel_pm.c
> index d4730d9b4e1b..87f88ea6b7ae 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4117,12 +4117,6 @@ void skl_pipe_ddb_get_hw_state(struct
> intel_crtc *crtc,
>  	intel_display_power_put(dev_priv, power_domain, wakeref);
>  }
>  
> -void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv)
> -{
> -	dev_priv->dbuf.enabled_slices =
> -		intel_enabled_dbuf_slices_mask(dev_priv);
> -}
> -
>  /*
>   * Determines the downscale amount of a plane for the purposes of
> watermark calculations.
>   * The bspec defines downscale amount as:
> @@ -5910,7 +5904,6 @@ void skl_wm_get_hw_state(struct
> drm_i915_private *dev_priv)
>  	struct intel_crtc *crtc;
>  	struct intel_crtc_state *crtc_state;
>  
> -	skl_ddb_get_hw_state(dev_priv);
>  	for_each_intel_crtc(&dev_priv->drm, crtc) {
>  		crtc_state = to_intel_crtc_state(crtc->base.state);
>  
> diff --git a/drivers/gpu/drm/i915/intel_pm.h
> b/drivers/gpu/drm/i915/intel_pm.h
> index fadf7cbc44c4..1054a0ab1e40 100644
> --- a/drivers/gpu/drm/i915/intel_pm.h
> +++ b/drivers/gpu/drm/i915/intel_pm.h
> @@ -38,7 +38,6 @@ u8 intel_enabled_dbuf_slices_mask(struct
> drm_i915_private *dev_priv);
>  void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
>  			       struct skl_ddb_entry *ddb_y,
>  			       struct skl_ddb_entry *ddb_uv);
> -void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv);
>  void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
>  			      struct skl_pipe_wm *out);
>  void g4x_wm_sanitize(struct drm_i915_private *dev_priv);

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Proper dbuf global state (rev2)
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (19 preceding siblings ...)
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 20/20] drm/i915: Check slice mask for holes Ville Syrjala
@ 2020-02-26 18:04 ` Patchwork
  2020-02-27 20:21 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Proper dbuf global state (rev3) Patchwork
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 55+ messages in thread
From: Patchwork @ 2020-02-26 18:04 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Proper dbuf global state (rev2)
URL   : https://patchwork.freedesktop.org/series/73421/
State : failure

== Summary ==

Applying: drm/i915: Handle some leftover s/intel_crtc/crtc/
Applying: drm/i915: Remove garbage WARNs
Applying: drm/i915: Add missing commas to dbuf tables
Applying: drm/i915: Use a sentinel to terminate the dbuf slice arrays
Applying: drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
Applying: drm/i915: Polish some dbuf debugs
Applying: drm/i915: Unify the low level dbuf code
Applying: drm/i915: Introduce proper dbuf state
Applying: drm/i915: Nuke skl_ddb_get_hw_state()
Applying: drm/i915: Move the dbuf pre/post plane update
Applying: drm/i915: Clean up dbuf debugs during .atomic_check()
Applying: drm/i915: Extract intel_crtc_ddb_weight()
Applying: drm/i915: Pass the crtc to skl_compute_dbuf_slices()
Applying: drm/i915: Introduce intel_dbuf_slice_size()
Applying: drm/i915: Introduce skl_ddb_entry_for_slices()
Applying: drm/i915: Move pipe ddb entries into the dbuf state
error: sha1 information is lacking or useless (drivers/gpu/drm/i915/display/intel_display.c).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch' to see the failed patch
Patch failed at 0016 drm/i915: Move pipe ddb entries into the dbuf state
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [Intel-gfx] [PATCH v2 16/20] drm/i915: Move pipe ddb entries into the dbuf state
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 16/20] drm/i915: Move pipe ddb entries into the dbuf state Ville Syrjala
@ 2020-02-27 16:50   ` Ville Syrjala
  0 siblings, 0 replies; 55+ messages in thread
From: Ville Syrjala @ 2020-02-27 16:50 UTC (permalink / raw)
  To: intel-gfx

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

The dbuf state will be where we collect all the inter-pipe dbuf
allocation stuff. Start by moving the actual per-pipe ddb entries
there.

v2: Rebase

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  | 28 +++++++++++--------
 .../drm/i915/display/intel_display_types.h    |  1 -
 drivers/gpu/drm/i915/intel_pm.c               | 16 ++++-------
 drivers/gpu/drm/i915/intel_pm.h               |  4 +++
 4 files changed, 27 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index a185b9e25cc3..9c6b9cebe8b7 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15303,6 +15303,10 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 {
 	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
+	const struct intel_dbuf_state *old_dbuf_state =
+		intel_atomic_get_old_dbuf_state(state);
+	const struct intel_dbuf_state *new_dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
 	struct intel_crtc *crtc;
 	struct intel_crtc_state *old_crtc_state, *new_crtc_state;
 	struct skl_ddb_entry entries[I915_MAX_PIPES] = {};
@@ -15317,7 +15321,7 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 
 		/* ignore allocations for crtc's that have been turned off. */
 		if (!needs_modeset(new_crtc_state)) {
-			entries[pipe] = old_crtc_state->wm.skl.ddb;
+			entries[pipe] = old_dbuf_state->ddb[pipe];
 			update_pipes |= BIT(pipe);
 		} else {
 			modeset_pipes |= BIT(pipe);
@@ -15341,11 +15345,11 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 			if ((update_pipes & BIT(pipe)) == 0)
 				continue;
 
-			if (skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb,
+			if (skl_ddb_allocation_overlaps(&new_dbuf_state->ddb[pipe],
 							entries, I915_MAX_PIPES, pipe))
 				continue;
 
-			entries[pipe] = new_crtc_state->wm.skl.ddb;
+			entries[pipe] = new_dbuf_state->ddb[pipe];
 			update_pipes &= ~BIT(pipe);
 
 			intel_update_crtc(crtc, state, old_crtc_state,
@@ -15357,8 +15361,8 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 			 * then we need to wait for a vblank to pass for the
 			 * new ddb allocation to take effect.
 			 */
-			if (!skl_ddb_entry_equal(&new_crtc_state->wm.skl.ddb,
-						 &old_crtc_state->wm.skl.ddb) &&
+			if (!skl_ddb_entry_equal(&new_dbuf_state->ddb[pipe],
+						 &old_dbuf_state->ddb[pipe]) &&
 			    (update_pipes | modeset_pipes))
 				intel_wait_for_vblank(dev_priv, pipe);
 		}
@@ -15379,10 +15383,11 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 		    is_trans_port_sync_slave(new_crtc_state))
 			continue;
 
-		drm_WARN_ON(&dev_priv->drm, skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb,
-									entries, I915_MAX_PIPES, pipe));
+		drm_WARN_ON(&dev_priv->drm,
+			    skl_ddb_allocation_overlaps(&new_dbuf_state->ddb[pipe],
+							entries, I915_MAX_PIPES, pipe));
 
-		entries[pipe] = new_crtc_state->wm.skl.ddb;
+		entries[pipe] = new_dbuf_state->ddb[pipe];
 		modeset_pipes &= ~BIT(pipe);
 
 		if (is_trans_port_sync_mode(new_crtc_state)) {
@@ -15414,10 +15419,11 @@ static void skl_commit_modeset_enables(struct intel_atomic_state *state)
 		if ((modeset_pipes & BIT(pipe)) == 0)
 			continue;
 
-		drm_WARN_ON(&dev_priv->drm, skl_ddb_allocation_overlaps(&new_crtc_state->wm.skl.ddb,
-									entries, I915_MAX_PIPES, pipe));
+		drm_WARN_ON(&dev_priv->drm,
+			    skl_ddb_allocation_overlaps(&new_dbuf_state->ddb[pipe],
+							entries, I915_MAX_PIPES, pipe));
 
-		entries[pipe] = new_crtc_state->wm.skl.ddb;
+		entries[pipe] = new_dbuf_state->ddb[pipe];
 		modeset_pipes &= ~BIT(pipe);
 
 		intel_update_crtc(crtc, state, old_crtc_state, new_crtc_state);
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index b51aef2ec770..e35aae029cd4 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -703,7 +703,6 @@ struct intel_crtc_wm_state {
 		struct {
 			/* gen9+ only needs 1-step wm programming */
 			struct skl_pipe_wm optimal;
-			struct skl_ddb_entry ddb;
 			struct skl_ddb_entry plane_ddb_y[I915_MAX_PLANES];
 			struct skl_ddb_entry plane_ddb_uv[I915_MAX_PLANES];
 		} skl;
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 94847225c84f..b33d99a30116 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3911,16 +3911,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
 	 * grab _all_ crtc locks, including the one we currently hold.
 	 */
 	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
-	    !dev_priv->wm.distrust_bios_wm) {
-		/*
-		 * alloc may be cleared by clear_intel_crtc_state,
-		 * copy from old state to be sure
-		 *
-		 * FIXME get rid of this mess
-		 */
-		*alloc = to_intel_crtc_state(for_crtc->base.state)->wm.skl.ddb;
+	    !dev_priv->wm.distrust_bios_wm)
 		return 0;
-	}
 
 	/*
 	 * Get allowed DBuf slices for correspondent pipe and platform.
@@ -4528,7 +4520,11 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
-	struct skl_ddb_entry *alloc = &crtc_state->wm.skl.ddb;
+	struct intel_atomic_state *state =
+		to_intel_atomic_state(crtc_state->uapi.state);
+	struct intel_dbuf_state *dbuf_state =
+		intel_atomic_get_new_dbuf_state(state);
+	struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe];
 	u16 alloc_size, start = 0;
 	u16 total[I915_MAX_PLANES] = {};
 	u16 uv_total[I915_MAX_PLANES] = {};
diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
index 8204d6a5526c..d9f84d93280d 100644
--- a/drivers/gpu/drm/i915/intel_pm.h
+++ b/drivers/gpu/drm/i915/intel_pm.h
@@ -8,8 +8,10 @@
 
 #include <linux/types.h>
 
+#include "display/intel_display.h"
 #include "display/intel_global_state.h"
 
+#include "i915_drv.h"
 #include "i915_reg.h"
 
 struct drm_device;
@@ -63,6 +65,8 @@ bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable);
 struct intel_dbuf_state {
 	struct intel_global_state base;
 
+	struct skl_ddb_entry ddb[I915_MAX_PIPES];
+
 	u8 enabled_slices;
 	u8 active_pipes;
 };
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 55+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Proper dbuf global state (rev3)
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (20 preceding siblings ...)
  2020-02-26 18:04 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Proper dbuf global state (rev2) Patchwork
@ 2020-02-27 20:21 ` Patchwork
  2020-02-27 20:43 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
  2020-02-29  2:40 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  23 siblings, 0 replies; 55+ messages in thread
From: Patchwork @ 2020-02-27 20:21 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Proper dbuf global state (rev3)
URL   : https://patchwork.freedesktop.org/series/73421/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
3643e15727b1 drm/i915: Handle some leftover s/intel_crtc/crtc/
c2f2161d6505 drm/i915: Remove garbage WARNs
2786de40d65c drm/i915: Add missing commas to dbuf tables
73430ed2f955 drm/i915: Use a sentinel to terminate the dbuf slice arrays
047f693c4cfa drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
391eab3b022c drm/i915: Polish some dbuf debugs
a684f63e9f43 drm/i915: Unify the low level dbuf code
e3028c82388d drm/i915: Introduce proper dbuf state
-:175: CHECK:MULTIPLE_ASSIGNMENTS: multiple assignments should be avoided
#175: FILE: drivers/gpu/drm/i915/display/intel_display.c:18381:
+	dev_priv->active_pipes = cdclk_state->active_pipes =

-:623: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'state' - possible side-effects?
#623: FILE: drivers/gpu/drm/i915/intel_pm.h:77:
+#define intel_atomic_get_old_dbuf_state(state) \
+	to_intel_dbuf_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))

-:624: WARNING:LONG_LINE: line over 100 characters
#624: FILE: drivers/gpu/drm/i915/intel_pm.h:78:
+	to_intel_dbuf_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))

-:625: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'state' - possible side-effects?
#625: FILE: drivers/gpu/drm/i915/intel_pm.h:79:
+#define intel_atomic_get_new_dbuf_state(state) \
+	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))

-:626: WARNING:LONG_LINE: line over 100 characters
#626: FILE: drivers/gpu/drm/i915/intel_pm.h:80:
+	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))

total: 0 errors, 2 warnings, 3 checks, 555 lines checked
071b62e2fe20 drm/i915: Nuke skl_ddb_get_hw_state()
2b355bb11548 drm/i915: Move the dbuf pre/post plane update
35835c92bd7f drm/i915: Clean up dbuf debugs during .atomic_check()
7f9190b0dc0b drm/i915: Extract intel_crtc_ddb_weight()
45c49c9f3b73 drm/i915: Pass the crtc to skl_compute_dbuf_slices()
3f877cd02847 drm/i915: Introduce intel_dbuf_slice_size()
373ddcca596a drm/i915: Introduce skl_ddb_entry_for_slices()
aa3deb4ba4f5 drm/i915: Move pipe ddb entries into the dbuf state
80543eb34ce4 drm/i915: Extract intel_crtc_dbuf_weights()
-:137: WARNING:LINE_SPACING: Missing a blank line after declarations
#137: FILE: drivers/gpu/drm/i915/intel_pm.c:3960:
+				   struct skl_ddb_entry *alloc, /* out */
+				   int *num_active /* out */)

total: 0 errors, 1 warnings, 0 checks, 176 lines checked
5d4ec45f7e49 drm/i915: Encapsulate dbuf state handling harder
1fc35297be7f drm/i915: Do a bit more initial readout for dbuf
9cba9c99fb78 drm/i915: Check slice mask for holes
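
A short note on the MACRO_ARG_REUSE checks flagged above: checkpatch warns
whenever a macro expands one of its arguments more than once, because any side
effect in that argument would then run more than once. A minimal illustration
(made-up macro and helper, not the driver's code):

  static int counter;

  static int next_id(void)
  {
          return ++counter;
  }

  #define DOUBLE(x) ((x) + (x))

  /*
   * DOUBLE(next_id()) expands to ((next_id()) + (next_id())),
   * so the side effect runs twice; this is what the
   * MACRO_ARG_REUSE check guards against.
   */
  static int example(void)
  {
          return DOUBLE(next_id());
  }

The intel_atomic_get_old/new_dbuf_state() macros flagged here reuse their
'state' argument in the same way; the callers visible in this series pass a
plain 'state' pointer, so the double expansion is harmless there.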


^ permalink raw reply	[flat|nested] 55+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915: Proper dbuf global state (rev3)
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (21 preceding siblings ...)
  2020-02-27 20:21 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Proper dbuf global state (rev3) Patchwork
@ 2020-02-27 20:43 ` Patchwork
  2020-02-29  2:40 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  23 siblings, 0 replies; 55+ messages in thread
From: Patchwork @ 2020-02-27 20:43 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Proper dbuf global state (rev3)
URL   : https://patchwork.freedesktop.org/series/73421/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8021 -> Patchwork_16739
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/index.html

Known issues
------------

  Here are the changes found in Patchwork_16739 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_chamelium@hdmi-edid-read:
    - fi-icl-u2:          [PASS][1] -> [FAIL][2] ([i915#217])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/fi-icl-u2/igt@kms_chamelium@hdmi-edid-read.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/fi-icl-u2/igt@kms_chamelium@hdmi-edid-read.html

  
  [i915#217]: https://gitlab.freedesktop.org/drm/intel/issues/217


Participating hosts (49 -> 41)
------------------------------

  Additional (2): fi-skl-lmem fi-tgl-dsi 
  Missing    (10): fi-ilk-m540 fi-tgl-u fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-ctg-p8600 fi-kbl-8809g fi-icl-y fi-byt-clapper fi-bdw-samus 


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8021 -> Patchwork_16739

  CI-20190529: 20190529
  CI_DRM_8021: 98e43281da271731d056080d696c143ca7e07e35 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5473: d22b3507ff2678a05d69d47c0ddf6f0e72ee7ffd @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_16739: 9cba9c99fb782757a79cbec8dd06e5a207d1220e @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

9cba9c99fb78 drm/i915: Check slice mask for holes
1fc35297be7f drm/i915: Do a bit more initial readout for dbuf
5d4ec45f7e49 drm/i915: Encapsulate dbuf state handling harder
80543eb34ce4 drm/i915: Extract intel_crtc_dbuf_weights()
aa3deb4ba4f5 drm/i915: Move pipe ddb entries into the dbuf state
373ddcca596a drm/i915: Introduce skl_ddb_entry_for_slices()
3f877cd02847 drm/i915: Introduce intel_dbuf_slice_size()
45c49c9f3b73 drm/i915: Pass the crtc to skl_compute_dbuf_slices()
7f9190b0dc0b drm/i915: Extract intel_crtc_ddb_weight()
35835c92bd7f drm/i915: Clean up dbuf debugs during .atomic_check()
2b355bb11548 drm/i915: Move the dbuf pre/post plane update
071b62e2fe20 drm/i915: Nuke skl_ddb_get_hw_state()
e3028c82388d drm/i915: Introduce proper dbuf state
a684f63e9f43 drm/i915: Unify the low level dbuf code
391eab3b022c drm/i915: Polish some dbuf debugs
047f693c4cfa drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
73430ed2f955 drm/i915: Use a sentinel to terminate the dbuf slice arrays
2786de40d65c drm/i915: Add missing commas to dbuf tables
c2f2161d6505 drm/i915: Remove garbage WARNs
3643e15727b1 drm/i915: Handle some leftover s/intel_crtc/crtc/

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/index.html

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/i915: Proper dbuf global state (rev3)
  2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
                   ` (22 preceding siblings ...)
  2020-02-27 20:43 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
@ 2020-02-29  2:40 ` Patchwork
  23 siblings, 0 replies; 55+ messages in thread
From: Patchwork @ 2020-02-29  2:40 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

== Series Details ==

Series: drm/i915: Proper dbuf global state (rev3)
URL   : https://patchwork.freedesktop.org/series/73421/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_8021_full -> Patchwork_16739_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

New tests
---------

  New tests have been introduced between CI_DRM_8021_full and Patchwork_16739_full:

### New IGT tests (1) ###

  * igt@i915_selftest@mock:
    - Statuses :
    - Exec time: [None] s

  

Known issues
------------

  Here are the changes found in Patchwork_16739_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_persistence@engines-mixed-process@bcs0:
    - shard-skl:          [PASS][1] -> [INCOMPLETE][2] ([i915#1197] / [i915#1239])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl3/igt@gem_ctx_persistence@engines-mixed-process@bcs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl10/igt@gem_ctx_persistence@engines-mixed-process@bcs0.html

  * igt@gem_ctx_persistence@engines-mixed-process@rcs0:
    - shard-skl:          [PASS][3] -> [FAIL][4] ([i915#679])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl3/igt@gem_ctx_persistence@engines-mixed-process@rcs0.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl10/igt@gem_ctx_persistence@engines-mixed-process@rcs0.html

  * igt@gem_ctx_shared@exec-single-timeline-bsd:
    - shard-iclb:         [PASS][5] -> [SKIP][6] ([fdo#110841])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb3/igt@gem_ctx_shared@exec-single-timeline-bsd.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb2/igt@gem_ctx_shared@exec-single-timeline-bsd.html

  * igt@gem_exec_balancer@smoke:
    - shard-iclb:         [PASS][7] -> [SKIP][8] ([fdo#110854])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb4/igt@gem_exec_balancer@smoke.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb6/igt@gem_exec_balancer@smoke.html

  * igt@gem_exec_parallel@vcs1:
    - shard-iclb:         [PASS][9] -> [SKIP][10] ([fdo#112080]) +7 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb1/igt@gem_exec_parallel@vcs1.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb3/igt@gem_exec_parallel@vcs1.html

  * igt@gem_exec_schedule@implicit-read-write-bsd2:
    - shard-iclb:         [PASS][11] -> [SKIP][12] ([fdo#109276] / [i915#677])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb1/igt@gem_exec_schedule@implicit-read-write-bsd2.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb3/igt@gem_exec_schedule@implicit-read-write-bsd2.html

  * igt@gem_exec_schedule@pi-shared-iova-bsd:
    - shard-iclb:         [PASS][13] -> [SKIP][14] ([i915#677]) +1 similar issue
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb6/igt@gem_exec_schedule@pi-shared-iova-bsd.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@gem_exec_schedule@pi-shared-iova-bsd.html

  * igt@gem_exec_schedule@preempt-queue-chain-bsd2:
    - shard-iclb:         [PASS][15] -> [SKIP][16] ([fdo#109276]) +10 similar issues
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb4/igt@gem_exec_schedule@preempt-queue-chain-bsd2.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb7/igt@gem_exec_schedule@preempt-queue-chain-bsd2.html

  * igt@gem_exec_schedule@reorder-wide-bsd:
    - shard-iclb:         [PASS][17] -> [SKIP][18] ([fdo#112146]) +1 similar issue
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb5/igt@gem_exec_schedule@reorder-wide-bsd.html
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@gem_exec_schedule@reorder-wide-bsd.html

  * igt@gem_exec_whisper@basic-queues-forked:
    - shard-glk:          [PASS][19] -> [INCOMPLETE][20] ([i915#58] / [k.org#198133])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-glk3/igt@gem_exec_whisper@basic-queues-forked.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-glk5/igt@gem_exec_whisper@basic-queues-forked.html

  * igt@gem_ppgtt@flink-and-close-vma-leak:
    - shard-glk:          [PASS][21] -> [FAIL][22] ([i915#644])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-glk5/igt@gem_ppgtt@flink-and-close-vma-leak.html
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-glk4/igt@gem_ppgtt@flink-and-close-vma-leak.html

  * igt@i915_pm_rps@waitboost:
    - shard-iclb:         [PASS][23] -> [FAIL][24] ([i915#413])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb2/igt@i915_pm_rps@waitboost.html
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb2/igt@i915_pm_rps@waitboost.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [PASS][25] -> [FAIL][26] ([i915#1188])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl9/igt@kms_hdr@bpc-switch-suspend.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl9/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b:
    - shard-kbl:          [PASS][27] -> [DMESG-WARN][28] ([i915#180])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl1/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-b.html

  * igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes:
    - shard-skl:          [PASS][29] -> [INCOMPLETE][30] ([i915#69])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl5/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl2/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-c-planes.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
    - shard-skl:          [PASS][31] -> [FAIL][32] ([fdo#108145]) +1 similar issue
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl5/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl3/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [PASS][33] -> [FAIL][34] ([fdo#108145] / [i915#265])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_psr@no_drrs:
    - shard-iclb:         [PASS][35] -> [FAIL][36] ([i915#173])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb8/igt@kms_psr@no_drrs.html
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@kms_psr@no_drrs.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [PASS][37] -> [SKIP][38] ([fdo#109441])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb2/igt@kms_psr@psr2_no_drrs.html
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb4/igt@kms_psr@psr2_no_drrs.html

  * igt@kms_vblank@pipe-a-ts-continuation-suspend:
    - shard-apl:          [PASS][39] -> [DMESG-WARN][40] ([i915#180]) +1 similar issue
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-apl3/igt@kms_vblank@pipe-a-ts-continuation-suspend.html
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-apl1/igt@kms_vblank@pipe-a-ts-continuation-suspend.html

  * igt@perf@short-reads:
    - shard-glk:          [PASS][41] -> [FAIL][42] ([i915#51])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-glk9/igt@perf@short-reads.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-glk5/igt@perf@short-reads.html

  
#### Possible fixes ####

  * igt@gem_exec_balancer@hang:
    - shard-tglb:         [FAIL][43] ([i915#1277]) -> [PASS][44]
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-tglb8/igt@gem_exec_balancer@hang.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-tglb2/igt@gem_exec_balancer@hang.html

  * igt@gem_exec_schedule@fifo-bsd1:
    - shard-iclb:         [SKIP][45] ([fdo#109276]) -> [PASS][46] +19 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb8/igt@gem_exec_schedule@fifo-bsd1.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@gem_exec_schedule@fifo-bsd1.html

  * igt@gem_exec_schedule@implicit-read-write-bsd1:
    - shard-iclb:         [SKIP][47] ([fdo#109276] / [i915#677]) -> [PASS][48] +2 similar issues
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb8/igt@gem_exec_schedule@implicit-read-write-bsd1.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@gem_exec_schedule@implicit-read-write-bsd1.html

  * igt@gem_exec_schedule@pi-common-bsd:
    - shard-iclb:         [SKIP][49] ([i915#677]) -> [PASS][50]
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb1/igt@gem_exec_schedule@pi-common-bsd.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb3/igt@gem_exec_schedule@pi-common-bsd.html

  * igt@gem_exec_schedule@preemptive-hang-bsd:
    - shard-iclb:         [SKIP][51] ([fdo#112146]) -> [PASS][52] +7 similar issues
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb1/igt@gem_exec_schedule@preemptive-hang-bsd.html
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb3/igt@gem_exec_schedule@preemptive-hang-bsd.html

  * igt@gem_mmap_offset@clear:
    - shard-kbl:          [SKIP][53] ([fdo#109271]) -> [PASS][54] +7 similar issues
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl3/igt@gem_mmap_offset@clear.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl4/igt@gem_mmap_offset@clear.html

  * igt@i915_pm_rpm@modeset-lpsp-stress:
    - shard-iclb:         [INCOMPLETE][55] ([i915#189]) -> [PASS][56]
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb2/igt@i915_pm_rpm@modeset-lpsp-stress.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@i915_pm_rpm@modeset-lpsp-stress.html

  * igt@i915_selftest@live@gem_contexts:
    - shard-kbl:          [INCOMPLETE][57] ([fdo#103665] / [i915#504]) -> [PASS][58]
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl3/igt@i915_selftest@live@gem_contexts.html
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl4/igt@i915_selftest@live@gem_contexts.html

  * igt@i915_selftest@live@gtt:
    - shard-kbl:          [DMESG-FAIL][59] -> [PASS][60]
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl3/igt@i915_selftest@live@gtt.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl4/igt@i915_selftest@live@gtt.html

  * igt@kms_cursor_crc@pipe-b-cursor-suspend:
    - shard-skl:          [INCOMPLETE][61] ([i915#300]) -> [PASS][62]
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl3/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl9/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
    - shard-apl:          [DMESG-WARN][63] ([i915#180]) -> [PASS][64] +4 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-apl4/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-apl3/igt@kms_cursor_crc@pipe-b-cursor-suspend.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
    - shard-hsw:          [SKIP][65] ([fdo#109271]) -> [PASS][66]
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-hsw1/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-hsw7/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-kbl:          [DMESG-WARN][67] ([i915#180]) -> [PASS][68] +2 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl7/igt@kms_flip@flip-vs-suspend-interruptible.html
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl3/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_hdr@bpc-switch:
    - shard-skl:          [FAIL][69] ([i915#1188]) -> [PASS][70]
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl9/igt@kms_hdr@bpc-switch.html
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl9/igt@kms_hdr@bpc-switch.html

  * igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min:
    - shard-skl:          [FAIL][71] ([fdo#108145]) -> [PASS][72]
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-skl6/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-skl1/igt@kms_plane_alpha_blend@pipe-b-constant-alpha-min.html

  * igt@kms_psr@psr2_cursor_render:
    - shard-iclb:         [SKIP][73] ([fdo#109441]) -> [PASS][74] +1 similar issue
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb4/igt@kms_psr@psr2_cursor_render.html
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb2/igt@kms_psr@psr2_cursor_render.html

  * igt@kms_setmode@basic:
    - shard-apl:          [FAIL][75] ([i915#31]) -> [PASS][76]
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-apl6/igt@kms_setmode@basic.html
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-apl6/igt@kms_setmode@basic.html
    - shard-hsw:          [FAIL][77] ([i915#31]) -> [PASS][78]
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-hsw1/igt@kms_setmode@basic.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-hsw7/igt@kms_setmode@basic.html

  * igt@perf_pmu@busy-accuracy-50-vcs1:
    - shard-kbl:          [SKIP][79] ([fdo#109271] / [fdo#112080]) -> [PASS][80]
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl3/igt@perf_pmu@busy-accuracy-50-vcs1.html
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl4/igt@perf_pmu@busy-accuracy-50-vcs1.html

  * igt@perf_pmu@busy-no-semaphores-vcs1:
    - shard-iclb:         [SKIP][81] ([fdo#112080]) -> [PASS][82] +12 similar issues
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb7/igt@perf_pmu@busy-no-semaphores-vcs1.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb4/igt@perf_pmu@busy-no-semaphores-vcs1.html

  
#### Warnings ####

  * igt@gem_ctx_isolation@vcs1-nonpriv:
    - shard-iclb:         [SKIP][83] ([fdo#112080]) -> [FAIL][84] ([IGT#28]) +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-iclb8/igt@gem_ctx_isolation@vcs1-nonpriv.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-iclb1/igt@gem_ctx_isolation@vcs1-nonpriv.html

  * igt@gen9_exec_parse@allowed-all:
    - shard-glk:          [INCOMPLETE][85] ([i915#58] / [k.org#198133]) -> [DMESG-WARN][86] ([i915#716])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-glk2/igt@gen9_exec_parse@allowed-all.html
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-glk1/igt@gen9_exec_parse@allowed-all.html

  * igt@i915_selftest@live@gt_lrc:
    - shard-tglb:         [DMESG-FAIL][87] ([i915#1233]) -> [INCOMPLETE][88] ([i915#1233])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-tglb3/igt@i915_selftest@live@gt_lrc.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-tglb7/igt@i915_selftest@live@gt_lrc.html

  * igt@kms_big_fb@x-tiled-32bpp-rotate-0:
    - shard-hsw:          [DMESG-WARN][89] -> [DMESG-WARN][90] ([i915#478])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-hsw1/igt@kms_big_fb@x-tiled-32bpp-rotate-0.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-hsw4/igt@kms_big_fb@x-tiled-32bpp-rotate-0.html

  * igt@kms_chamelium@vga-hpd-after-suspend:
    - shard-kbl:          [SKIP][91] ([fdo#109271]) -> [SKIP][92] ([fdo#109271] / [fdo#111827]) +1 similar issue
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8021/shard-kbl3/igt@kms_chamelium@vga-hpd-after-suspend.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/shard-kbl4/igt@kms_chamelium@vga-hpd-after-suspend.html

  
  [IGT#28]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/28
  [fdo#103665]: https://bugs.freedesktop.org/show_bug.cgi?id=103665
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#110841]: https://bugs.freedesktop.org/show_bug.cgi?id=110841
  [fdo#110854]: https://bugs.freedesktop.org/show_bug.cgi?id=110854
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112080]: https://bugs.freedesktop.org/show_bug.cgi?id=112080
  [fdo#112146]: https://bugs.freedesktop.org/show_bug.cgi?id=112146
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#1197]: https://gitlab.freedesktop.org/drm/intel/issues/1197
  [i915#1233]: https://gitlab.freedesktop.org/drm/intel/issues/1233
  [i915#1239]: https://gitlab.freedesktop.org/drm/intel/issues/1239
  [i915#1277]: https://gitlab.freedesktop.org/drm/intel/issues/1277
  [i915#173]: https://gitlab.freedesktop.org/drm/intel/issues/173
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#189]: https://gitlab.freedesktop.org/drm/intel/issues/189
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#300]: https://gitlab.freedesktop.org/drm/intel/issues/300
  [i915#31]: https://gitlab.freedesktop.org/drm/intel/issues/31
  [i915#413]: https://gitlab.freedesktop.org/drm/intel/issues/413
  [i915#478]: https://gitlab.freedesktop.org/drm/intel/issues/478
  [i915#504]: https://gitlab.freedesktop.org/drm/intel/issues/504
  [i915#51]: https://gitlab.freedesktop.org/drm/intel/issues/51
  [i915#58]: https://gitlab.freedesktop.org/drm/intel/issues/58
  [i915#644]: https://gitlab.freedesktop.org/drm/intel/issues/644
  [i915#677]: https://gitlab.freedesktop.org/drm/intel/issues/677
  [i915#679]: https://gitlab.freedesktop.org/drm/intel/issues/679
  [i915#69]: https://gitlab.freedesktop.org/drm/intel/issues/69
  [i915#716]: https://gitlab.freedesktop.org/drm/intel/issues/716
  [k.org#198133]: https://bugzilla.kernel.org/show_bug.cgi?id=198133


Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts


Build changes
-------------

  * CI: CI-20190529 -> None
  * Linux: CI_DRM_8021 -> Patchwork_16739

  CI-20190529: 20190529
  CI_DRM_8021: 98e43281da271731d056080d696c143ca7e07e35 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_5473: d22b3507ff2678a05d69d47c0ddf6f0e72ee7ffd @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
  Patchwork_16739: 9cba9c99fb782757a79cbec8dd06e5a207d1220e @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_16739/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
  2020-02-25 17:30   ` Lisovskiy, Stanislav
@ 2020-03-02 14:50     ` Ville Syrjälä
  2020-03-02 15:50       ` Lisovskiy, Stanislav
  2020-04-01  7:52       ` Lisovskiy, Stanislav
  0 siblings, 2 replies; 55+ messages in thread
From: Ville Syrjälä @ 2020-03-02 14:50 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Tue, Feb 25, 2020 at 05:30:57PM +0000, Lisovskiy, Stanislav wrote:
> On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > Currently skl_compute_dbuf_slices() returns 0 for any inactive pipe
> > on
> > icl+, but returns BIT(S1) on pre-icl for any pipe (whether it's
> > active or
> > not). Let's make the behaviour consistent and always return 0 for any
> > inactive pipe.
> > 
> > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> >  drivers/gpu/drm/i915/intel_pm.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c
> > b/drivers/gpu/drm/i915/intel_pm.c
> > index a2e78969c0df..640f4c4fd508 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -4408,7 +4408,7 @@ static u8 skl_compute_dbuf_slices(const struct
> > intel_crtc_state *crtc_state,
> >  	 * For anything else just return one slice yet.
> >  	 * Should be extended for other platforms.
> >  	 */
> > -	return BIT(DBUF_S1);
> > +	return active_pipes & BIT(pipe) ? BIT(DBUF_S1) : 0;
> 
> I think the initial idea was this won't be even called if there 
> are no active pipes at all - skl_ddb_get_pipe_allocation_limits would
> bail out immediately. If there were some active pipes - then we will
> have to use slice S1 anyway - because there were simply no other slices
> available. If some pipes were inactive - they are currently skipped by
> !crtc_state->hw.active check - so I would just keep it simple and don't
> call this function for non-active pipes at all.

That's just going to make the caller more messy by forcing it to
check for active_pipes 0 vs. not. Ie. we'd be splitting the
responsibility of computing the dbuf slices for this pipe between
skl_compute_dbuf_slices() and its caller. Not a good idea IMO.
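A minimal standalone sketch of the consistent behaviour being argued
for here - this only models the logic, it is not the driver code, and
plain unsigned ints stand in for the driver's enums and pipe masks:

    enum dbuf_slice { DBUF_S1, DBUF_S2 };

    /* active pipes get slice S1, inactive pipes get nothing */
    static unsigned int compute_dbuf_slices(unsigned int pipe,
                                            unsigned int active_pipes)
    {
            return (active_pipes & (1u << pipe)) ? (1u << DBUF_S1) : 0;
    }

    /* the caller can then treat every pipe the same way */
    static unsigned int compute_dbuf_union(unsigned int active_pipes,
                                           unsigned int num_pipes)
    {
            unsigned int pipe, slices = 0;

            for (pipe = 0; pipe < num_pipes; pipe++)
                    slices |= compute_dbuf_slices(pipe, active_pipes);

            return slices;
    }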

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
  2020-03-02 14:50     ` Ville Syrjälä
@ 2020-03-02 15:50       ` Lisovskiy, Stanislav
  2020-04-01  7:52       ` Lisovskiy, Stanislav
  1 sibling, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-02 15:50 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Mon, 2020-03-02 at 16:50 +0200, Ville Syrjälä wrote:
> On Tue, Feb 25, 2020 at 05:30:57PM +0000, Lisovskiy, Stanislav wrote:
> > On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > 
> > > Currently skl_compute_dbuf_slices() returns 0 for any inactive
> > > pipe
> > > on
> > > icl+, but returns BIT(S1) on pre-icl for any pipe (whether it's
> > > active or
> > > not). Let's make the behaviour consistent and always return 0 for
> > > any
> > > inactive pipe.
> > > 
> > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/intel_pm.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/intel_pm.c
> > > b/drivers/gpu/drm/i915/intel_pm.c
> > > index a2e78969c0df..640f4c4fd508 100644
> > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > @@ -4408,7 +4408,7 @@ static u8 skl_compute_dbuf_slices(const
> > > struct
> > > intel_crtc_state *crtc_state,
> > >  	 * For anything else just return one slice yet.
> > >  	 * Should be extended for other platforms.
> > >  	 */
> > > -	return BIT(DBUF_S1);
> > > +	return active_pipes & BIT(pipe) ? BIT(DBUF_S1) : 0;
> > 
> > I think the initial idea was this won't be even called if there 
> > are no active pipes at all - skl_ddb_get_pipe_allocation_limits
> > would
> > bail out immediately. If there were some active pipes - then we
> > will
> > have to use slice S1 anyway - because there were simply no other
> > slices
> > available. If some pipes were inactive - they are currently skipped
> > by
> > !crtc_state->hw.active check - so I would just keep it simple and
> > don't
> > call this function for non-active pipes at all.
> 
> That's just going to make the caller more messy by forcing it to
> check for active_pipes 0 vs. not. Ie. we'd be splitting the
> responsibility of computing the dbuf slices for this pipe between
> skl_compute_dbuf_slices() and its caller. Not a good idea IMO.

Well, in that sense I agree. It is just that currently this check is
there anyway when you get the ddb allocation limits.

Could it actually be even nicer to add one more very simple table for
everything "before-gen11"? We would then have it all in a quite
unified-looking way.
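A rough sketch of what such a table could look like, assuming the same
dbuf_slice_conf_entry layout the icl/tgl tables use - purely
hypothetical, nothing like this exists in the series:

    static const struct dbuf_slice_conf_entry skl_allowed_dbufs[] = {
            {
                    .active_pipes = BIT(PIPE_A),
                    .dbuf_mask = { [PIPE_A] = BIT(DBUF_S1) },
            },
            {
                    .active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
                    .dbuf_mask = {
                            [PIPE_A] = BIT(DBUF_S1),
                            [PIPE_B] = BIT(DBUF_S1),
                    },
            },
            /* ... remaining pipe combinations, all using DBUF_S1 ... */
            {}
    };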

Stan


> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs Ville Syrjala
@ 2020-03-04 16:29   ` Lisovskiy, Stanislav
  2020-03-04 18:26     ` Ville Syrjälä
  0 siblings, 1 reply; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-04 16:29 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Polish some of the dbuf code to give more meaningful debug
> messages and whatnot. Also we can switch over to the per-device
> debugs/warns at the same time.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  .../drm/i915/display/intel_display_power.c    | 40 +++++++++------
> ----
>  1 file changed, 19 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> b/drivers/gpu/drm/i915/display/intel_display_power.c
> index 6e25a1317161..e81e561e8ac0 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> @@ -4433,11 +4433,12 @@ static void
> intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
>  	mutex_unlock(&power_domains->lock);
>  }
>  
> -static inline
> -bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> -			  i915_reg_t reg, bool enable)
> +static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> +				 enum dbuf_slice slice, bool enable)
>  {
> -	u32 val, status;
> +	i915_reg_t reg = DBUF_CTL_S(slice);
> +	bool state;
> +	u32 val;
>  
>  	val = intel_de_read(dev_priv, reg);
>  	val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> ~DBUF_POWER_REQUEST);
> @@ -4445,13 +4446,10 @@ bool intel_dbuf_slice_set(struct
> drm_i915_private *dev_priv,
>  	intel_de_posting_read(dev_priv, reg);
>  	udelay(10);
>  
> -	status = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> -	if ((enable && !status) || (!enable && status)) {
> -		drm_err(&dev_priv->drm, "DBus power %s timeout!\n",
> -			enable ? "enable" : "disable");
> -		return false;
> -	}
> -	return true;
> +	state = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> +	drm_WARN(&dev_priv->drm, enable != state,
> +		 "DBuf slice %d power %s timeout!\n",
> +		 slice, enable ? "enable" : "disable");
>  }
>  
>  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> @@ -4467,14 +4465,16 @@ static void gen9_dbuf_disable(struct
> drm_i915_private *dev_priv)
>  void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
>  			    u8 req_slices)
>  {
> -	int i;
> -	int max_slices = INTEL_INFO(dev_priv)-
> >num_supported_dbuf_slices;
> +	int num_slices = INTEL_INFO(dev_priv)-
> >num_supported_dbuf_slices;
>  	struct i915_power_domains *power_domains = &dev_priv-
> >power_domains;
> +	enum dbuf_slice slice;
>  
> -	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
> -		 "Invalid number of dbuf slices requested\n");
> +	drm_WARN(&dev_priv->drm, req_slices & ~(BIT(num_slices) - 1),
> +		 "Invalid set of dbuf slices (0x%x) requested (num dbuf
> slices %d)\n",
> +		 req_slices, num_slices);
>  
> -	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
> +	drm_dbg_kms(&dev_priv->drm,
> +		    "Updating dbuf slices to 0x%x\n", req_slices);
>  
>  	/*
>  	 * Might be running this in parallel to
> gen9_dc_off_power_well_enable
> @@ -4485,11 +4485,9 @@ void icl_dbuf_slices_update(struct
> drm_i915_private *dev_priv,
>  	 */
>  	mutex_lock(&power_domains->lock);
>  
> -	for (i = 0; i < max_slices; i++) {
> -		intel_dbuf_slice_set(dev_priv,
> -				     DBUF_CTL_S(i),
> -				     (req_slices & BIT(i)) != 0);
> -	}
> +	for (slice = DBUF_S1; slice < num_slices; slice++)
> +		intel_dbuf_slice_set(dev_priv, slice,
> +				     req_slices & BIT(slice));

It would be cool to completely get rid of any magic numbers or
definitions; 0 is in a sense more universal here than DBUF_S1.

If we are counting slices as plain numbers, it seems logical to iterate
over the [0..num_slices) range. If you want to name the first slice
explicitly, then it probably has to be something like iterator
logic, i.e. for (slice = FIRST_SLICE; slice != LAST_SLICE; slice++).

But naming the first slice while at the same time comparing against the
total _amount_ of slices looks a bit confusing.
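One possible shape of that iterator logic, purely as an illustration
(no such macro exists in the driver at this point):

    #define for_each_dbuf_slice(dev_priv, slice) \
            for ((slice) = DBUF_S1; \
                 (slice) < INTEL_INFO(dev_priv)->num_supported_dbuf_slices; \
                 (slice)++)

    enum dbuf_slice slice;

    for_each_dbuf_slice(dev_priv, slice)
            intel_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));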

Stan

>  
>  	dev_priv->enabled_dbuf_slices_mask = req_slices;
>  
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code Ville Syrjala
@ 2020-03-04 17:14   ` Lisovskiy, Stanislav
  2020-03-04 17:23   ` Lisovskiy, Stanislav
  2020-03-05  8:46   ` Lisovskiy, Stanislav
  2 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-04 17:14 UTC (permalink / raw)
  To: Ville Syrjala, intel-gfx



>-       /* If 2nd DBuf slice required, enable it here */
>        if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
>-               icl_dbuf_slices_update(dev_priv, slices_union);
>+               gen9_dbuf_slices_update(dev_priv, slices_union);
>}

> static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
>@@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
>        u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
>        u8 required_slices = state->enabled_dbuf_slices_mask;

>-       /* If 2nd DBuf slice is no more required disable it */
>         if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
>-               icl_dbuf_slices_update(dev_priv, required_slices);
>+               gen9_dbuf_slices_update(dev_priv, required_slices);


Doesn't make much sense. Just look - previously we were checking if INTEL_GEN is >= 11 (which _is_ ICL),
and now we still _do_ check if INTEL_GEN is >= 11, but... now call a function renamed to gen9.

I guess you either need to change the INTEL_GEN check to >= 9 to at least look somewhat consistent,
or leave it as is. Or at least rename the icl_ prefix to gen11_, otherwise it looks inconsistent, i.e.
you are now checking that gen is >= 11 and then, OK - now let's call gen 9! :)


Stan
________________________________
From: Ville Syrjala <ville.syrjala@linux.intel.com>
Sent: Tuesday, February 25, 2020 7:11:12 PM
To: intel-gfx@lists.freedesktop.org
Cc: Lisovskiy, Stanislav
Subject: [PATCH v2 07/20] drm/i915: Unify the low level dbuf code

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

The low level dbuf slice code is rather inconsistent with its
function naming and organization. Make it more consistent.

Also share the enable/disable functions between all platforms
since the same code works just fine for all of them.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
 .../drm/i915/display/intel_display_power.c    | 44 ++++++++-----------
 .../drm/i915/display/intel_display_power.h    |  6 +--
 3 files changed, 24 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 3031e64ee518..6952c398cc43 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15296,9 +15296,8 @@ static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
         u8 required_slices = state->enabled_dbuf_slices_mask;
         u8 slices_union = hw_enabled_slices | required_slices;

-       /* If 2nd DBuf slice required, enable it here */
         if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
-               icl_dbuf_slices_update(dev_priv, slices_union);
+               gen9_dbuf_slices_update(dev_priv, slices_union);
 }

 static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
@@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
         u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
         u8 required_slices = state->enabled_dbuf_slices_mask;

-       /* If 2nd DBuf slice is no more required disable it */
         if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
-               icl_dbuf_slices_update(dev_priv, required_slices);
+               gen9_dbuf_slices_update(dev_priv, required_slices);
 }

 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index e81e561e8ac0..ce3bbc4c7a27 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -4433,15 +4433,18 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
         mutex_unlock(&power_domains->lock);
 }

-static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
-                                enum dbuf_slice slice, bool enable)
+static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
+                               enum dbuf_slice slice, bool enable)
 {
         i915_reg_t reg = DBUF_CTL_S(slice);
         bool state;
         u32 val;

         val = intel_de_read(dev_priv, reg);
-       val = enable ? (val | DBUF_POWER_REQUEST) : (val & ~DBUF_POWER_REQUEST);
+       if (enable)
+               val |= DBUF_POWER_REQUEST;
+       else
+               val &= ~DBUF_POWER_REQUEST;
         intel_de_write(dev_priv, reg, val);
         intel_de_posting_read(dev_priv, reg);
         udelay(10);
@@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
                  slice, enable ? "enable" : "disable");
 }

-static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
-{
-       icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
-}
-
-static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
-{
-       icl_dbuf_slices_update(dev_priv, 0);
-}
-
-void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
-                           u8 req_slices)
+void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+                            u8 req_slices)
 {
         int num_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
         struct i915_power_domains *power_domains = &dev_priv->power_domains;
@@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
         mutex_lock(&power_domains->lock);

         for (slice = DBUF_S1; slice < num_slices; slice++)
-               intel_dbuf_slice_set(dev_priv, slice,
-                                    req_slices & BIT(slice));
+               gen9_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));

         dev_priv->enabled_dbuf_slices_mask = req_slices;

         mutex_unlock(&power_domains->lock);
 }

-static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
+static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
 {
-       skl_ddb_get_hw_state(dev_priv);
+       dev_priv->enabled_dbuf_slices_mask =
+               intel_enabled_dbuf_slices_mask(dev_priv);
+
         /*
          * Just power up at least 1 slice, we will
          * figure out later which slices we have and what we need.
          */
-       icl_dbuf_slices_update(dev_priv, dev_priv->enabled_dbuf_slices_mask |
-                              BIT(DBUF_S1));
+       gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
+                               dev_priv->enabled_dbuf_slices_mask);
 }

-static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
+static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
 {
-       icl_dbuf_slices_update(dev_priv, 0);
+       gen9_dbuf_slices_update(dev_priv, 0);
 }

 static void icl_mbus_init(struct drm_i915_private *dev_priv)
@@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
         intel_cdclk_init_hw(dev_priv);

         /* 5. Enable DBUF. */
-       icl_dbuf_enable(dev_priv);
+       gen9_dbuf_enable(dev_priv);

         /* 6. Setup MBUS. */
         icl_mbus_init(dev_priv);
@@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct drm_i915_private *dev_priv)
         /* 1. Disable all display engine functions -> aready done */

         /* 2. Disable DBUF */
-       icl_dbuf_disable(dev_priv);
+       gen9_dbuf_disable(dev_priv);

         /* 3. Disable CD clock */
         intel_cdclk_uninit_hw(dev_priv);
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h b/drivers/gpu/drm/i915/display/intel_display_power.h
index 601e000ffd0d..1a275611241e 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.h
+++ b/drivers/gpu/drm/i915/display/intel_display_power.h
@@ -312,13 +312,13 @@ enum dbuf_slice {
         DBUF_S2,
 };

+void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+                            u8 req_slices);
+
 #define with_intel_display_power(i915, domain, wf) \
         for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
              intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)

-void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
-                           u8 req_slices);
-
 void chv_phy_powergate_lanes(struct intel_encoder *encoder,
                              bool override, unsigned int mask);
 bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
--
2.24.1



_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code Ville Syrjala
  2020-03-04 17:14   ` Lisovskiy, Stanislav
@ 2020-03-04 17:23   ` Lisovskiy, Stanislav
  2020-03-04 18:30     ` Ville Syrjälä
  2020-03-05  8:46   ` Lisovskiy, Stanislav
  2 siblings, 1 reply; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-04 17:23 UTC (permalink / raw)
  To: Ville Syrjala, intel-gfx




>-       /* If 2nd DBuf slice required, enable it here */
>        if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
>-               icl_dbuf_slices_update(dev_priv, slices_union);
>+               gen9_dbuf_slices_update(dev_priv, slices_union);
>}

> static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
>@@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
>        u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
>        u8 required_slices = state->enabled_dbuf_slices_mask;

>-       /* If 2nd DBuf slice is no more required disable it */
>         if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
>-               icl_dbuf_slices_update(dev_priv, required_slices);
>+               gen9_dbuf_slices_update(dev_priv, required_slices);


Doesn't make much sense. Just look - previously we were checking if INTEL_GEN is >= 11 (which _is_ ICL),
and now we still _do_ check if INTEL_GEN is >= 11, but... now call a function renamed to gen9.

I guess you either need to change the INTEL_GEN check to >= 9 to at least look somewhat consistent,
or leave it as is. Or at least rename the icl_ prefix to gen11_, otherwise it looks inconsistent, i.e.
you are now checking that gen is >= 11 and then, OK - now let's call gen 9! :)


Stan

________________________________
From: Ville Syrjala <ville.syrjala@linux.intel.com>
Sent: Tuesday, February 25, 2020 7:11:12 PM
To: intel-gfx@lists.freedesktop.org
Cc: Lisovskiy, Stanislav
Subject: [PATCH v2 07/20] drm/i915: Unify the low level dbuf code

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

The low level dbuf slice code is rather inconsistent with its
function naming and organization. Make it more consistent.

Also share the enable/disable functions between all platforms
since the same code works just fine for all of them.

Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
 .../drm/i915/display/intel_display_power.c    | 44 ++++++++-----------
 .../drm/i915/display/intel_display_power.h    |  6 +--
 3 files changed, 24 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 3031e64ee518..6952c398cc43 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -15296,9 +15296,8 @@ static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
         u8 required_slices = state->enabled_dbuf_slices_mask;
         u8 slices_union = hw_enabled_slices | required_slices;

-       /* If 2nd DBuf slice required, enable it here */
         if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
-               icl_dbuf_slices_update(dev_priv, slices_union);
+               gen9_dbuf_slices_update(dev_priv, slices_union);
 }

 static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
@@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
         u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
         u8 required_slices = state->enabled_dbuf_slices_mask;

-       /* If 2nd DBuf slice is no more required disable it */
         if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
-               icl_dbuf_slices_update(dev_priv, required_slices);
+               gen9_dbuf_slices_update(dev_priv, required_slices);
 }

 static void skl_commit_modeset_enables(struct intel_atomic_state *state)
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index e81e561e8ac0..ce3bbc4c7a27 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -4433,15 +4433,18 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
         mutex_unlock(&power_domains->lock);
 }

-static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
-                                enum dbuf_slice slice, bool enable)
+static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
+                               enum dbuf_slice slice, bool enable)
 {
         i915_reg_t reg = DBUF_CTL_S(slice);
         bool state;
         u32 val;

         val = intel_de_read(dev_priv, reg);
-       val = enable ? (val | DBUF_POWER_REQUEST) : (val & ~DBUF_POWER_REQUEST);
+       if (enable)
+               val |= DBUF_POWER_REQUEST;
+       else
+               val &= ~DBUF_POWER_REQUEST;
         intel_de_write(dev_priv, reg, val);
         intel_de_posting_read(dev_priv, reg);
         udelay(10);
@@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
                  slice, enable ? "enable" : "disable");
 }

-static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
-{
-       icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
-}
-
-static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
-{
-       icl_dbuf_slices_update(dev_priv, 0);
-}
-
-void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
-                           u8 req_slices)
+void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+                            u8 req_slices)
 {
         int num_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
         struct i915_power_domains *power_domains = &dev_priv->power_domains;
@@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
         mutex_lock(&power_domains->lock);

         for (slice = DBUF_S1; slice < num_slices; slice++)
-               intel_dbuf_slice_set(dev_priv, slice,
-                                    req_slices & BIT(slice));
+               gen9_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));

         dev_priv->enabled_dbuf_slices_mask = req_slices;

         mutex_unlock(&power_domains->lock);
 }

-static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
+static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
 {
-       skl_ddb_get_hw_state(dev_priv);
+       dev_priv->enabled_dbuf_slices_mask =
+               intel_enabled_dbuf_slices_mask(dev_priv);
+
         /*
          * Just power up at least 1 slice, we will
          * figure out later which slices we have and what we need.
          */
-       icl_dbuf_slices_update(dev_priv, dev_priv->enabled_dbuf_slices_mask |
-                              BIT(DBUF_S1));
+       gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
+                               dev_priv->enabled_dbuf_slices_mask);
 }

-static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
+static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
 {
-       icl_dbuf_slices_update(dev_priv, 0);
+       gen9_dbuf_slices_update(dev_priv, 0);
 }

 static void icl_mbus_init(struct drm_i915_private *dev_priv)
@@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
         intel_cdclk_init_hw(dev_priv);

         /* 5. Enable DBUF. */
-       icl_dbuf_enable(dev_priv);
+       gen9_dbuf_enable(dev_priv);

         /* 6. Setup MBUS. */
         icl_mbus_init(dev_priv);
@@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct drm_i915_private *dev_priv)
         /* 1. Disable all display engine functions -> aready done */

         /* 2. Disable DBUF */
-       icl_dbuf_disable(dev_priv);
+       gen9_dbuf_disable(dev_priv);

         /* 3. Disable CD clock */
         intel_cdclk_uninit_hw(dev_priv);
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h b/drivers/gpu/drm/i915/display/intel_display_power.h
index 601e000ffd0d..1a275611241e 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.h
+++ b/drivers/gpu/drm/i915/display/intel_display_power.h
@@ -312,13 +312,13 @@ enum dbuf_slice {
         DBUF_S2,
 };

+void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+                            u8 req_slices);
+
 #define with_intel_display_power(i915, domain, wf) \
         for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
              intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)

-void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
-                           u8 req_slices);
-
 void chv_phy_powergate_lanes(struct intel_encoder *encoder,
                              bool override, unsigned int mask);
 bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
--
2.24.1



_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply related	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs
  2020-03-04 16:29   ` Lisovskiy, Stanislav
@ 2020-03-04 18:26     ` Ville Syrjälä
  2020-03-05  9:53       ` Lisovskiy, Stanislav
  0 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjälä @ 2020-03-04 18:26 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Wed, Mar 04, 2020 at 04:29:47PM +0000, Lisovskiy, Stanislav wrote:
> On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > Polish some of the dbuf code to give more meaningful debug
> > messages and whatnot. Also we can switch over to the per-device
> > debugs/warns at the same time.
> > 
> > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> >  .../drm/i915/display/intel_display_power.c    | 40 +++++++++------
> > ----
> >  1 file changed, 19 insertions(+), 21 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > index 6e25a1317161..e81e561e8ac0 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > @@ -4433,11 +4433,12 @@ static void
> > intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> >  	mutex_unlock(&power_domains->lock);
> >  }
> >  
> > -static inline
> > -bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > -			  i915_reg_t reg, bool enable)
> > +static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > +				 enum dbuf_slice slice, bool enable)
> >  {
> > -	u32 val, status;
> > +	i915_reg_t reg = DBUF_CTL_S(slice);
> > +	bool state;
> > +	u32 val;
> >  
> >  	val = intel_de_read(dev_priv, reg);
> >  	val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> > ~DBUF_POWER_REQUEST);
> > @@ -4445,13 +4446,10 @@ bool intel_dbuf_slice_set(struct
> > drm_i915_private *dev_priv,
> >  	intel_de_posting_read(dev_priv, reg);
> >  	udelay(10);
> >  
> > -	status = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > -	if ((enable && !status) || (!enable && status)) {
> > -		drm_err(&dev_priv->drm, "DBus power %s timeout!\n",
> > -			enable ? "enable" : "disable");
> > -		return false;
> > -	}
> > -	return true;
> > +	state = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > +	drm_WARN(&dev_priv->drm, enable != state,
> > +		 "DBuf slice %d power %s timeout!\n",
> > +		 slice, enable ? "enable" : "disable");
> >  }
> >  
> >  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > @@ -4467,14 +4465,16 @@ static void gen9_dbuf_disable(struct
> > drm_i915_private *dev_priv)
> >  void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> >  			    u8 req_slices)
> >  {
> > -	int i;
> > -	int max_slices = INTEL_INFO(dev_priv)-
> > >num_supported_dbuf_slices;
> > +	int num_slices = INTEL_INFO(dev_priv)-
> > >num_supported_dbuf_slices;
> >  	struct i915_power_domains *power_domains = &dev_priv-
> > >power_domains;
> > +	enum dbuf_slice slice;
> >  
> > -	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
> > -		 "Invalid number of dbuf slices requested\n");
> > +	drm_WARN(&dev_priv->drm, req_slices & ~(BIT(num_slices) - 1),
> > +		 "Invalid set of dbuf slices (0x%x) requested (num dbuf
> > slices %d)\n",
> > +		 req_slices, num_slices);
> >  
> > -	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
> > +	drm_dbg_kms(&dev_priv->drm,
> > +		    "Updating dbuf slices to 0x%x\n", req_slices);
> >  
> >  	/*
> >  	 * Might be running this in parallel to
> > gen9_dc_off_power_well_enable
> > @@ -4485,11 +4485,9 @@ void icl_dbuf_slices_update(struct
> > drm_i915_private *dev_priv,
> >  	 */
> >  	mutex_lock(&power_domains->lock);
> >  
> > -	for (i = 0; i < max_slices; i++) {
> > -		intel_dbuf_slice_set(dev_priv,
> > -				     DBUF_CTL_S(i),
> > -				     (req_slices & BIT(i)) != 0);
> > -	}
> > +	for (slice = DBUF_S1; slice < num_slices; slice++)
> > +		intel_dbuf_slice_set(dev_priv, slice,
> > +				     req_slices & BIT(slice));
> 
> Would be cool to completely get rid of any magic numbers or
> definitions, 0 in a sense is more universal here than DBUF_S1.
> 
> If we are counting slices as numbers it seems logical that we 
> iterate [0..num_slices) range. If you want to name the first slice
> explicitly then it probably has to be something like iterator
> logic, i.e for (slice = FIRST_SLICE; slice != LAST_SLICE; slice++).
> 
> But trying to name it at the same time with comparing to total _amount_
> looks a bit confusing.

This is the standard pattern used all over the driver.
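For example, pipes are walked with the same "first enum value vs.
count" comparison elsewhere (rough sketch of the convention, not an
exact quote of driver code):

    enum pipe pipe;

    for (pipe = PIPE_A; pipe < INTEL_NUM_PIPES(dev_priv); pipe++)
            drm_dbg_kms(&dev_priv->drm, "pipe %c\n", pipe_name(pipe));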

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-03-04 17:23   ` Lisovskiy, Stanislav
@ 2020-03-04 18:30     ` Ville Syrjälä
  2020-03-05  8:28       ` Lisovskiy, Stanislav
  0 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjälä @ 2020-03-04 18:30 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Wed, Mar 04, 2020 at 05:23:05PM +0000, Lisovskiy, Stanislav wrote:
> 
> >-       /* If 2nd DBuf slice required, enable it here */
> >        if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
> >-               icl_dbuf_slices_update(dev_priv, slices_union);
> >+               gen9_dbuf_slices_update(dev_priv, slices_union);
> >}
> 
> > static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> >@@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> >        u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> >        u8 required_slices = state->enabled_dbuf_slices_mask;
> 
> >-       /* If 2nd DBuf slice is no more required disable it */
> >         if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
> >-               icl_dbuf_slices_update(dev_priv, required_slices);
> >+               gen9_dbuf_slices_update(dev_priv, required_slices);
> 
> 
> Doesn't make much sense. Just look - previously we were checking if INTEL_GEN is >= than 11(which _is_ ICL)
> 
> and now we still _do_ check if INTEL_GEN is >= 11, but... call now function renamed to gen9
> 
> 
> I guess you either need to change INTEL_GEN check to be >=9 to at least look somewhat consistent
> 
> or leave it as is. Or at least rename icl_ prefix to gen11_ otherwise that looks inconsistent, i.e
> 
> you are now checking that gen is >= 11 and then OK - now let's call gen 9! :)

The standard practice is to name things based on the oldest platform
that introduced the thing.

> 
> 
> Stan
> 
> ________________________________
> From: Ville Syrjala <ville.syrjala@linux.intel.com>
> Sent: Tuesday, February 25, 2020 7:11:12 PM
> To: intel-gfx@lists.freedesktop.org
> Cc: Lisovskiy, Stanislav
> Subject: [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
> 
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> The low level dbuf slice code is rather inconsistent with its
> function naming and organization. Make it more consistent.
> 
> Also share the enable/disable functions between all platforms
> since the same code works just fine for all of them.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
>  .../drm/i915/display/intel_display_power.c    | 44 ++++++++-----------
>  .../drm/i915/display/intel_display_power.h    |  6 +--
>  3 files changed, 24 insertions(+), 32 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 3031e64ee518..6952c398cc43 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -15296,9 +15296,8 @@ static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
>          u8 required_slices = state->enabled_dbuf_slices_mask;
>          u8 slices_union = hw_enabled_slices | required_slices;
> 
> -       /* If 2nd DBuf slice required, enable it here */
>          if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
> -               icl_dbuf_slices_update(dev_priv, slices_union);
> +               gen9_dbuf_slices_update(dev_priv, slices_union);
>  }
> 
>  static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> @@ -15307,9 +15306,8 @@ static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
>          u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
>          u8 required_slices = state->enabled_dbuf_slices_mask;
> 
> -       /* If 2nd DBuf slice is no more required disable it */
>          if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
> -               icl_dbuf_slices_update(dev_priv, required_slices);
> +               gen9_dbuf_slices_update(dev_priv, required_slices);
>  }
> 
>  static void skl_commit_modeset_enables(struct intel_atomic_state *state)
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
> index e81e561e8ac0..ce3bbc4c7a27 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> @@ -4433,15 +4433,18 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
>          mutex_unlock(&power_domains->lock);
>  }
> 
> -static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> -                                enum dbuf_slice slice, bool enable)
> +static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
> +                               enum dbuf_slice slice, bool enable)
>  {
>          i915_reg_t reg = DBUF_CTL_S(slice);
>          bool state;
>          u32 val;
> 
>          val = intel_de_read(dev_priv, reg);
> -       val = enable ? (val | DBUF_POWER_REQUEST) : (val & ~DBUF_POWER_REQUEST);
> +       if (enable)
> +               val |= DBUF_POWER_REQUEST;
> +       else
> +               val &= ~DBUF_POWER_REQUEST;
>          intel_de_write(dev_priv, reg, val);
>          intel_de_posting_read(dev_priv, reg);
>          udelay(10);
> @@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
>                   slice, enable ? "enable" : "disable");
>  }
> 
> -static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> -{
> -       icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
> -}
> -
> -static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> -{
> -       icl_dbuf_slices_update(dev_priv, 0);
> -}
> -
> -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> -                           u8 req_slices)
> +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> +                            u8 req_slices)
>  {
>          int num_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
>          struct i915_power_domains *power_domains = &dev_priv->power_domains;
> @@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
>          mutex_lock(&power_domains->lock);
> 
>          for (slice = DBUF_S1; slice < num_slices; slice++)
> -               intel_dbuf_slice_set(dev_priv, slice,
> -                                    req_slices & BIT(slice));
> +               gen9_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));
> 
>          dev_priv->enabled_dbuf_slices_mask = req_slices;
> 
>          mutex_unlock(&power_domains->lock);
>  }
> 
> -static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
> +static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
>  {
> -       skl_ddb_get_hw_state(dev_priv);
> +       dev_priv->enabled_dbuf_slices_mask =
> +               intel_enabled_dbuf_slices_mask(dev_priv);
> +
>          /*
>           * Just power up at least 1 slice, we will
>           * figure out later which slices we have and what we need.
>           */
> -       icl_dbuf_slices_update(dev_priv, dev_priv->enabled_dbuf_slices_mask |
> -                              BIT(DBUF_S1));
> +       gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> +                               dev_priv->enabled_dbuf_slices_mask);
>  }
> 
> -static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
> +static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
>  {
> -       icl_dbuf_slices_update(dev_priv, 0);
> +       gen9_dbuf_slices_update(dev_priv, 0);
>  }
> 
>  static void icl_mbus_init(struct drm_i915_private *dev_priv)
> @@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
>          intel_cdclk_init_hw(dev_priv);
> 
>          /* 5. Enable DBUF. */
> -       icl_dbuf_enable(dev_priv);
> +       gen9_dbuf_enable(dev_priv);
> 
>          /* 6. Setup MBUS. */
>          icl_mbus_init(dev_priv);
> @@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct drm_i915_private *dev_priv)
>          /* 1. Disable all display engine functions -> aready done */
> 
>          /* 2. Disable DBUF */
> -       icl_dbuf_disable(dev_priv);
> +       gen9_dbuf_disable(dev_priv);
> 
>          /* 3. Disable CD clock */
>          intel_cdclk_uninit_hw(dev_priv);
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h b/drivers/gpu/drm/i915/display/intel_display_power.h
> index 601e000ffd0d..1a275611241e 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.h
> @@ -312,13 +312,13 @@ enum dbuf_slice {
>          DBUF_S2,
>  };
> 
> +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> +                            u8 req_slices);
> +
>  #define with_intel_display_power(i915, domain, wf) \
>          for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
>               intel_display_power_put_async((i915), (domain), (wf)), (wf) = 0)
> 
> -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> -                           u8 req_slices);
> -
>  void chv_phy_powergate_lanes(struct intel_encoder *encoder,
>                               bool override, unsigned int mask);
>  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum dpio_phy phy,
> --
> 2.24.1
> 

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-03-04 18:30     ` Ville Syrjälä
@ 2020-03-05  8:28       ` Lisovskiy, Stanislav
  2020-03-05 13:37         ` Ville Syrjälä
  0 siblings, 1 reply; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-05  8:28 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Wed, 2020-03-04 at 20:30 +0200, Ville Syrjälä wrote:
> On Wed, Mar 04, 2020 at 05:23:05PM +0000, Lisovskiy, Stanislav wrote:
> > 
> > > -       /* If 2nd DBuf slice required, enable it here */
> > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > hw_enabled_slices)
> > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> > > }
> > > static void icl_dbuf_slice_post_update(struct intel_atomic_state
> > > *state)
> > > @@ -15307,9 +15306,8 @@ static void
> > > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> > >        u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> > >        u8 required_slices = state->enabled_dbuf_slices_mask;
> > > -       /* If 2nd DBuf slice is no more required disable it */
> > >         if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > > hw_enabled_slices)
> > > -               icl_dbuf_slices_update(dev_priv,
> > > required_slices);
> > > +               gen9_dbuf_slices_update(dev_priv,
> > > required_slices);
> > 
> > 
> > Doesn't make much sense. Just look - previously we were checking if
> > INTEL_GEN is >= than 11(which _is_ ICL)
> > 
> > and now we still _do_ check if INTEL_GEN is >= 11, but... call now
> > function renamed to gen9
> > 
> > 
> > I guess you either need to change INTEL_GEN check to be >=9 to at
> > least look somewhat consistent
> > 
> > or leave it as is. Or at least rename icl_ prefix to gen11_
> > otherwise that looks inconsistent, i.e
> > 
> > you are now checking that gen is >= 11 and then OK - now let's call
> > gen 9! :)
> 
> The standard practice is to name things based on the oldest platform
> that introduced the thing.

And that is fine - but then you need to change the check above from
INTEL_GEN >= 11 to INTEL_GEN >= 9, right - if gen9 is the oldest
platform.

-       /* If 2nd DBuf slice required, enable it here */
> > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > hw_enabled_slices)
> > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> > > }

I mean previously we were checking INTEL_GEN to be at least 11 and
called a function prefixed with icl_ - which was consistent and logical.

Now you changed this to gen9 (the oldest platform which introduced the
thing), however then the check above makes no sense - it should be
changed to INTEL_GEN >= 9 as well. Otherwise this
"gen9_dbuf_slices_update" function will never actually be called for
gen9.

Or do you want a function prefixed gen9_ to only ever be called for
gen 11? Why do we then prefix it that way?
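A sketch of the two options being contrasted here
(gen11_dbuf_slices_update is hypothetical, named only to show the
prefix):

    /* option 1: make the check match the gen9_ prefix */
    if (INTEL_GEN(dev_priv) >= 9 && slices_union != hw_enabled_slices)
            gen9_dbuf_slices_update(dev_priv, slices_union);

    /* option 2: keep the gen11 check, but name the helper to match */
    if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
            gen11_dbuf_slices_update(dev_priv, slices_union);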

Stan

> 
> > 
> > 
> > Stan
> > 
> > ________________________________
> > From: Ville Syrjala <ville.syrjala@linux.intel.com>
> > Sent: Tuesday, February 25, 2020 7:11:12 PM
> > To: intel-gfx@lists.freedesktop.org
> > Cc: Lisovskiy, Stanislav
> > Subject: [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
> > 
> > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > 
> > The low level dbuf slice code is rather inconsistent with its
> > function naming and organization. Make it more consistent.
> > 
> > Also share the enable/disable functions between all platforms
> > since the same code works just fine for all of them.
> > 
> > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
> >  .../drm/i915/display/intel_display_power.c    | 44 ++++++++-------
> > ----
> >  .../drm/i915/display/intel_display_power.h    |  6 +--
> >  3 files changed, 24 insertions(+), 32 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> > b/drivers/gpu/drm/i915/display/intel_display.c
> > index 3031e64ee518..6952c398cc43 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > @@ -15296,9 +15296,8 @@ static void
> > icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
> >          u8 required_slices = state->enabled_dbuf_slices_mask;
> >          u8 slices_union = hw_enabled_slices | required_slices;
> > 
> > -       /* If 2nd DBuf slice required, enable it here */
> >          if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > hw_enabled_slices)
> > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> >  }
> > 
> >  static void icl_dbuf_slice_post_update(struct intel_atomic_state
> > *state)
> > @@ -15307,9 +15306,8 @@ static void
> > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> >          u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> >          u8 required_slices = state->enabled_dbuf_slices_mask;
> > 
> > -       /* If 2nd DBuf slice is no more required disable it */
> >          if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > hw_enabled_slices)
> > -               icl_dbuf_slices_update(dev_priv, required_slices);
> > +               gen9_dbuf_slices_update(dev_priv, required_slices);
> >  }
> > 
> >  static void skl_commit_modeset_enables(struct intel_atomic_state
> > *state)
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > index e81e561e8ac0..ce3bbc4c7a27 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > @@ -4433,15 +4433,18 @@ static void
> > intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> >          mutex_unlock(&power_domains->lock);
> >  }
> > 
> > -static void intel_dbuf_slice_set(struct drm_i915_private
> > *dev_priv,
> > -                                enum dbuf_slice slice, bool
> > enable)
> > +static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > +                               enum dbuf_slice slice, bool enable)
> >  {
> >          i915_reg_t reg = DBUF_CTL_S(slice);
> >          bool state;
> >          u32 val;
> > 
> >          val = intel_de_read(dev_priv, reg);
> > -       val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> > ~DBUF_POWER_REQUEST);
> > +       if (enable)
> > +               val |= DBUF_POWER_REQUEST;
> > +       else
> > +               val &= ~DBUF_POWER_REQUEST;
> >          intel_de_write(dev_priv, reg, val);
> >          intel_de_posting_read(dev_priv, reg);
> >          udelay(10);
> > @@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct
> > drm_i915_private *dev_priv,
> >                   slice, enable ? "enable" : "disable");
> >  }
> > 
> > -static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > -{
> > -       icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
> > -}
> > -
> > -static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> > -{
> > -       icl_dbuf_slices_update(dev_priv, 0);
> > -}
> > -
> > -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > -                           u8 req_slices)
> > +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > +                            u8 req_slices)
> >  {
> >          int num_slices = INTEL_INFO(dev_priv)-
> > >num_supported_dbuf_slices;
> >          struct i915_power_domains *power_domains = &dev_priv-
> > >power_domains;
> > @@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct
> > drm_i915_private *dev_priv,
> >          mutex_lock(&power_domains->lock);
> > 
> >          for (slice = DBUF_S1; slice < num_slices; slice++)
> > -               intel_dbuf_slice_set(dev_priv, slice,
> > -                                    req_slices & BIT(slice));
> > +               gen9_dbuf_slice_set(dev_priv, slice, req_slices &
> > BIT(slice));
> > 
> >          dev_priv->enabled_dbuf_slices_mask = req_slices;
> > 
> >          mutex_unlock(&power_domains->lock);
> >  }
> > 
> > -static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
> > +static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> >  {
> > -       skl_ddb_get_hw_state(dev_priv);
> > +       dev_priv->enabled_dbuf_slices_mask =
> > +               intel_enabled_dbuf_slices_mask(dev_priv);
> > +
> >          /*
> >           * Just power up at least 1 slice, we will
> >           * figure out later which slices we have and what we need.
> >           */
> > -       icl_dbuf_slices_update(dev_priv, dev_priv-
> > >enabled_dbuf_slices_mask |
> > -                              BIT(DBUF_S1));
> > +       gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> > +                               dev_priv-
> > >enabled_dbuf_slices_mask);
> >  }
> > 
> > -static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
> > +static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> >  {
> > -       icl_dbuf_slices_update(dev_priv, 0);
> > +       gen9_dbuf_slices_update(dev_priv, 0);
> >  }
> > 
> >  static void icl_mbus_init(struct drm_i915_private *dev_priv)
> > @@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct
> > drm_i915_private *dev_priv,
> >          intel_cdclk_init_hw(dev_priv);
> > 
> >          /* 5. Enable DBUF. */
> > -       icl_dbuf_enable(dev_priv);
> > +       gen9_dbuf_enable(dev_priv);
> > 
> >          /* 6. Setup MBUS. */
> >          icl_mbus_init(dev_priv);
> > @@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct
> > drm_i915_private *dev_priv)
> >          /* 1. Disable all display engine functions -> aready done
> > */
> > 
> >          /* 2. Disable DBUF */
> > -       icl_dbuf_disable(dev_priv);
> > +       gen9_dbuf_disable(dev_priv);
> > 
> >          /* 3. Disable CD clock */
> >          intel_cdclk_uninit_hw(dev_priv);
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h
> > b/drivers/gpu/drm/i915/display/intel_display_power.h
> > index 601e000ffd0d..1a275611241e 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_power.h
> > +++ b/drivers/gpu/drm/i915/display/intel_display_power.h
> > @@ -312,13 +312,13 @@ enum dbuf_slice {
> >          DBUF_S2,
> >  };
> > 
> > +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > +                            u8 req_slices);
> > +
> >  #define with_intel_display_power(i915, domain, wf) \
> >          for ((wf) = intel_display_power_get((i915), (domain));
> > (wf); \
> >               intel_display_power_put_async((i915), (domain),
> > (wf)), (wf) = 0)
> > 
> > -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > -                           u8 req_slices);
> > -
> >  void chv_phy_powergate_lanes(struct intel_encoder *encoder,
> >                               bool override, unsigned int mask);
> >  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum
> > dpio_phy phy,
> > --
> > 2.24.1
> > 
> 
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code Ville Syrjala
  2020-03-04 17:14   ` Lisovskiy, Stanislav
  2020-03-04 17:23   ` Lisovskiy, Stanislav
@ 2020-03-05  8:46   ` Lisovskiy, Stanislav
  2 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-05  8:46 UTC (permalink / raw)
  To: ville.syrjala, intel-gfx

On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> The low level dbuf slice code is rather inconsistent with its
> function naming and organization. Make it more consistent.
> 
> Also share the enable/disable functions between all platforms
> since the same code works just fine for all of them.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
>  .../drm/i915/display/intel_display_power.c    | 44 ++++++++---------
> --
>  .../drm/i915/display/intel_display_power.h    |  6 +--
>  3 files changed, 24 insertions(+), 32 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> b/drivers/gpu/drm/i915/display/intel_display.c
> index 3031e64ee518..6952c398cc43 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c

> > On Wed, 2020-03-04 at 20:30 +0200, Ville Syrjälä wrote:
> > > On Wed, Mar 04, 2020 at 05:23:05PM +0000, Lisovskiy, Stanislav
> > wrote:
> > > > 
> > > > > -       /* If 2nd DBuf slice required, enable it here */
> > > > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > > > hw_enabled_slices)
> > > > > -               icl_dbuf_slices_update(dev_priv,
> > slices_union);
> > > > > +               gen9_dbuf_slices_update(dev_priv,
> > slices_union);
> > > > > }
> > > > > static void icl_dbuf_slice_post_update(struct
> > intel_atomic_state
> > > > > *state)
> > > > > @@ -15307,9 +15306,8 @@ static void
> > > > > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> > > > >        u8 hw_enabled_slices = dev_priv-
> > >enabled_dbuf_slices_mask;
> > > > >        u8 required_slices = state->enabled_dbuf_slices_mask;
> > > > > -       /* If 2nd DBuf slice is no more required disable it
> > */
> > > > >         if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > > > > hw_enabled_slices)
> > > > > -               icl_dbuf_slices_update(dev_priv,
> > > > > required_slices);
> > > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > > required_slices);
> > > > 
> > > > 
> > > > Doesn't make much sense. Just look - previously we were
> > checking if
> > > > INTEL_GEN is >= than 11(which _is_ ICL)
> > > > 
> > > > and now we still _do_ check if INTEL_GEN is >= 11, but... call
> > now
> > > > function renamed to gen9
> > > > 
> > > > 
> > > > I guess you either need to change INTEL_GEN check to be >=9 to
> > at
> > > > least look somewhat consistent
> > > > 
> > > > or leave it as is. Or at least rename icl_ prefix to gen11_
> > > > otherwise that looks inconsistent, i.e
> > > > 
> > > > you are now checking that gen is >= 11 and then OK - now let's
> > call
> > > > gen 9! :)
> > > 
> > > The standard practice is to name things based on the oldest
> > platform
> > > that introduced the thing.
> > 

And that is fine - but then you need to change the check above from
INTEL_GEN >= 11 to INTEL_GEN >= 9, right - if gen9 is the oldest
platform.

> > > -       /* If 2nd DBuf slice required, enable it here */
> > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > hw_enabled_slices)
> > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> > > }

I mean previously we were checking INTEL_GEN to be at least 11 and
called a function prefixed with icl_ - which was consistent and logical.

Now you changed this to gen9 (the oldest platform which introduced the
thing), but then the check above makes no sense - it should be
changed to INTEL_GEN >= 9 as well. Otherwise this
"gen9_dbuf_slices_update" function will never actually be called for
gen9.

Or do you want a function prefixed with gen9_ to only ever be called
for gen 11? Then why prefix it that way at all..

Stan

> 
>  static void skl_commit_modeset_enables(struct intel_atomic_state
> *state)
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> b/drivers/gpu/drm/i915/display/intel_display_power.c
> index e81e561e8ac0..ce3bbc4c7a27 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> @@ -4433,15 +4433,18 @@ static void
> intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
>  	mutex_unlock(&power_domains->lock);
>  }
>  
> -static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> -				 enum dbuf_slice slice, bool enable)
> +static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
> +				enum dbuf_slice slice, bool enable)
>  {
>  	i915_reg_t reg = DBUF_CTL_S(slice);
>  	bool state;
>  	u32 val;
>  
>  	val = intel_de_read(dev_priv, reg);
> -	val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> ~DBUF_POWER_REQUEST);
> +	if (enable)
> +		val |= DBUF_POWER_REQUEST;
> +	else
> +		val &= ~DBUF_POWER_REQUEST;
>  	intel_de_write(dev_priv, reg, val);
>  	intel_de_posting_read(dev_priv, reg);
>  	udelay(10);
> @@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct
> drm_i915_private *dev_priv,
>  		 slice, enable ? "enable" : "disable");
>  }
>  
> -static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> -{
> -	icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
> -}
> -
> -static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> -{
> -	icl_dbuf_slices_update(dev_priv, 0);
> -}
> -
> -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> -			    u8 req_slices)
> +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> +			     u8 req_slices)
>  {
>  	int num_slices = INTEL_INFO(dev_priv)-
> >num_supported_dbuf_slices;
>  	struct i915_power_domains *power_domains = &dev_priv-
> >power_domains;
> @@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct
> drm_i915_private *dev_priv,
>  	mutex_lock(&power_domains->lock);
>  
>  	for (slice = DBUF_S1; slice < num_slices; slice++)
> -		intel_dbuf_slice_set(dev_priv, slice,
> -				     req_slices & BIT(slice));
> +		gen9_dbuf_slice_set(dev_priv, slice, req_slices &
> BIT(slice));
>  
>  	dev_priv->enabled_dbuf_slices_mask = req_slices;
>  
>  	mutex_unlock(&power_domains->lock);
>  }
>  
> -static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
> +static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
>  {
> -	skl_ddb_get_hw_state(dev_priv);
> +	dev_priv->enabled_dbuf_slices_mask =
> +		intel_enabled_dbuf_slices_mask(dev_priv);
> +
>  	/*
>  	 * Just power up at least 1 slice, we will
>  	 * figure out later which slices we have and what we need.
>  	 */
> -	icl_dbuf_slices_update(dev_priv, dev_priv-
> >enabled_dbuf_slices_mask |
> -			       BIT(DBUF_S1));
> +	gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> +				dev_priv->enabled_dbuf_slices_mask);
>  }
>  
> -static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
> +static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
>  {
> -	icl_dbuf_slices_update(dev_priv, 0);
> +	gen9_dbuf_slices_update(dev_priv, 0);
>  }
>  
>  static void icl_mbus_init(struct drm_i915_private *dev_priv)
> @@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct
> drm_i915_private *dev_priv,
>  	intel_cdclk_init_hw(dev_priv);
>  
>  	/* 5. Enable DBUF. */
> -	icl_dbuf_enable(dev_priv);
> +	gen9_dbuf_enable(dev_priv);
>  
>  	/* 6. Setup MBUS. */
>  	icl_mbus_init(dev_priv);
> @@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct
> drm_i915_private *dev_priv)
>  	/* 1. Disable all display engine functions -> aready done */
>  
>  	/* 2. Disable DBUF */
> -	icl_dbuf_disable(dev_priv);
> +	gen9_dbuf_disable(dev_priv);
>  
>  	/* 3. Disable CD clock */
>  	intel_cdclk_uninit_hw(dev_priv);
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h
> b/drivers/gpu/drm/i915/display/intel_display_power.h
> index 601e000ffd0d..1a275611241e 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.h
> @@ -312,13 +312,13 @@ enum dbuf_slice {
>  	DBUF_S2,
>  };
>  
> +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> +			     u8 req_slices);
> +
>  #define with_intel_display_power(i915, domain, wf) \
>  	for ((wf) = intel_display_power_get((i915), (domain)); (wf); \
>  	     intel_display_power_put_async((i915), (domain), (wf)),
> (wf) = 0)
>  
> -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> -			    u8 req_slices);
> -
>  void chv_phy_powergate_lanes(struct intel_encoder *encoder,
>  			     bool override, unsigned int mask);
>  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum
> dpio_phy phy,
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs
  2020-03-04 18:26     ` Ville Syrjälä
@ 2020-03-05  9:53       ` Lisovskiy, Stanislav
  2020-03-05 13:46         ` Ville Syrjälä
  0 siblings, 1 reply; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-05  9:53 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Wed, 2020-03-04 at 20:26 +0200, Ville Syrjälä wrote:
> On Wed, Mar 04, 2020 at 04:29:47PM +0000, Lisovskiy, Stanislav wrote:
> > On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > 
> > > Polish some of the dbuf code to give more meaningful debug
> > > messages and whatnot. Also we can switch over to the per-device
> > > debugs/warns at the same time.
> > > 
> > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > ---
> > >  .../drm/i915/display/intel_display_power.c    | 40 +++++++++--
> > > ----
> > > ----
> > >  1 file changed, 19 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > index 6e25a1317161..e81e561e8ac0 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > @@ -4433,11 +4433,12 @@ static void
> > > intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> > >  	mutex_unlock(&power_domains->lock);
> > >  }
> > >  
> > > -static inline
> > > -bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > > -			  i915_reg_t reg, bool enable)
> > > +static void intel_dbuf_slice_set(struct drm_i915_private
> > > *dev_priv,
> > > +				 enum dbuf_slice slice, bool enable)
> > >  {
> > > -	u32 val, status;
> > > +	i915_reg_t reg = DBUF_CTL_S(slice);
> > > +	bool state;
> > > +	u32 val;
> > >  
> > >  	val = intel_de_read(dev_priv, reg);
> > >  	val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> > > ~DBUF_POWER_REQUEST);
> > > @@ -4445,13 +4446,10 @@ bool intel_dbuf_slice_set(struct
> > > drm_i915_private *dev_priv,
> > >  	intel_de_posting_read(dev_priv, reg);
> > >  	udelay(10);
> > >  
> > > -	status = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > > -	if ((enable && !status) || (!enable && status)) {
> > > -		drm_err(&dev_priv->drm, "DBus power %s timeout!\n",
> > > -			enable ? "enable" : "disable");
> > > -		return false;
> > > -	}
> > > -	return true;
> > > +	state = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > > +	drm_WARN(&dev_priv->drm, enable != state,
> > > +		 "DBuf slice %d power %s timeout!\n",
> > > +		 slice, enable ? "enable" : "disable");
> > >  }
> > >  
> > >  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > > @@ -4467,14 +4465,16 @@ static void gen9_dbuf_disable(struct
> > > drm_i915_private *dev_priv)
> > >  void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > >  			    u8 req_slices)
> > >  {
> > > -	int i;
> > > -	int max_slices = INTEL_INFO(dev_priv)-
> > > > num_supported_dbuf_slices;
> > > 
> > > +	int num_slices = INTEL_INFO(dev_priv)-
> > > > num_supported_dbuf_slices;
> > > 
> > >  	struct i915_power_domains *power_domains = &dev_priv-
> > > > power_domains;
> > > 
> > > +	enum dbuf_slice slice;
> > >  
> > > -	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
> > > -		 "Invalid number of dbuf slices requested\n");
> > > +	drm_WARN(&dev_priv->drm, req_slices & ~(BIT(num_slices) - 1),
> > > +		 "Invalid set of dbuf slices (0x%x) requested (num dbuf
> > > slices %d)\n",
> > > +		 req_slices, num_slices);
> > >  
> > > -	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
> > > +	drm_dbg_kms(&dev_priv->drm,
> > > +		    "Updating dbuf slices to 0x%x\n", req_slices);
> > >  
> > >  	/*
> > >  	 * Might be running this in parallel to
> > > gen9_dc_off_power_well_enable
> > > @@ -4485,11 +4485,9 @@ void icl_dbuf_slices_update(struct
> > > drm_i915_private *dev_priv,
> > >  	 */
> > >  	mutex_lock(&power_domains->lock);
> > >  
> > > -	for (i = 0; i < max_slices; i++) {
> > > -		intel_dbuf_slice_set(dev_priv,
> > > -				     DBUF_CTL_S(i),
> > > -				     (req_slices & BIT(i)) != 0);
> > > -	}
> > > +	for (slice = DBUF_S1; slice < num_slices; slice++)
> > > +		intel_dbuf_slice_set(dev_priv, slice,
> > > +				     req_slices & BIT(slice));
> > 
> > Would be cool to completely get rid of any magic numbers or
> > definitions, 0 in a sense is more universal here than DBUF_S1.
> > 
> > If we are counting slices as numbers it seems logical that we 
> > iterate [0..num_slices) range. If you want to name the first slice
> > explicitly then it probably has to be something like iterator
> > logic, i.e for (slice = FIRST_SLICE; slice != LAST_SLICE; slice++).
> > 
> > But trying to name it at the same time with comparing to total
> > _amount_
> > looks a bit confusing.
> 
> This is the standard pattern used all over the driver.

Well, you can enumerate objects using their qualitative or quantitative
characteristics. For instance, with the alphabet you would either
enumerate letters - the first is A, and you count until you reach Z -
or you take indexes and count from index 0 until it becomes 26.

What happens here is mixing those: take letter A and count until it
becomes 26, i.e. mixing the name of an object with its index, so
hopefully DBUF_S1 will always be defined as 0 :D
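
To put the same thing in code terms (just a sketch - FIRST_SLICE and
LAST_SLICE are made-up placeholder names, nothing in the tree):

	/* purely index-based: iterate over [0..num_slices) */
	for (i = 0; i < num_slices; i++)
		intel_dbuf_slice_set(dev_priv, i, req_slices & BIT(i));

	/* purely name-based: both ends of the iteration are named */
	for (slice = FIRST_SLICE; slice != LAST_SLICE; slice++)
		intel_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));

	/* the patch mixes the two: named start, count-based end */
	for (slice = DBUF_S1; slice < num_slices; slice++)
		intel_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));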

Anyways, 

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>


> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-03-05  8:28       ` Lisovskiy, Stanislav
@ 2020-03-05 13:37         ` Ville Syrjälä
  2020-03-05 14:01           ` Lisovskiy, Stanislav
  0 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjälä @ 2020-03-05 13:37 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Thu, Mar 05, 2020 at 08:28:30AM +0000, Lisovskiy, Stanislav wrote:
> On Wed, 2020-03-04 at 20:30 +0200, Ville Syrjälä wrote:
> > On Wed, Mar 04, 2020 at 05:23:05PM +0000, Lisovskiy, Stanislav wrote:
> > > 
> > > > -       /* If 2nd DBuf slice required, enable it here */
> > > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > > hw_enabled_slices)
> > > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> > > > }
> > > > static void icl_dbuf_slice_post_update(struct intel_atomic_state
> > > > *state)
> > > > @@ -15307,9 +15306,8 @@ static void
> > > > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> > > >        u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> > > >        u8 required_slices = state->enabled_dbuf_slices_mask;
> > > > -       /* If 2nd DBuf slice is no more required disable it */
> > > >         if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > > > hw_enabled_slices)
> > > > -               icl_dbuf_slices_update(dev_priv,
> > > > required_slices);
> > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > required_slices);
> > > 
> > > 
> > > Doesn't make much sense. Just look - previously we were checking if
> > > INTEL_GEN is >= than 11(which _is_ ICL)
> > > 
> > > and now we still _do_ check if INTEL_GEN is >= 11, but... call now
> > > function renamed to gen9
> > > 
> > > 
> > > I guess you either need to change INTEL_GEN check to be >=9 to at
> > > least look somewhat consistent
> > > 
> > > or leave it as is. Or at least rename icl_ prefix to gen11_
> > > otherwise that looks inconsistent, i.e
> > > 
> > > you are now checking that gen is >= 11 and then OK - now let's call
> > > gen 9! :)
> > 
> > The standard practice is to name things based on the oldest platform
> > that introduced the thing.
> 
> And that is fine - but then you need to change the check above from
> INTEL_GEN >= 11 to INTEL_GEN >= 9, right - if gen9 is the oldest
> platform.

No, the function works just fine for all skl+, but there's no real
requirement that it gets called on all of them. It's just part of the
standard gen9_dbuf set (which should really be skl_dbuf since this is
about display stuff).

Anyways, IIRC this check is going away in a later patch, so the
discussion is a bit moot.

> 
> -       /* If 2nd DBuf slice required, enable it here */
> > > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > > hw_enabled_slices)
> > > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> > > > }
> 
> I mean previously we were checking INTEL_GEN to be at least 11 and
> called function prefixed with icl_ - which was consistent and logical.
> 
> Now you changed this to gen9(oldest platform which introduced the
> thing), however then the check above makes no sense - it should be
> changed to INTEL_GEN >= 9 as well. Otherwise this
> "gen9_dbuf_slices_update" function will not be actually ever called for
> gen9.
> 
> Or do you want function prefixed as gen9_ to be only called for gen 11,
> why we then prefix it..
> 
> Stan
> 
> > 
> > > 
> > > 
> > > Stan
> > > 
> > > ________________________________
> > > From: Ville Syrjala <ville.syrjala@linux.intel.com>
> > > Sent: Tuesday, February 25, 2020 7:11:12 PM
> > > To: intel-gfx@lists.freedesktop.org
> > > Cc: Lisovskiy, Stanislav
> > > Subject: [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
> > > 
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > 
> > > The low level dbuf slice code is rather inconsistent with its
> > > function naming and organization. Make it more consistent.
> > > 
> > > Also share the enable/disable functions between all platforms
> > > since the same code works just fine for all of them.
> > > 
> > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
> > >  .../drm/i915/display/intel_display_power.c    | 44 ++++++++-------
> > > ----
> > >  .../drm/i915/display/intel_display_power.h    |  6 +--
> > >  3 files changed, 24 insertions(+), 32 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> > > b/drivers/gpu/drm/i915/display/intel_display.c
> > > index 3031e64ee518..6952c398cc43 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > > @@ -15296,9 +15296,8 @@ static void
> > > icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
> > >          u8 required_slices = state->enabled_dbuf_slices_mask;
> > >          u8 slices_union = hw_enabled_slices | required_slices;
> > > 
> > > -       /* If 2nd DBuf slice required, enable it here */
> > >          if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > hw_enabled_slices)
> > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > +               gen9_dbuf_slices_update(dev_priv, slices_union);
> > >  }
> > > 
> > >  static void icl_dbuf_slice_post_update(struct intel_atomic_state
> > > *state)
> > > @@ -15307,9 +15306,8 @@ static void
> > > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> > >          u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> > >          u8 required_slices = state->enabled_dbuf_slices_mask;
> > > 
> > > -       /* If 2nd DBuf slice is no more required disable it */
> > >          if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > > hw_enabled_slices)
> > > -               icl_dbuf_slices_update(dev_priv, required_slices);
> > > +               gen9_dbuf_slices_update(dev_priv, required_slices);
> > >  }
> > > 
> > >  static void skl_commit_modeset_enables(struct intel_atomic_state
> > > *state)
> > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > index e81e561e8ac0..ce3bbc4c7a27 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > @@ -4433,15 +4433,18 @@ static void
> > > intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> > >          mutex_unlock(&power_domains->lock);
> > >  }
> > > 
> > > -static void intel_dbuf_slice_set(struct drm_i915_private
> > > *dev_priv,
> > > -                                enum dbuf_slice slice, bool
> > > enable)
> > > +static void gen9_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > > +                               enum dbuf_slice slice, bool enable)
> > >  {
> > >          i915_reg_t reg = DBUF_CTL_S(slice);
> > >          bool state;
> > >          u32 val;
> > > 
> > >          val = intel_de_read(dev_priv, reg);
> > > -       val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> > > ~DBUF_POWER_REQUEST);
> > > +       if (enable)
> > > +               val |= DBUF_POWER_REQUEST;
> > > +       else
> > > +               val &= ~DBUF_POWER_REQUEST;
> > >          intel_de_write(dev_priv, reg, val);
> > >          intel_de_posting_read(dev_priv, reg);
> > >          udelay(10);
> > > @@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct
> > > drm_i915_private *dev_priv,
> > >                   slice, enable ? "enable" : "disable");
> > >  }
> > > 
> > > -static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > > -{
> > > -       icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
> > > -}
> > > -
> > > -static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> > > -{
> > > -       icl_dbuf_slices_update(dev_priv, 0);
> > > -}
> > > -
> > > -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > -                           u8 req_slices)
> > > +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > +                            u8 req_slices)
> > >  {
> > >          int num_slices = INTEL_INFO(dev_priv)-
> > > >num_supported_dbuf_slices;
> > >          struct i915_power_domains *power_domains = &dev_priv-
> > > >power_domains;
> > > @@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct
> > > drm_i915_private *dev_priv,
> > >          mutex_lock(&power_domains->lock);
> > > 
> > >          for (slice = DBUF_S1; slice < num_slices; slice++)
> > > -               intel_dbuf_slice_set(dev_priv, slice,
> > > -                                    req_slices & BIT(slice));
> > > +               gen9_dbuf_slice_set(dev_priv, slice, req_slices &
> > > BIT(slice));
> > > 
> > >          dev_priv->enabled_dbuf_slices_mask = req_slices;
> > > 
> > >          mutex_unlock(&power_domains->lock);
> > >  }
> > > 
> > > -static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
> > > +static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > >  {
> > > -       skl_ddb_get_hw_state(dev_priv);
> > > +       dev_priv->enabled_dbuf_slices_mask =
> > > +               intel_enabled_dbuf_slices_mask(dev_priv);
> > > +
> > >          /*
> > >           * Just power up at least 1 slice, we will
> > >           * figure out later which slices we have and what we need.
> > >           */
> > > -       icl_dbuf_slices_update(dev_priv, dev_priv-
> > > >enabled_dbuf_slices_mask |
> > > -                              BIT(DBUF_S1));
> > > +       gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> > > +                               dev_priv-
> > > >enabled_dbuf_slices_mask);
> > >  }
> > > 
> > > -static void icl_dbuf_disable(struct drm_i915_private *dev_priv)
> > > +static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> > >  {
> > > -       icl_dbuf_slices_update(dev_priv, 0);
> > > +       gen9_dbuf_slices_update(dev_priv, 0);
> > >  }
> > > 
> > >  static void icl_mbus_init(struct drm_i915_private *dev_priv)
> > > @@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct
> > > drm_i915_private *dev_priv,
> > >          intel_cdclk_init_hw(dev_priv);
> > > 
> > >          /* 5. Enable DBUF. */
> > > -       icl_dbuf_enable(dev_priv);
> > > +       gen9_dbuf_enable(dev_priv);
> > > 
> > >          /* 6. Setup MBUS. */
> > >          icl_mbus_init(dev_priv);
> > > @@ -5090,7 +5084,7 @@ static void icl_display_core_uninit(struct
> > > drm_i915_private *dev_priv)
> > >          /* 1. Disable all display engine functions -> aready done
> > > */
> > > 
> > >          /* 2. Disable DBUF */
> > > -       icl_dbuf_disable(dev_priv);
> > > +       gen9_dbuf_disable(dev_priv);
> > > 
> > >          /* 3. Disable CD clock */
> > >          intel_cdclk_uninit_hw(dev_priv);
> > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h
> > > b/drivers/gpu/drm/i915/display/intel_display_power.h
> > > index 601e000ffd0d..1a275611241e 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_display_power.h
> > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.h
> > > @@ -312,13 +312,13 @@ enum dbuf_slice {
> > >          DBUF_S2,
> > >  };
> > > 
> > > +void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > +                            u8 req_slices);
> > > +
> > >  #define with_intel_display_power(i915, domain, wf) \
> > >          for ((wf) = intel_display_power_get((i915), (domain));
> > > (wf); \
> > >               intel_display_power_put_async((i915), (domain),
> > > (wf)), (wf) = 0)
> > > 
> > > -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > -                           u8 req_slices);
> > > -
> > >  void chv_phy_powergate_lanes(struct intel_encoder *encoder,
> > >                               bool override, unsigned int mask);
> > >  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv, enum
> > > dpio_phy phy,
> > > --
> > > 2.24.1
> > > 
> > 
> > 

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs
  2020-03-05  9:53       ` Lisovskiy, Stanislav
@ 2020-03-05 13:46         ` Ville Syrjälä
  2020-03-05 14:56           ` Lisovskiy, Stanislav
  0 siblings, 1 reply; 55+ messages in thread
From: Ville Syrjälä @ 2020-03-05 13:46 UTC (permalink / raw)
  To: Lisovskiy, Stanislav; +Cc: intel-gfx

On Thu, Mar 05, 2020 at 09:53:34AM +0000, Lisovskiy, Stanislav wrote:
> On Wed, 2020-03-04 at 20:26 +0200, Ville Syrjälä wrote:
> > On Wed, Mar 04, 2020 at 04:29:47PM +0000, Lisovskiy, Stanislav wrote:
> > > On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > 
> > > > Polish some of the dbuf code to give more meaningful debug
> > > > messages and whatnot. Also we can switch over to the per-device
> > > > debugs/warns at the same time.
> > > > 
> > > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > ---
> > > >  .../drm/i915/display/intel_display_power.c    | 40 +++++++++--
> > > > ----
> > > > ----
> > > >  1 file changed, 19 insertions(+), 21 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > index 6e25a1317161..e81e561e8ac0 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > @@ -4433,11 +4433,12 @@ static void
> > > > intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> > > >  	mutex_unlock(&power_domains->lock);
> > > >  }
> > > >  
> > > > -static inline
> > > > -bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > > > -			  i915_reg_t reg, bool enable)
> > > > +static void intel_dbuf_slice_set(struct drm_i915_private
> > > > *dev_priv,
> > > > +				 enum dbuf_slice slice, bool enable)
> > > >  {
> > > > -	u32 val, status;
> > > > +	i915_reg_t reg = DBUF_CTL_S(slice);
> > > > +	bool state;
> > > > +	u32 val;
> > > >  
> > > >  	val = intel_de_read(dev_priv, reg);
> > > >  	val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> > > > ~DBUF_POWER_REQUEST);
> > > > @@ -4445,13 +4446,10 @@ bool intel_dbuf_slice_set(struct
> > > > drm_i915_private *dev_priv,
> > > >  	intel_de_posting_read(dev_priv, reg);
> > > >  	udelay(10);
> > > >  
> > > > -	status = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > > > -	if ((enable && !status) || (!enable && status)) {
> > > > -		drm_err(&dev_priv->drm, "DBus power %s timeout!\n",
> > > > -			enable ? "enable" : "disable");
> > > > -		return false;
> > > > -	}
> > > > -	return true;
> > > > +	state = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > > > +	drm_WARN(&dev_priv->drm, enable != state,
> > > > +		 "DBuf slice %d power %s timeout!\n",
> > > > +		 slice, enable ? "enable" : "disable");
> > > >  }
> > > >  
> > > >  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > > > @@ -4467,14 +4465,16 @@ static void gen9_dbuf_disable(struct
> > > > drm_i915_private *dev_priv)
> > > >  void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > >  			    u8 req_slices)
> > > >  {
> > > > -	int i;
> > > > -	int max_slices = INTEL_INFO(dev_priv)-
> > > > > num_supported_dbuf_slices;
> > > > 
> > > > +	int num_slices = INTEL_INFO(dev_priv)-
> > > > > num_supported_dbuf_slices;
> > > > 
> > > >  	struct i915_power_domains *power_domains = &dev_priv-
> > > > > power_domains;
> > > > 
> > > > +	enum dbuf_slice slice;
> > > >  
> > > > -	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
> > > > -		 "Invalid number of dbuf slices requested\n");
> > > > +	drm_WARN(&dev_priv->drm, req_slices & ~(BIT(num_slices) - 1),
> > > > +		 "Invalid set of dbuf slices (0x%x) requested (num dbuf
> > > > slices %d)\n",
> > > > +		 req_slices, num_slices);
> > > >  
> > > > -	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
> > > > +	drm_dbg_kms(&dev_priv->drm,
> > > > +		    "Updating dbuf slices to 0x%x\n", req_slices);
> > > >  
> > > >  	/*
> > > >  	 * Might be running this in parallel to
> > > > gen9_dc_off_power_well_enable
> > > > @@ -4485,11 +4485,9 @@ void icl_dbuf_slices_update(struct
> > > > drm_i915_private *dev_priv,
> > > >  	 */
> > > >  	mutex_lock(&power_domains->lock);
> > > >  
> > > > -	for (i = 0; i < max_slices; i++) {
> > > > -		intel_dbuf_slice_set(dev_priv,
> > > > -				     DBUF_CTL_S(i),
> > > > -				     (req_slices & BIT(i)) != 0);
> > > > -	}
> > > > +	for (slice = DBUF_S1; slice < num_slices; slice++)
> > > > +		intel_dbuf_slice_set(dev_priv, slice,
> > > > +				     req_slices & BIT(slice));
> > > 
> > > Would be cool to completely get rid of any magic numbers or
> > > definitions, 0 in a sense is more universal here than DBUF_S1.
> > > 
> > > If we are counting slices as numbers it seems logical that we 
> > > iterate [0..num_slices) range. If you want to name the first slice
> > > explicitly then it probably has to be something like iterator
> > > logic, i.e for (slice = FIRST_SLICE; slice != LAST_SLICE; slice++).
> > > 
> > > But trying to name it at the same time with comparing to total
> > > _amount_
> > > looks a bit confusing.
> > 
> > This is the standard pattern used all over the driver.
> 
> Well, you can enumerate objects using their qualitative or quantative
> characteristics, for instance if you take alphabet you would be
> either enumerating letters like first is A and count until it becomes
> Z, or
> you take indexes and say start from index 0 and count until it becomes
> 26.
> 
> What happens here is mixing those: i.e take letter A and count until it
> becomes 26, i.e mixing a name of an object with it's index, so
> hopefully DBUF_S1 will always be defined as 0 :D

The old code assumed DBUF_S1==0 for the purposes of passing it to
DBUF_CTL(), and for the purposes of the BIT(int).

The new code assumes DBUF_S1 == 0 for the purposes of terminating
the iteration.

A swamp on one side, a quagmire on the other - one is no better than the other.

We may actually need to change the device info to contain a dbuf slice
mask instead of just the number of slices. That's in case some hw has
some slices fused off (not sure that's actually a thing but maybe,
need to check the spec at some point for this). At that point we
probably want to stash the whole thing into a for_each_dbuf_slice().
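
Something along these lines, perhaps (untested sketch; the
dbuf_slice_mask member and I915_MAX_DBUF_SLICES are made up here, we
don't have them yet):

	/* hypothetical: device info grows a mask of present dbuf slices */
	#define for_each_dbuf_slice(dev_priv, slice) \
		for ((slice) = DBUF_S1; (slice) < I915_MAX_DBUF_SLICES; (slice)++) \
			for_each_if(INTEL_INFO(dev_priv)->dbuf_slice_mask & BIT(slice))

and then the loop in icl_dbuf_slices_update() would just become

	for_each_dbuf_slice(dev_priv, slice)
		intel_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));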

-- 
Ville Syrjälä
Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code
  2020-03-05 13:37         ` Ville Syrjälä
@ 2020-03-05 14:01           ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-05 14:01 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Thu, 2020-03-05 at 15:37 +0200, Ville Syrjälä wrote:
> On Thu, Mar 05, 2020 at 08:28:30AM +0000, Lisovskiy, Stanislav wrote:
> > On Wed, 2020-03-04 at 20:30 +0200, Ville Syrjälä wrote:
> > > On Wed, Mar 04, 2020 at 05:23:05PM +0000, Lisovskiy, Stanislav
> > > wrote:
> > > > 
> > > > > -       /* If 2nd DBuf slice required, enable it here */
> > > > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > > > hw_enabled_slices)
> > > > > -               icl_dbuf_slices_update(dev_priv,
> > > > > slices_union);
> > > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > > slices_union);
> > > > > }
> > > > > static void icl_dbuf_slice_post_update(struct
> > > > > intel_atomic_state
> > > > > *state)
> > > > > @@ -15307,9 +15306,8 @@ static void
> > > > > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> > > > >        u8 hw_enabled_slices = dev_priv-
> > > > > >enabled_dbuf_slices_mask;
> > > > >        u8 required_slices = state->enabled_dbuf_slices_mask;
> > > > > -       /* If 2nd DBuf slice is no more required disable it
> > > > > */
> > > > >         if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > > > > hw_enabled_slices)
> > > > > -               icl_dbuf_slices_update(dev_priv,
> > > > > required_slices);
> > > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > > required_slices);
> > > > 
> > > > 
> > > > Doesn't make much sense. Just look - previously we were
> > > > checking if
> > > > INTEL_GEN is >= than 11(which _is_ ICL)
> > > > 
> > > > and now we still _do_ check if INTEL_GEN is >= 11, but... call
> > > > now
> > > > function renamed to gen9
> > > > 
> > > > 
> > > > I guess you either need to change INTEL_GEN check to be >=9 to
> > > > at
> > > > least look somewhat consistent
> > > > 
> > > > or leave it as is. Or at least rename icl_ prefix to gen11_
> > > > otherwise that looks inconsistent, i.e
> > > > 
> > > > you are now checking that gen is >= 11 and then OK - now let's
> > > > call
> > > > gen 9! :)
> > > 
> > > The standard practice is to name things based on the oldest
> > > platform
> > > that introduced the thing.
> > 
> > And that is fine - but then you need to change the check above
> > from INTEL_GEN >= 11 to INTEL_GEN >= 9, right - if gen9 is the
> > oldest platform.
> 
> No, the function works just fine for all skl+ but no real requirement
> that it gets called on all of them.  It's just part of the standard
> set
> of gen9_dbuf (which should really be skl_dbuf since this is about
> display stuff).
> 
> Anyways, IIRC this check is going away in a later patch, so the
> discussion is a bit moot.

Ahh, I simply didn't get to that patch yet - no questions then :)

I would just remove it here right away though, but whatever.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

> 
> > 
> > -       /* If 2nd DBuf slice required, enable it here */
> > > > >        if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > > > hw_enabled_slices)
> > > > > -               icl_dbuf_slices_update(dev_priv,
> > > > > slices_union);
> > > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > > slices_union);
> > > > > }
> > 
> > I mean previously we were checking INTEL_GEN to be at least 11 and
> > called function prefixed with icl_ - which was consistent and
> > logical.
> > 
> > Now you changed this to gen9(oldest platform which introduced the
> > thing), however then the check above makes no sense - it should be
> > changed to INTEL_GEN >= 9 as well. Otherwise this
> > "gen9_dbuf_slices_update" function will not be actually ever called
> > for
> > gen9.
> > 
> > Or do you want function prefixed as gen9_ to be only called for gen
> > 11,
> > why we then prefix it..
> > 
> > Stan
> > 
> > > 
> > > > 
> > > > 
> > > > Stan
> > > > 
> > > > ________________________________
> > > > From: Ville Syrjala <ville.syrjala@linux.intel.com>
> > > > Sent: Tuesday, February 25, 2020 7:11:12 PM
> > > > To: intel-gfx@lists.freedesktop.org
> > > > Cc: Lisovskiy, Stanislav
> > > > Subject: [PATCH v2 07/20] drm/i915: Unify the low level dbuf
> > > > code
> > > > 
> > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > 
> > > > The low level dbuf slice code is rather inconsistent with its
> > > > function naming and organization. Make it more consistent.
> > > > 
> > > > Also share the enable/disable functions between all platforms
> > > > since the same code works just fine for all of them.
> > > > 
> > > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/display/intel_display.c  |  6 +--
> > > >  .../drm/i915/display/intel_display_power.c    | 44 ++++++++---
> > > > ----
> > > > ----
> > > >  .../drm/i915/display/intel_display_power.h    |  6 +--
> > > >  3 files changed, 24 insertions(+), 32 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_display.c
> > > > b/drivers/gpu/drm/i915/display/intel_display.c
> > > > index 3031e64ee518..6952c398cc43 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_display.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_display.c
> > > > @@ -15296,9 +15296,8 @@ static void
> > > > icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
> > > >          u8 required_slices = state->enabled_dbuf_slices_mask;
> > > >          u8 slices_union = hw_enabled_slices | required_slices;
> > > > 
> > > > -       /* If 2nd DBuf slice required, enable it here */
> > > >          if (INTEL_GEN(dev_priv) >= 11 && slices_union !=
> > > > hw_enabled_slices)
> > > > -               icl_dbuf_slices_update(dev_priv, slices_union);
> > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > slices_union);
> > > >  }
> > > > 
> > > >  static void icl_dbuf_slice_post_update(struct
> > > > intel_atomic_state
> > > > *state)
> > > > @@ -15307,9 +15306,8 @@ static void
> > > > icl_dbuf_slice_post_update(struct intel_atomic_state *state)
> > > >          u8 hw_enabled_slices = dev_priv-
> > > > >enabled_dbuf_slices_mask;
> > > >          u8 required_slices = state->enabled_dbuf_slices_mask;
> > > > 
> > > > -       /* If 2nd DBuf slice is no more required disable it */
> > > >          if (INTEL_GEN(dev_priv) >= 11 && required_slices !=
> > > > hw_enabled_slices)
> > > > -               icl_dbuf_slices_update(dev_priv,
> > > > required_slices);
> > > > +               gen9_dbuf_slices_update(dev_priv,
> > > > required_slices);
> > > >  }
> > > > 
> > > >  static void skl_commit_modeset_enables(struct
> > > > intel_atomic_state
> > > > *state)
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > index e81e561e8ac0..ce3bbc4c7a27 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > @@ -4433,15 +4433,18 @@ static void
> > > > intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> > > >          mutex_unlock(&power_domains->lock);
> > > >  }
> > > > 
> > > > -static void intel_dbuf_slice_set(struct drm_i915_private
> > > > *dev_priv,
> > > > -                                enum dbuf_slice slice, bool
> > > > enable)
> > > > +static void gen9_dbuf_slice_set(struct drm_i915_private
> > > > *dev_priv,
> > > > +                               enum dbuf_slice slice, bool
> > > > enable)
> > > >  {
> > > >          i915_reg_t reg = DBUF_CTL_S(slice);
> > > >          bool state;
> > > >          u32 val;
> > > > 
> > > >          val = intel_de_read(dev_priv, reg);
> > > > -       val = enable ? (val | DBUF_POWER_REQUEST) : (val &
> > > > ~DBUF_POWER_REQUEST);
> > > > +       if (enable)
> > > > +               val |= DBUF_POWER_REQUEST;
> > > > +       else
> > > > +               val &= ~DBUF_POWER_REQUEST;
> > > >          intel_de_write(dev_priv, reg, val);
> > > >          intel_de_posting_read(dev_priv, reg);
> > > >          udelay(10);
> > > > @@ -4452,18 +4455,8 @@ static void intel_dbuf_slice_set(struct
> > > > drm_i915_private *dev_priv,
> > > >                   slice, enable ? "enable" : "disable");
> > > >  }
> > > > 
> > > > -static void gen9_dbuf_enable(struct drm_i915_private
> > > > *dev_priv)
> > > > -{
> > > > -       icl_dbuf_slices_update(dev_priv, BIT(DBUF_S1));
> > > > -}
> > > > -
> > > > -static void gen9_dbuf_disable(struct drm_i915_private
> > > > *dev_priv)
> > > > -{
> > > > -       icl_dbuf_slices_update(dev_priv, 0);
> > > > -}
> > > > -
> > > > -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > > -                           u8 req_slices)
> > > > +void gen9_dbuf_slices_update(struct drm_i915_private
> > > > *dev_priv,
> > > > +                            u8 req_slices)
> > > >  {
> > > >          int num_slices = INTEL_INFO(dev_priv)-
> > > > > num_supported_dbuf_slices;
> > > > 
> > > >          struct i915_power_domains *power_domains = &dev_priv-
> > > > > power_domains;
> > > > 
> > > > @@ -4486,28 +4479,29 @@ void icl_dbuf_slices_update(struct
> > > > drm_i915_private *dev_priv,
> > > >          mutex_lock(&power_domains->lock);
> > > > 
> > > >          for (slice = DBUF_S1; slice < num_slices; slice++)
> > > > -               intel_dbuf_slice_set(dev_priv, slice,
> > > > -                                    req_slices & BIT(slice));
> > > > +               gen9_dbuf_slice_set(dev_priv, slice, req_slices
> > > > &
> > > > BIT(slice));
> > > > 
> > > >          dev_priv->enabled_dbuf_slices_mask = req_slices;
> > > > 
> > > >          mutex_unlock(&power_domains->lock);
> > > >  }
> > > > 
> > > > -static void icl_dbuf_enable(struct drm_i915_private *dev_priv)
> > > > +static void gen9_dbuf_enable(struct drm_i915_private
> > > > *dev_priv)
> > > >  {
> > > > -       skl_ddb_get_hw_state(dev_priv);
> > > > +       dev_priv->enabled_dbuf_slices_mask =
> > > > +               intel_enabled_dbuf_slices_mask(dev_priv);
> > > > +
> > > >          /*
> > > >           * Just power up at least 1 slice, we will
> > > >           * figure out later which slices we have and what we
> > > > need.
> > > >           */
> > > > -       icl_dbuf_slices_update(dev_priv, dev_priv-
> > > > > enabled_dbuf_slices_mask |
> > > > 
> > > > -                              BIT(DBUF_S1));
> > > > +       gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> > > > +                               dev_priv-
> > > > > enabled_dbuf_slices_mask);
> > > > 
> > > >  }
> > > > 
> > > > -static void icl_dbuf_disable(struct drm_i915_private
> > > > *dev_priv)
> > > > +static void gen9_dbuf_disable(struct drm_i915_private
> > > > *dev_priv)
> > > >  {
> > > > -       icl_dbuf_slices_update(dev_priv, 0);
> > > > +       gen9_dbuf_slices_update(dev_priv, 0);
> > > >  }
> > > > 
> > > >  static void icl_mbus_init(struct drm_i915_private *dev_priv)
> > > > @@ -5067,7 +5061,7 @@ static void icl_display_core_init(struct
> > > > drm_i915_private *dev_priv,
> > > >          intel_cdclk_init_hw(dev_priv);
> > > > 
> > > >          /* 5. Enable DBUF. */
> > > > -       icl_dbuf_enable(dev_priv);
> > > > +       gen9_dbuf_enable(dev_priv);
> > > > 
> > > >          /* 6. Setup MBUS. */
> > > >          icl_mbus_init(dev_priv);
> > > > @@ -5090,7 +5084,7 @@ static void
> > > > icl_display_core_uninit(struct
> > > > drm_i915_private *dev_priv)
> > > >          /* 1. Disable all display engine functions -> aready
> > > > done
> > > > */
> > > > 
> > > >          /* 2. Disable DBUF */
> > > > -       icl_dbuf_disable(dev_priv);
> > > > +       gen9_dbuf_disable(dev_priv);
> > > > 
> > > >          /* 3. Disable CD clock */
> > > >          intel_cdclk_uninit_hw(dev_priv);
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_display_power.h
> > > > b/drivers/gpu/drm/i915/display/intel_display_power.h
> > > > index 601e000ffd0d..1a275611241e 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_display_power.h
> > > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.h
> > > > @@ -312,13 +312,13 @@ enum dbuf_slice {
> > > >          DBUF_S2,
> > > >  };
> > > > 
> > > > +void gen9_dbuf_slices_update(struct drm_i915_private
> > > > *dev_priv,
> > > > +                            u8 req_slices);
> > > > +
> > > >  #define with_intel_display_power(i915, domain, wf) \
> > > >          for ((wf) = intel_display_power_get((i915), (domain));
> > > > (wf); \
> > > >               intel_display_power_put_async((i915), (domain),
> > > > (wf)), (wf) = 0)
> > > > 
> > > > -void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > > -                           u8 req_slices);
> > > > -
> > > >  void chv_phy_powergate_lanes(struct intel_encoder *encoder,
> > > >                               bool override, unsigned int
> > > > mask);
> > > >  bool chv_phy_powergate_ch(struct drm_i915_private *dev_priv,
> > > > enum
> > > > dpio_phy phy,
> > > > --
> > > > 2.24.1
> > > > 
> > > 
> > > 
> 
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs
  2020-03-05 13:46         ` Ville Syrjälä
@ 2020-03-05 14:56           ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-03-05 14:56 UTC (permalink / raw)
  To: ville.syrjala; +Cc: intel-gfx

On Thu, 2020-03-05 at 15:46 +0200, Ville Syrjälä wrote:
> On Thu, Mar 05, 2020 at 09:53:34AM +0000, Lisovskiy, Stanislav wrote:
> > On Wed, 2020-03-04 at 20:26 +0200, Ville Syrjälä wrote:
> > > On Wed, Mar 04, 2020 at 04:29:47PM +0000, Lisovskiy, Stanislav
> > > wrote:
> > > > On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > > > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > 
> > > > > Polish some of the dbuf code to give more meaningful debug
> > > > > messages and whatnot. Also we can switch over to the per-
> > > > > device
> > > > > debugs/warns at the same time.
> > > > > 
> > > > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > > > ---
> > > > >  .../drm/i915/display/intel_display_power.c    | 40 +++++++++----------
> > > > >  1 file changed, 19 insertions(+), 21 deletions(-)
> > > > > 
> > > > > diff --git
> > > > > a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > > b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > > index 6e25a1317161..e81e561e8ac0 100644
> > > > > --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > > +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> > > > > @@ -4433,11 +4433,12 @@ static void intel_power_domains_sync_hw(struct drm_i915_private *dev_priv)
> > > > >  	mutex_unlock(&power_domains->lock);
> > > > >  }
> > > > >  
> > > > > -static inline
> > > > > -bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > > > > -			  i915_reg_t reg, bool enable)
> > > > > +static void intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > > > > +				 enum dbuf_slice slice, bool enable)
> > > > >  {
> > > > > -	u32 val, status;
> > > > > +	i915_reg_t reg = DBUF_CTL_S(slice);
> > > > > +	bool state;
> > > > > +	u32 val;
> > > > >  
> > > > >  	val = intel_de_read(dev_priv, reg);
> > > > >  	val = enable ? (val | DBUF_POWER_REQUEST) : (val & ~DBUF_POWER_REQUEST);
> > > > > @@ -4445,13 +4446,10 @@ bool intel_dbuf_slice_set(struct drm_i915_private *dev_priv,
> > > > >  	intel_de_posting_read(dev_priv, reg);
> > > > >  	udelay(10);
> > > > >  
> > > > > -	status = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > > > > -	if ((enable && !status) || (!enable && status)) {
> > > > > -		drm_err(&dev_priv->drm, "DBus power %s timeout!\n",
> > > > > -			enable ? "enable" : "disable");
> > > > > -		return false;
> > > > > -	}
> > > > > -	return true;
> > > > > +	state = intel_de_read(dev_priv, reg) & DBUF_POWER_STATE;
> > > > > +	drm_WARN(&dev_priv->drm, enable != state,
> > > > > +		 "DBuf slice %d power %s timeout!\n",
> > > > > +		 slice, enable ? "enable" : "disable");
> > > > >  }
> > > > >  
> > > > >  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
> > > > > @@ -4467,14 +4465,16 @@ static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> > > > >  void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > > >  			    u8 req_slices)
> > > > >  {
> > > > > -	int i;
> > > > > -	int max_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
> > > > > +	int num_slices = INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
> > > > >  	struct i915_power_domains *power_domains = &dev_priv->power_domains;
> > > > > +	enum dbuf_slice slice;
> > > > >  
> > > > > -	drm_WARN(&dev_priv->drm, hweight8(req_slices) > max_slices,
> > > > > -		 "Invalid number of dbuf slices requested\n");
> > > > > +	drm_WARN(&dev_priv->drm, req_slices & ~(BIT(num_slices) - 1),
> > > > > +		 "Invalid set of dbuf slices (0x%x) requested (num dbuf slices %d)\n",
> > > > > +		 req_slices, num_slices);
> > > > > 
> > > > > -	DRM_DEBUG_KMS("Updating dbuf slices to 0x%x\n", req_slices);
> > > > > +	drm_dbg_kms(&dev_priv->drm,
> > > > > +		    "Updating dbuf slices to 0x%x\n", req_slices);
> > > > >  
> > > > >  	/*
> > > > >  	 * Might be running this in parallel to gen9_dc_off_power_well_enable
> > > > > @@ -4485,11 +4485,9 @@ void icl_dbuf_slices_update(struct drm_i915_private *dev_priv,
> > > > >  	 */
> > > > >  	mutex_lock(&power_domains->lock);
> > > > >  
> > > > > -	for (i = 0; i < max_slices; i++) {
> > > > > -		intel_dbuf_slice_set(dev_priv,
> > > > > -				     DBUF_CTL_S(i),
> > > > > -				     (req_slices & BIT(i)) != 0);
> > > > > -	}
> > > > > +	for (slice = DBUF_S1; slice < num_slices; slice++)
> > > > > +		intel_dbuf_slice_set(dev_priv, slice,
> > > > > +				     req_slices & BIT(slice));
> > > > 
> > > > Would be cool to completely get rid of any magic numbers or
> > > > definitions; 0 in a sense is more universal here than DBUF_S1.
> > > > 
> > > > If we are counting slices as numbers it seems logical that we
> > > > iterate over the [0..num_slices) range. If you want to name the
> > > > first slice explicitly then it probably has to be something like
> > > > iterator logic, i.e. for (slice = FIRST_SLICE; slice != LAST_SLICE;
> > > > slice++).
> > > > 
> > > > But trying to name it while at the same time comparing it to the
> > > > total _amount_ looks a bit confusing.
> > > 
> > > This is the standard pattern used all over the driver.
> > 
> > Well, you can enumerate objects using their qualitative or quantitative
> > characteristics: for instance, with the alphabet you would either
> > enumerate letters (start from A and count until you reach Z), or
> > take indexes (start from index 0 and count until it becomes 26).
> > 
> > What happens here is mixing those: i.e. take letter A and count until
> > it becomes 26, i.e. mixing the name of an object with its index, so
> > hopefully DBUF_S1 will always be defined as 0 :D
> 
> The old code assumed DBUF_S1==0 for the purposes of passing it to
> DBUF_CTL(), and for the purposes of the BIT(int).
> 
> The new code assumes DBUF_S1 == 0 for the purposes of terminating
> the iteration.
> 
> Suo siellä, vetelä täällä. (A swamp there, a quagmire here - i.e. bad either way.)
> 
> We may actually need to change the device info to contain a dbuf slice
> mask instead of just the number of slices. That's in case some hw has
> some slices fused off (not sure that's actually a thing but maybe,
> need to check the spec at some point for this). At that point we
> probably want to stash the whole thing into a for_each_dbuf_slice().

Yes, for_each_dbuf_slice sounds much better indeed, that way you
neither have to use indexes nor start from some explicitly hardcoded
slice.
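
Roughly something like this, I suppose (just a sketch; the slice mask in
the device info and the max slice count don't exist yet, so those names
are hypothetical):

#define for_each_dbuf_slice_in_mask(__slice, __mask) \
	for ((__slice) = DBUF_S1; (__slice) < I915_MAX_DBUF_SLICES; (__slice)++) \
		for_each_if((__mask) & BIT(__slice))

#define for_each_dbuf_slice(__dev_priv, __slice) \
	for_each_dbuf_slice_in_mask((__slice), INTEL_INFO(__dev_priv)->dbuf_slice_mask)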

> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms
  2020-03-02 14:50     ` Ville Syrjälä
  2020-03-02 15:50       ` Lisovskiy, Stanislav
@ 2020-04-01  7:52       ` Lisovskiy, Stanislav
  1 sibling, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-04-01  7:52 UTC (permalink / raw)
  To: Ville Syrjälä; +Cc: intel-gfx

On Mon, Mar 02, 2020 at 04:50:37PM +0200, Ville Syrjälä wrote:
> On Tue, Feb 25, 2020 at 05:30:57PM +0000, Lisovskiy, Stanislav wrote:
> > On Tue, 2020-02-25 at 19:11 +0200, Ville Syrjala wrote:
> > > From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > 
> > > Currently skl_compute_dbuf_slices() returns 0 for any inactive pipe on
> > > icl+, but returns BIT(S1) on pre-icl for any pipe (whether it's active
> > > or not). Let's make the behaviour consistent and always return 0 for
> > > any inactive pipe.
> > > 
> > > Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> > > Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/intel_pm.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/intel_pm.c
> > > b/drivers/gpu/drm/i915/intel_pm.c
> > > index a2e78969c0df..640f4c4fd508 100644
> > > --- a/drivers/gpu/drm/i915/intel_pm.c
> > > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > > @@ -4408,7 +4408,7 @@ static u8 skl_compute_dbuf_slices(const struct
> > > intel_crtc_state *crtc_state,
> > >  	 * For anything else just return one slice yet.
> > >  	 * Should be extended for other platforms.
> > >  	 */
> > > -	return BIT(DBUF_S1);
> > > +	return active_pipes & BIT(pipe) ? BIT(DBUF_S1) : 0;
> > 
> > I think the initial idea was that this won't even be called if there
> > are no active pipes at all - skl_ddb_get_pipe_allocation_limits would
> > bail out immediately. If there were some active pipes - then we will
> > have to use slice S1 anyway - because there were simply no other slices
> > available. If some pipes were inactive - they are currently skipped by
> > the !crtc_state->hw.active check - so I would just keep it simple and
> > not call this function for non-active pipes at all.
> 
> That's just going to make the caller more messy by forcing it to
> check for active_pipes 0 vs. not. Ie. we'd be splitting the
> responsibility of computing the dbuf slices for this pipe between
> skl_compute_dbuf_slices() and its caller. Not a good idea IMO.
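
To illustrate the point, the difference at the call sites would be roughly
this (a hypothetical sketch, not code from the series):

	/* with the patch: the helper handles inactive pipes itself */
	dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, active_pipes);

	/* without it: every caller has to special-case inactive pipes */
	if (active_pipes & BIT(crtc->pipe))
		dbuf_slice_mask = skl_compute_dbuf_slices(crtc_state, active_pipes);
	else
		dbuf_slice_mask = 0;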

Let's ramp it up. As I understood from your comments, we still need dbuf_state.
I would anyway add another table for handling this in some unified manner at least,
but I don't want to spend another couple of months discussing that :)

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>


> 
> -- 
> Ville Syrjälä
> Intel
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state Ville Syrjala
  2020-02-25 17:43   ` Lisovskiy, Stanislav
@ 2020-04-01  8:13   ` Lisovskiy, Stanislav
  1 sibling, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2020-04-01  8:13 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

On Tue, Feb 25, 2020 at 07:11:13PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Add a global state to track the dbuf slices. Gets rid of all the nasty
> coupling between state->modeset and dbuf recomputation. Also we can now
> totally nuke state->active_pipe_changes.
> 
> dev_priv->wm.distrust_bios_wm still remains, but that too will get
> nuked soon.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  67 +++++--
>  .../drm/i915/display/intel_display_power.c    |   8 +-
>  .../drm/i915/display/intel_display_types.h    |  13 --
>  drivers/gpu/drm/i915/i915_drv.h               |  11 +-
>  drivers/gpu/drm/i915/intel_pm.c               | 189 ++++++++++++------
>  drivers/gpu/drm/i915/intel_pm.h               |  22 ++
>  6 files changed, 209 insertions(+), 101 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 6952c398cc43..659b952c8e2f 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -7581,6 +7581,8 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
>  		to_intel_bw_state(dev_priv->bw_obj.state);
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(dev_priv->cdclk.obj.state);
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
>  	struct intel_crtc_state *crtc_state =
>  		to_intel_crtc_state(crtc->base.state);
>  	enum intel_display_power_domain domain;
> @@ -7654,6 +7656,8 @@ static void intel_crtc_disable_noatomic(struct intel_crtc *crtc,
>  	cdclk_state->min_voltage_level[pipe] = 0;
>  	cdclk_state->active_pipes &= ~BIT(pipe);
>  
> +	dbuf_state->active_pipes &= ~BIT(pipe);
> +
>  	bw_state->data_rate[pipe] = 0;
>  	bw_state->num_active_planes[pipe] = 0;
>  }
> @@ -13991,10 +13995,10 @@ static void verify_wm_state(struct intel_crtc *crtc,
>  	hw_enabled_slices = intel_enabled_dbuf_slices_mask(dev_priv);
>  
>  	if (INTEL_GEN(dev_priv) >= 11 &&
> -	    hw_enabled_slices != dev_priv->enabled_dbuf_slices_mask)
> +	    hw_enabled_slices != dev_priv->dbuf.enabled_slices)
>  		drm_err(&dev_priv->drm,
>  			"mismatch in DBUF Slices (expected 0x%x, got 0x%x)\n",
> -			dev_priv->enabled_dbuf_slices_mask,
> +			dev_priv->dbuf.enabled_slices,
>  			hw_enabled_slices);
>  
>  	/* planes */
> @@ -14529,9 +14533,7 @@ static int intel_modeset_checks(struct intel_atomic_state *state)
>  	state->modeset = true;
>  	state->active_pipes = intel_calc_active_pipes(state, dev_priv->active_pipes);
>  
> -	state->active_pipe_changes = state->active_pipes ^ dev_priv->active_pipes;
> -
> -	if (state->active_pipe_changes) {
> +	if (state->active_pipes != dev_priv->active_pipes) {
>  		ret = _intel_atomic_lock_global_state(state);
>  		if (ret)
>  			return ret;
> @@ -15292,22 +15294,38 @@ static void intel_update_trans_port_sync_crtcs(struct intel_crtc *crtc,
>  static void icl_dbuf_slice_pre_update(struct intel_atomic_state *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> -	u8 required_slices = state->enabled_dbuf_slices_mask;
> -	u8 slices_union = hw_enabled_slices | required_slices;
> +	const struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(state);
>  
> -	if (INTEL_GEN(dev_priv) >= 11 && slices_union != hw_enabled_slices)
> -		gen9_dbuf_slices_update(dev_priv, slices_union);
> +	if (!new_dbuf_state ||
> +	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
> +		return;
> +
> +	WARN_ON(!new_dbuf_state->base.changed);
> +
> +	gen9_dbuf_slices_update(dev_priv,
> +				old_dbuf_state->enabled_slices |
> +				new_dbuf_state->enabled_slices);
>  }
>  
>  static void icl_dbuf_slice_post_update(struct intel_atomic_state *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	u8 hw_enabled_slices = dev_priv->enabled_dbuf_slices_mask;
> -	u8 required_slices = state->enabled_dbuf_slices_mask;
> +	const struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(state);
>  
> -	if (INTEL_GEN(dev_priv) >= 11 && required_slices != hw_enabled_slices)
> -		gen9_dbuf_slices_update(dev_priv, required_slices);
> +	if (!new_dbuf_state ||
> +	    new_dbuf_state->enabled_slices == old_dbuf_state->enabled_slices)
> +		return;
> +
> +	WARN_ON(!new_dbuf_state->base.changed);
> +
> +	gen9_dbuf_slices_update(dev_priv,
> +				new_dbuf_state->enabled_slices);
>  }
>  
>  static void skl_commit_modeset_enables(struct intel_atomic_state *state)
> @@ -15562,9 +15580,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
>  	if (state->modeset)
>  		intel_encoders_update_prepare(state);
>  
> -	/* Enable all new slices, we might need */
> -	if (state->modeset)
> -		icl_dbuf_slice_pre_update(state);
> +	icl_dbuf_slice_pre_update(state);
>  
>  	/* Now enable the clocks, plane, pipe, and connectors that we set up. */
>  	dev_priv->display.commit_modeset_enables(state);
> @@ -15619,9 +15635,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
>  			dev_priv->display.optimize_watermarks(state, crtc);
>  	}
>  
> -	/* Disable all slices, we don't need */
> -	if (state->modeset)
> -		icl_dbuf_slice_post_update(state);
> +	icl_dbuf_slice_post_update(state);
>  
>  	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
>  		intel_post_plane_update(state, crtc);
> @@ -17507,10 +17521,14 @@ void intel_modeset_init_hw(struct drm_i915_private *i915)
>  {
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(i915->cdclk.obj.state);
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(i915->dbuf.obj.state);
>  
>  	intel_update_cdclk(i915);
>  	intel_dump_cdclk_config(&i915->cdclk.hw, "Current CDCLK");
>  	cdclk_state->logical = cdclk_state->actual = i915->cdclk.hw;
> +
> +	dbuf_state->enabled_slices = i915->dbuf.enabled_slices;
>  }
>  
>  static int sanitize_watermarks_add_affected(struct drm_atomic_state *state)
> @@ -17800,6 +17818,10 @@ int intel_modeset_init(struct drm_i915_private *i915)
>  	if (ret)
>  		return ret;
>  
> +	ret = intel_dbuf_init(i915);
> +	if (ret)
> +		return ret;
> +
>  	ret = intel_bw_init(i915);
>  	if (ret)
>  		return ret;
> @@ -18303,6 +18325,8 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
>  	struct drm_i915_private *dev_priv = to_i915(dev);
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(dev_priv->cdclk.obj.state);
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
>  	enum pipe pipe;
>  	struct intel_crtc *crtc;
>  	struct intel_encoder *encoder;
> @@ -18334,7 +18358,8 @@ static void intel_modeset_readout_hw_state(struct drm_device *dev)
>  			    enableddisabled(crtc_state->hw.active));
>  	}
>  
> -	dev_priv->active_pipes = cdclk_state->active_pipes = active_pipes;
> +	dev_priv->active_pipes = cdclk_state->active_pipes =
> +		dbuf_state->active_pipes = active_pipes;

LGTM, however the active_pipes duplication still looks redundant.
It can easily go out of sync somewhere.
Would be nice to do something about this. However, yet again,
my opinion is that it is more important to keep things moving forward now.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>


>  
>  	readout_plane_state(dev_priv);
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
> index ce3bbc4c7a27..dc0c9694b714 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_power.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_power.c
> @@ -1062,7 +1062,7 @@ static bool gen9_dc_off_power_well_enabled(struct drm_i915_private *dev_priv,
>  static void gen9_assert_dbuf_enabled(struct drm_i915_private *dev_priv)
>  {
>  	u8 hw_enabled_dbuf_slices = intel_enabled_dbuf_slices_mask(dev_priv);
> -	u8 enabled_dbuf_slices = dev_priv->enabled_dbuf_slices_mask;
> +	u8 enabled_dbuf_slices = dev_priv->dbuf.enabled_slices;
>  
>  	drm_WARN(&dev_priv->drm,
>  		 hw_enabled_dbuf_slices != enabled_dbuf_slices,
> @@ -4481,14 +4481,14 @@ void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
>  	for (slice = DBUF_S1; slice < num_slices; slice++)
>  		gen9_dbuf_slice_set(dev_priv, slice, req_slices & BIT(slice));
>  
> -	dev_priv->enabled_dbuf_slices_mask = req_slices;
> +	dev_priv->dbuf.enabled_slices = req_slices;
>  
>  	mutex_unlock(&power_domains->lock);
>  }
>  
>  static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
>  {
> -	dev_priv->enabled_dbuf_slices_mask =
> +	dev_priv->dbuf.enabled_slices =
>  		intel_enabled_dbuf_slices_mask(dev_priv);
>  
>  	/*
> @@ -4496,7 +4496,7 @@ static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
>  	 * figure out later which slices we have and what we need.
>  	 */
>  	gen9_dbuf_slices_update(dev_priv, BIT(DBUF_S1) |
> -				dev_priv->enabled_dbuf_slices_mask);
> +				dev_priv->dbuf.enabled_slices);
>  }
>  
>  static void gen9_dbuf_disable(struct drm_i915_private *dev_priv)
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 0d8a64305464..165efa00d88b 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -471,16 +471,6 @@ struct intel_atomic_state {
>  
>  	bool dpll_set, modeset;
>  
> -	/*
> -	 * Does this transaction change the pipes that are active?  This mask
> -	 * tracks which CRTC's have changed their active state at the end of
> -	 * the transaction (not counting the temporary disable during modesets).
> -	 * This mask should only be non-zero when intel_state->modeset is true,
> -	 * but the converse is not necessarily true; simply changing a mode may
> -	 * not flip the final active status of any CRTC's
> -	 */
> -	u8 active_pipe_changes;
> -
>  	u8 active_pipes;
>  
>  	struct intel_shared_dpll_state shared_dpll[I915_NUM_PLLS];
> @@ -498,9 +488,6 @@ struct intel_atomic_state {
>  	 */
>  	bool global_state_changed;
>  
> -	/* Number of enabled DBuf slices */
> -	u8 enabled_dbuf_slices_mask;
> -
>  	struct i915_sw_fence commit_ready;
>  
>  	struct llist_node freed;
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 88e4fb8ac739..d03c84f373e6 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1006,6 +1006,13 @@ struct drm_i915_private {
>  		struct intel_global_obj obj;
>  	} cdclk;
>  
> +	struct {
> +		/* The current hardware dbuf configuration */
> +		u8 enabled_slices;
> +
> +		struct intel_global_obj obj;
> +	} dbuf;
> +
>  	/**
>  	 * wq - Driver workqueue for GEM.
>  	 *
> @@ -1181,12 +1188,12 @@ struct drm_i915_private {
>  		 * Set during HW readout of watermarks/DDB.  Some platforms
>  		 * need to know when we're still using BIOS-provided values
>  		 * (which we don't fully trust).
> +		 *
> +		 * FIXME get rid of this.
>  		 */
>  		bool distrust_bios_wm;
>  	} wm;
>  
> -	u8 enabled_dbuf_slices_mask; /* GEN11 has configurable 2 slices */
> -
>  	struct dram_info {
>  		bool valid;
>  		bool is_16gb_dimm;
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 640f4c4fd508..d4730d9b4e1b 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3845,7 +3845,7 @@ static u16 intel_get_ddb_size(struct drm_i915_private *dev_priv)
>  static u8 skl_compute_dbuf_slices(const struct intel_crtc_state *crtc_state,
>  				  u8 active_pipes);
>  
> -static void
> +static int
>  skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
>  				   const struct intel_crtc_state *crtc_state,
>  				   const u64 total_data_rate,
> @@ -3858,30 +3858,29 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
>  	const struct intel_crtc *crtc;
>  	u32 pipe_width = 0, total_width_in_range = 0, width_before_pipe_in_range = 0;
>  	enum pipe for_pipe = to_intel_crtc(for_crtc)->pipe;
> +	struct intel_dbuf_state *new_dbuf_state =
> +		intel_atomic_get_new_dbuf_state(intel_state);
> +	const struct intel_dbuf_state *old_dbuf_state =
> +		intel_atomic_get_old_dbuf_state(intel_state);
> +	u8 active_pipes = new_dbuf_state->active_pipes;
>  	u16 ddb_size;
>  	u32 ddb_range_size;
>  	u32 i;
>  	u32 dbuf_slice_mask;
> -	u32 active_pipes;
>  	u32 offset;
>  	u32 slice_size;
>  	u32 total_slice_mask;
>  	u32 start, end;
> +	int ret;
>  
> -	if (drm_WARN_ON(&dev_priv->drm, !state) || !crtc_state->hw.active) {
> +	*num_active = hweight8(active_pipes);
> +
> +	if (!crtc_state->hw.active) {
>  		alloc->start = 0;
>  		alloc->end = 0;
> -		*num_active = hweight8(dev_priv->active_pipes);
> -		return;
> +		return 0;
>  	}
>  
> -	if (intel_state->active_pipe_changes)
> -		active_pipes = intel_state->active_pipes;
> -	else
> -		active_pipes = dev_priv->active_pipes;
> -
> -	*num_active = hweight8(active_pipes);
> -
>  	ddb_size = intel_get_ddb_size(dev_priv);
>  
>  	slice_size = ddb_size / INTEL_INFO(dev_priv)->num_supported_dbuf_slices;
> @@ -3894,13 +3893,16 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
>  	 * that changes the active CRTC list or do modeset would need to
>  	 * grab _all_ crtc locks, including the one we currently hold.
>  	 */
> -	if (!intel_state->active_pipe_changes && !intel_state->modeset) {
> +	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
> +	    !dev_priv->wm.distrust_bios_wm) {
>  		/*
>  		 * alloc may be cleared by clear_intel_crtc_state,
>  		 * copy from old state to be sure
> +		 *
> +		 * FIXME get rid of this mess
>  		 */
>  		*alloc = to_intel_crtc_state(for_crtc->state)->wm.skl.ddb;
> -		return;
> +		return 0;
>  	}
>  
>  	/*
> @@ -3979,7 +3981,13 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
>  	 * FIXME: For now we always enable slice S1 as per
>  	 * the Bspec display initialization sequence.
>  	 */
> -	intel_state->enabled_dbuf_slices_mask = total_slice_mask | BIT(DBUF_S1);
> +	new_dbuf_state->enabled_slices = total_slice_mask | BIT(DBUF_S1);
> +
> +	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
> +		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
> +		if (ret)
> +			return ret;
> +	}
>  
>  	start = ddb_range_size * width_before_pipe_in_range / total_width_in_range;
>  	end = ddb_range_size *
> @@ -3990,9 +3998,8 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
>  
>  	DRM_DEBUG_KMS("Pipe %d ddb %d-%d\n", for_pipe,
>  		      alloc->start, alloc->end);
> -	DRM_DEBUG_KMS("Enabled ddb slices mask %x num supported %d\n",
> -		      intel_state->enabled_dbuf_slices_mask,
> -		      INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
> +
> +	return 0;
>  }
>  
>  static int skl_compute_wm_params(const struct intel_crtc_state *crtc_state,
> @@ -4112,8 +4119,8 @@ void skl_pipe_ddb_get_hw_state(struct intel_crtc *crtc,
>  
>  void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv)
>  {
> -	dev_priv->enabled_dbuf_slices_mask =
> -				intel_enabled_dbuf_slices_mask(dev_priv);
> +	dev_priv->dbuf.enabled_slices =
> +		intel_enabled_dbuf_slices_mask(dev_priv);
>  }
>  
>  /*
> @@ -4546,6 +4553,7 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
>  	u32 blocks;
>  	int level;
> +	int ret;
>  
>  	/* Clear the partitioning for disabled planes. */
>  	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y));
> @@ -4567,8 +4575,12 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  							 uv_plane_data_rate);
>  
>  
> -	skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state, total_data_rate,
> -					   alloc, &num_active);
> +	ret = skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
> +						 total_data_rate,
> +						 alloc, &num_active);
> +	if (ret)
> +		return ret;
> +
>  	alloc_size = skl_ddb_entry_size(alloc);
>  	if (alloc_size == 0)
>  		return 0;
> @@ -5451,14 +5463,11 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state,
>  static int
>  skl_compute_ddb(struct intel_atomic_state *state)
>  {
> -	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>  	struct intel_crtc_state *old_crtc_state;
>  	struct intel_crtc_state *new_crtc_state;
>  	struct intel_crtc *crtc;
>  	int ret, i;
>  
> -	state->enabled_dbuf_slices_mask = dev_priv->enabled_dbuf_slices_mask;
> -
>  	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
>  					    new_crtc_state, i) {
>  		ret = skl_allocate_pipe_ddb(new_crtc_state);
> @@ -5598,7 +5607,8 @@ skl_print_wm_changes(struct intel_atomic_state *state)
>  	}
>  }
>  
> -static int intel_add_all_pipes(struct intel_atomic_state *state)
> +static int intel_add_affected_pipes(struct intel_atomic_state *state,
> +				    u8 pipe_mask)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>  	struct intel_crtc *crtc;
> @@ -5606,6 +5616,9 @@ static int intel_add_all_pipes(struct intel_atomic_state *state)
>  	for_each_intel_crtc(&dev_priv->drm, crtc) {
>  		struct intel_crtc_state *crtc_state;
>  
> +		if ((pipe_mask & BIT(crtc->pipe)) == 0)
> +			continue;
> +
>  		crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
>  		if (IS_ERR(crtc_state))
>  			return PTR_ERR(crtc_state);
> @@ -5618,49 +5631,54 @@ static int
>  skl_ddb_add_affected_pipes(struct intel_atomic_state *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	int ret;
> +	struct intel_crtc_state *crtc_state;
> +	struct intel_crtc *crtc;
> +	int i, ret;
>  
> -	/*
> -	 * If this is our first atomic update following hardware readout,
> -	 * we can't trust the DDB that the BIOS programmed for us.  Let's
> -	 * pretend that all pipes switched active status so that we'll
> -	 * ensure a full DDB recompute.
> -	 */
>  	if (dev_priv->wm.distrust_bios_wm) {
> -		ret = drm_modeset_lock(&dev_priv->drm.mode_config.connection_mutex,
> -				       state->base.acquire_ctx);
> -		if (ret)
> -			return ret;
> -
> -		state->active_pipe_changes = INTEL_INFO(dev_priv)->pipe_mask;
> -
>  		/*
> -		 * We usually only initialize state->active_pipes if we
> -		 * we're doing a modeset; make sure this field is always
> -		 * initialized during the sanitization process that happens
> -		 * on the first commit too.
> +		 * skl_ddb_get_pipe_allocation_limits() currently requires
> +		 * all active pipes to be included in the state so that
> +		 * it can redistribute the dbuf among them, and it really
> +		 * wants to recompute things when distrust_bios_wm is set
> +		 * so we add all the pipes to the state.
>  		 */
> -		if (!state->modeset)
> -			state->active_pipes = dev_priv->active_pipes;
> +		ret = intel_add_affected_pipes(state, ~0);
> +		if (ret)
> +			return ret;
>  	}
>  
> -	/*
> -	 * If the modeset changes which CRTC's are active, we need to
> -	 * recompute the DDB allocation for *all* active pipes, even
> -	 * those that weren't otherwise being modified in any way by this
> -	 * atomic commit.  Due to the shrinking of the per-pipe allocations
> -	 * when new active CRTC's are added, it's possible for a pipe that
> -	 * we were already using and aren't changing at all here to suddenly
> -	 * become invalid if its DDB needs exceeds its new allocation.
> -	 *
> -	 * Note that if we wind up doing a full DDB recompute, we can't let
> -	 * any other display updates race with this transaction, so we need
> -	 * to grab the lock on *all* CRTC's.
> -	 */
> -	if (state->active_pipe_changes || state->modeset) {
> -		ret = intel_add_all_pipes(state);
> +	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
> +		struct intel_dbuf_state *new_dbuf_state;
> +		const struct intel_dbuf_state *old_dbuf_state;
> +
> +		new_dbuf_state = intel_atomic_get_dbuf_state(state);
> +		if (IS_ERR(new_dbuf_state))
> +			return ret;
> +
> +		old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
> +
> +		new_dbuf_state->active_pipes =
> +			intel_calc_active_pipes(state, old_dbuf_state->active_pipes);
> +
> +		if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes)
> +			break;
> +
> +		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
> +		if (ret)
> +			return ret;
> +
> +		/*
> +		 * skl_ddb_get_pipe_allocation_limits() currently requires
> +		 * all active pipes to be included in the state so that
> +		 * it can redistribute the dbuf among them.
> +		 */
> +		ret = intel_add_affected_pipes(state,
> +					       new_dbuf_state->active_pipes);
>  		if (ret)
>  			return ret;
> +
> +		break;
>  	}
>  
>  	return 0;
> @@ -7493,3 +7511,52 @@ void intel_pm_setup(struct drm_i915_private *dev_priv)
>  	dev_priv->runtime_pm.suspended = false;
>  	atomic_set(&dev_priv->runtime_pm.wakeref_count, 0);
>  }
> +
> +static struct intel_global_state *intel_dbuf_duplicate_state(struct intel_global_obj *obj)
> +{
> +	struct intel_dbuf_state *dbuf_state;
> +
> +	dbuf_state = kmemdup(obj->state, sizeof(*dbuf_state), GFP_KERNEL);
> +	if (!dbuf_state)
> +		return NULL;
> +
> +	return &dbuf_state->base;
> +}
> +
> +static void intel_dbuf_destroy_state(struct intel_global_obj *obj,
> +				     struct intel_global_state *state)
> +{
> +	kfree(state);
> +}
> +
> +static const struct intel_global_state_funcs intel_dbuf_funcs = {
> +	.atomic_duplicate_state = intel_dbuf_duplicate_state,
> +	.atomic_destroy_state = intel_dbuf_destroy_state,
> +};
> +
> +struct intel_dbuf_state *
> +intel_atomic_get_dbuf_state(struct intel_atomic_state *state)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> +	struct intel_global_state *dbuf_state;
> +
> +	dbuf_state = intel_atomic_get_global_obj_state(state, &dev_priv->dbuf.obj);
> +	if (IS_ERR(dbuf_state))
> +		return ERR_CAST(dbuf_state);
> +
> +	return to_intel_dbuf_state(dbuf_state);
> +}
> +
> +int intel_dbuf_init(struct drm_i915_private *dev_priv)
> +{
> +	struct intel_dbuf_state *dbuf_state;
> +
> +	dbuf_state = kzalloc(sizeof(*dbuf_state), GFP_KERNEL);
> +	if (!dbuf_state)
> +		return -ENOMEM;
> +
> +	intel_atomic_global_obj_init(dev_priv, &dev_priv->dbuf.obj,
> +				     &dbuf_state->base, &intel_dbuf_funcs);
> +
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
> index d60a85421c5a..fadf7cbc44c4 100644
> --- a/drivers/gpu/drm/i915/intel_pm.h
> +++ b/drivers/gpu/drm/i915/intel_pm.h
> @@ -8,6 +8,8 @@
>  
>  #include <linux/types.h>
>  
> +#include "display/intel_global_state.h"
> +
>  #include "i915_reg.h"
>  
>  struct drm_device;
> @@ -59,4 +61,24 @@ void intel_enable_ipc(struct drm_i915_private *dev_priv);
>  
>  bool intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable);
>  
> +struct intel_dbuf_state {
> +	struct intel_global_state base;
> +
> +	u8 enabled_slices;
> +	u8 active_pipes;
> +};
> +
> +int intel_dbuf_init(struct drm_i915_private *dev_priv);
> +
> +struct intel_dbuf_state *
> +intel_atomic_get_dbuf_state(struct intel_atomic_state *state);
> +
> +#define to_intel_dbuf_state(x) container_of((x), struct intel_dbuf_state, base)
> +#define intel_atomic_get_old_dbuf_state(state) \
> +	to_intel_dbuf_state(intel_atomic_get_old_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))
> +#define intel_atomic_get_new_dbuf_state(state) \
> +	to_intel_dbuf_state(intel_atomic_get_new_global_obj_state(state, &to_i915(state->base.dev)->dbuf.obj))
> +
> +int intel_dbuf_init(struct drm_i915_private *dev_priv);
> +
>  #endif /* __INTEL_PM_H__ */
> -- 
> 2.24.1
> 
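
For reference, a minimal usage sketch of the new accessors (the function
name below is made up; the pattern is the same one the patch itself uses
in skl_ddb_add_affected_pipes()):

static int example_update_dbuf_active_pipes(struct intel_atomic_state *state)
{
	struct intel_dbuf_state *new_dbuf_state;
	const struct intel_dbuf_state *old_dbuf_state;

	/* duplicates the global dbuf object into this atomic state */
	new_dbuf_state = intel_atomic_get_dbuf_state(state);
	if (IS_ERR(new_dbuf_state))
		return PTR_ERR(new_dbuf_state);

	old_dbuf_state = intel_atomic_get_old_dbuf_state(state);

	new_dbuf_state->active_pipes =
		intel_calc_active_pipes(state, old_dbuf_state->active_pipes);

	/* lock the global state only if it actually changed */
	if (old_dbuf_state->active_pipes != new_dbuf_state->active_pipes)
		return intel_atomic_lock_global_state(&new_dbuf_state->base);

	return 0;
}
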
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 18/20] drm/i915: Encapsulate dbuf state handling harder
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 18/20] drm/i915: Encapsulate dbuf state handling harder Ville Syrjala
@ 2021-01-21 12:55   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2021-01-21 12:55 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

On Tue, Feb 25, 2020 at 07:11:23PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> In order to make the dbuf state computation less fragile
> let's make it stand on its own feet by not requiring someone
> to peek into a crystal ball ahead of time to figure out
> which pipes need to be added to the state under which potential
> future conditions. Instead we compute each piece of the state
> as we go along, and if any fallout occurs that affects more than
> the current set of pipes we add the affected pipes to the state
> naturally.
> 
> That requires that we track a few extra things in the global
> dbuf state: dbuf slices for each pipe, and the weight each
> pipe has when distributing the same set of slice(s) between
> multiple pipes. Easy enough.
> 
> We do need to follow a somewhat careful sequence of computations
> though as there are several steps involved in cooking up the dbuf
> state. Though we could avoid some of that by computing more things
> on demand instead of relying on an earlier step of the algorithm to
> have filled it out. I think the end result is still reasonable
> as the entire sequence is pretty much consolidated into a single
> function instead of being spread around all over.
> 
> The rough sequence is this:
> 1. calculate active_pipes
> 2. calculate dbuf slices for every pipe
> 3. calculate total enabled slices
> 4. calculate new dbuf weights for any crtc in the state
> 5. calculate new ddb entry for every pipe based on the sets of
>    slices and weights, and add any affected crtc to the state
> 6. calculate new plane ddb entries for all crtcs in the state,
>    and add any affected plane to the state so that we'll perform
>    the requisite hw reprogramming
> 
> And as a nice bonus we get to throw dev_priv->wm.distrust_bios_wm
> out the window.

So nice that we finally get those long-awaited separate states for
dbuf, cdclk, etc.

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
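
To make the weight-based split in step 5 concrete, a small worked example
(the numbers are purely illustrative):

/*
 * Two pipes, A and B, share the same slice set; ddb_range_size = 512
 * blocks. intel_crtc_ddb_weight() gives 1920 for A and 2560 for B, so
 * weight_total = 4480. With start/end = ddb_range_size * weight / weight_total:
 *
 *   pipe A: start = 512 *    0 / 4480 = 0,   end = 512 * 1920 / 4480 = 219
 *   pipe B: start = 512 * 1920 / 4480 = 219, end = 512 * 4480 / 4480 = 512
 *
 * i.e. the shared ddb range is split proportionally to the pipes' widths.
 */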

> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display.c  |  15 -
>  .../drm/i915/display/intel_display_debugfs.c  |   1 -
>  drivers/gpu/drm/i915/i915_drv.h               |   9 -
>  drivers/gpu/drm/i915/intel_pm.c               | 356 +++++++-----------
>  drivers/gpu/drm/i915/intel_pm.h               |   2 +
>  5 files changed, 138 insertions(+), 245 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 26e4462151a6..e3df43f3932d 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -14856,20 +14856,6 @@ static int intel_atomic_check(struct drm_device *dev,
>  	if (new_cdclk_state && new_cdclk_state->force_min_cdclk_changed)
>  		any_ms = true;
>  
> -	/*
> -	 * distrust_bios_wm will force a full dbuf recomputation
> -	 * but the hardware state will only get updated accordingly
> -	 * if state->modeset==true. Hence distrust_bios_wm==true &&
> -	 * state->modeset==false is an invalid combination which
> -	 * would cause the hardware and software dbuf state to get
> -	 * out of sync. We must prevent that.
> -	 *
> -	 * FIXME clean up this mess and introduce better
> -	 * state tracking for dbuf.
> -	 */
> -	if (dev_priv->wm.distrust_bios_wm)
> -		any_ms = true;
> -
>  	if (any_ms) {
>  		ret = intel_modeset_checks(state);
>  		if (ret)
> @@ -15769,7 +15755,6 @@ static int intel_atomic_commit(struct drm_device *dev,
>  		intel_runtime_pm_put(&dev_priv->runtime_pm, state->wakeref);
>  		return ret;
>  	}
> -	dev_priv->wm.distrust_bios_wm = false;
>  	intel_shared_dpll_swap_state(state);
>  	intel_atomic_track_fbs(state);
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> index 46954cc7b6c0..b505de6287e6 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> @@ -998,7 +998,6 @@ static ssize_t i915_ipc_status_write(struct file *file, const char __user *ubuf,
>  		if (!dev_priv->ipc_enabled && enable)
>  			drm_info(&dev_priv->drm,
>  				 "Enabling IPC: WM will be proper only after next commit\n");
> -		dev_priv->wm.distrust_bios_wm = true;
>  		dev_priv->ipc_enabled = enable;
>  		intel_enable_ipc(dev_priv);
>  	}
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index d03c84f373e6..317e6a468e2e 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1183,15 +1183,6 @@ struct drm_i915_private {
>  		 * crtc_state->wm.need_postvbl_update.
>  		 */
>  		struct mutex wm_mutex;
> -
> -		/*
> -		 * Set during HW readout of watermarks/DDB.  Some platforms
> -		 * need to know when we're still using BIOS-provided values
> -		 * (which we don't fully trust).
> -		 *
> -		 * FIXME get rid of this.
> -		 */
> -		bool distrust_bios_wm;
>  	} wm;
>  
>  	struct dram_info {
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 085043528f80..c11508fb3fac 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3865,56 +3865,22 @@ static unsigned int intel_crtc_ddb_weight(const struct intel_crtc_state *crtc_st
>  	return hdisplay;
>  }
>  
> -static u8 skl_compute_dbuf_slices(struct intel_crtc *crtc,
> -				  u8 active_pipes);
> -
> -static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
> -				   struct intel_crtc *for_crtc,
> -				   unsigned int *weight_start,
> -				   unsigned int *weight_end,
> -				   unsigned int *weight_total)
> +static void intel_crtc_dbuf_weights(const struct intel_dbuf_state *dbuf_state,
> +				    enum pipe for_pipe,
> +				    unsigned int *weight_start,
> +				    unsigned int *weight_end,
> +				    unsigned int *weight_total)
>  {
> -	const struct intel_dbuf_state *old_dbuf_state =
> -		intel_atomic_get_old_dbuf_state(state);
> -	struct intel_dbuf_state *new_dbuf_state =
> -		intel_atomic_get_new_dbuf_state(state);
> -	u8 active_pipes = new_dbuf_state->active_pipes;
> -	enum pipe for_pipe = for_crtc->pipe;
> -	const struct intel_crtc_state *crtc_state;
> -	struct intel_crtc *crtc;
> -	u8 dbuf_slice_mask;
> -	u8 total_slice_mask;
> -	int i, ret;
> -
> -	/*
> -	 * Get allowed DBuf slices for correspondent pipe and platform.
> -	 */
> -	dbuf_slice_mask = skl_compute_dbuf_slices(for_crtc, active_pipes);
> -	total_slice_mask = dbuf_slice_mask;
> +	struct drm_i915_private *dev_priv =
> +		to_i915(dbuf_state->base.state->base.dev);
> +	enum pipe pipe;
>  
>  	*weight_start = 0;
>  	*weight_end = 0;
>  	*weight_total = 0;
>  
> -	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
> -		enum pipe pipe = crtc->pipe;
> -		unsigned int weight;
> -		u8 pipe_dbuf_slice_mask;
> -
> -		if (!crtc_state->hw.active)
> -			continue;
> -
> -		pipe_dbuf_slice_mask =
> -			skl_compute_dbuf_slices(crtc, active_pipes);
> -
> -		/*
> -		 * According to BSpec pipe can share one dbuf slice with another
> -		 * pipes or pipe can use multiple dbufs, in both cases we
> -		 * account for other pipes only if they have exactly same mask.
> -		 * However we need to account how many slices we should enable
> -		 * in total.
> -		 */
> -		total_slice_mask |= pipe_dbuf_slice_mask;
> +	for_each_pipe(dev_priv, pipe) {
> +		int weight = dbuf_state->weight[pipe];
>  
>  		/*
>  		 * Do not account pipes using other slice sets
> @@ -3923,12 +3889,10 @@ static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
>  		 * i.e no partial intersection), so it is enough to check for
>  		 * equality for now.
>  		 */
> -		if (dbuf_slice_mask != pipe_dbuf_slice_mask)
> +		if (dbuf_state->slices[pipe] != dbuf_state->slices[for_pipe])
>  			continue;
>  
> -		weight = intel_crtc_ddb_weight(crtc_state);
>  		*weight_total += weight;
> -
>  		if (pipe < for_pipe) {
>  			*weight_start += weight;
>  			*weight_end += weight;
> @@ -3936,87 +3900,65 @@ static int intel_crtc_dbuf_weights(struct intel_atomic_state *state,
>  			*weight_end += weight;
>  		}
>  	}
> -
> -	/*
> -	 * FIXME: For now we always enable slice S1 as per
> -	 * the Bspec display initialization sequence.
> -	 */
> -	new_dbuf_state->enabled_slices = total_slice_mask | BIT(DBUF_S1);
> -
> -	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
> -		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
> -		if (ret)
> -			return ret;
> -	}
> -
> -	return 0;
>  }
>  
>  static int
> -skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
> -				   const struct intel_crtc_state *crtc_state,
> -				   const u64 total_data_rate,
> -				   struct skl_ddb_entry *alloc, /* out */
> -				   int *num_active /* out */)
> +skl_crtc_allocate_ddb(struct intel_atomic_state *state, struct intel_crtc *crtc)
>  {
> -	struct intel_atomic_state *state =
> -		to_intel_atomic_state(crtc_state->uapi.state);
> -	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
> -	unsigned int weight_start, weight_end, weight_total;
> +	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> +	unsigned int weight_total, weight_start, weight_end;
>  	const struct intel_dbuf_state *old_dbuf_state =
>  		intel_atomic_get_old_dbuf_state(state);
>  	struct intel_dbuf_state *new_dbuf_state =
>  		intel_atomic_get_new_dbuf_state(state);
> -	u8 active_pipes = new_dbuf_state->active_pipes;
> +	struct intel_crtc_state *crtc_state;
>  	struct skl_ddb_entry ddb_slices;
> +	enum pipe pipe = crtc->pipe;
>  	u32 ddb_range_size;
>  	u32 dbuf_slice_mask;
>  	u32 start, end;
>  	int ret;
>  
> -	*num_active = hweight8(active_pipes);
> -
> -	if (!crtc_state->hw.active) {
> -		alloc->start = 0;
> -		alloc->end = 0;
> -		return 0;
> +	if (new_dbuf_state->weight[pipe] == 0) {
> +		new_dbuf_state->ddb[pipe].start = 0;
> +		new_dbuf_state->ddb[pipe].end = 0;
> +		goto out;
>  	}
>  
> -	/*
> -	 * If the state doesn't change the active CRTC's or there is no
> -	 * modeset request, then there's no need to recalculate;
> -	 * the existing pipe allocation limits should remain unchanged.
> -	 * Note that we're safe from racing commits since any racing commit
> -	 * that changes the active CRTC list or do modeset would need to
> -	 * grab _all_ crtc locks, including the one we currently hold.
> -	 */
> -	if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes &&
> -	    !dev_priv->wm.distrust_bios_wm)
> -		return 0;
> -
> -	/*
> -	 * Get allowed DBuf slices for correspondent pipe and platform.
> -	 */
> -	dbuf_slice_mask = skl_compute_dbuf_slices(crtc, active_pipes);
> +	dbuf_slice_mask = new_dbuf_state->slices[pipe];
>  
>  	skl_ddb_entry_for_slices(dev_priv, dbuf_slice_mask, &ddb_slices);
>  	ddb_range_size = skl_ddb_entry_size(&ddb_slices);
>  
> -	ret = intel_crtc_dbuf_weights(state, crtc,
> -				      &weight_start, &weight_end, &weight_total);
> -	if (ret)
> -		return ret;
> +	intel_crtc_dbuf_weights(new_dbuf_state, pipe,
> +				&weight_start, &weight_end, &weight_total);
>  
>  	start = ddb_range_size * weight_start / weight_total;
>  	end = ddb_range_size * weight_end / weight_total;
>  
> -	alloc->start = ddb_slices.start + start;
> -	alloc->end = ddb_slices.start + end;
> +	new_dbuf_state->ddb[pipe].start = ddb_slices.start + start;
> +	new_dbuf_state->ddb[pipe].end = ddb_slices.start + end;
> +
> +out:
> +	if (skl_ddb_entry_equal(&old_dbuf_state->ddb[pipe],
> +				&new_dbuf_state->ddb[pipe]))
> +		return 0;
> +
> +	ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
> +	if (ret)
> +		return ret;
> +
> +	crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
> +	if (IS_ERR(crtc_state))
> +		return PTR_ERR(crtc_state);
>  
>  	drm_dbg_kms(&dev_priv->drm,
> -		    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
> +		    "[CRTC:%d:%s] dbuf slices 0x%x -> 0x%x, ddb (%d - %d) -> (%d - %d), active pipes 0x%x -> 0x%x\n",
>  		    crtc->base.base.id, crtc->base.name,
> -		    dbuf_slice_mask, alloc->start, alloc->end, active_pipes);
> +		    old_dbuf_state->slices[pipe], new_dbuf_state->slices[pipe],
> +		    old_dbuf_state->ddb[pipe].start, old_dbuf_state->ddb[pipe].end,
> +		    new_dbuf_state->ddb[pipe].start, new_dbuf_state->ddb[pipe].end,
> +		    old_dbuf_state->active_pipes, new_dbuf_state->active_pipes);
>  
>  	return 0;
>  }
> @@ -4549,35 +4491,32 @@ icl_get_total_relative_data_rate(struct intel_crtc_state *crtc_state,
>  }
>  
>  static int
> -skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
> +skl_allocate_plane_ddb(struct intel_atomic_state *state,
> +		       struct intel_crtc *crtc)
>  {
> -	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
>  	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
> -	struct intel_atomic_state *state =
> -		to_intel_atomic_state(crtc_state->uapi.state);
> -	struct intel_dbuf_state *dbuf_state =
> +	struct intel_crtc_state *crtc_state =
> +		intel_atomic_get_new_crtc_state(state, crtc);
> +	const struct intel_dbuf_state *dbuf_state =
>  		intel_atomic_get_new_dbuf_state(state);
> -	struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe];
> +	const struct skl_ddb_entry *alloc = &dbuf_state->ddb[crtc->pipe];
> +	int num_active = hweight8(dbuf_state->active_pipes);
>  	u16 alloc_size, start = 0;
>  	u16 total[I915_MAX_PLANES] = {};
>  	u16 uv_total[I915_MAX_PLANES] = {};
>  	u64 total_data_rate;
>  	enum plane_id plane_id;
> -	int num_active;
>  	u64 plane_data_rate[I915_MAX_PLANES] = {};
>  	u64 uv_plane_data_rate[I915_MAX_PLANES] = {};
>  	u32 blocks;
>  	int level;
> -	int ret;
>  
>  	/* Clear the partitioning for disabled planes. */
>  	memset(crtc_state->wm.skl.plane_ddb_y, 0, sizeof(crtc_state->wm.skl.plane_ddb_y));
>  	memset(crtc_state->wm.skl.plane_ddb_uv, 0, sizeof(crtc_state->wm.skl.plane_ddb_uv));
>  
> -	if (!crtc_state->hw.active) {
> -		alloc->start = alloc->end = 0;
> +	if (!crtc_state->hw.active)
>  		return 0;
> -	}
>  
>  	if (INTEL_GEN(dev_priv) >= 11)
>  		total_data_rate =
> @@ -4589,13 +4528,6 @@ skl_allocate_pipe_ddb(struct intel_crtc_state *crtc_state)
>  							 plane_data_rate,
>  							 uv_plane_data_rate);
>  
> -
> -	ret = skl_ddb_get_pipe_allocation_limits(dev_priv, crtc_state,
> -						 total_data_rate,
> -						 alloc, &num_active);
> -	if (ret)
> -		return ret;
> -
>  	alloc_size = skl_ddb_entry_size(alloc);
>  	if (alloc_size == 0)
>  		return 0;
> @@ -5475,39 +5407,114 @@ skl_ddb_add_affected_planes(const struct intel_crtc_state *old_crtc_state,
>  	return 0;
>  }
>  
> +static u8 intel_dbuf_enabled_slices(const struct intel_dbuf_state *dbuf_state)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(dbuf_state->base.state->base.dev);
> +	u8 enabled_slices;
> +	enum pipe pipe;
> +
> +	/*
> +	 * FIXME: For now we always enable slice S1 as per
> +	 * the Bspec display initialization sequence.
> +	 */
> +	enabled_slices = BIT(DBUF_S1);
> +
> +	for_each_pipe(dev_priv, pipe)
> +		enabled_slices |= dbuf_state->slices[pipe];
> +
> +	return enabled_slices;
> +}
> +
>  static int
>  skl_compute_ddb(struct intel_atomic_state *state)
>  {
>  	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
>  	const struct intel_dbuf_state *old_dbuf_state;
> -	const struct intel_dbuf_state *new_dbuf_state;
> +	struct intel_dbuf_state *new_dbuf_state = NULL;
>  	const struct intel_crtc_state *old_crtc_state;
>  	struct intel_crtc_state *new_crtc_state;
>  	struct intel_crtc *crtc;
>  	int ret, i;
>  
> -	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
> -					    new_crtc_state, i) {
> -		ret = skl_allocate_pipe_ddb(new_crtc_state);
> +	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
> +		new_dbuf_state = intel_atomic_get_dbuf_state(state);
> +		if (IS_ERR(new_dbuf_state))
> +			return PTR_ERR(new_dbuf_state);
> +
> +		old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
> +		break;
> +	}
> +
> +	if (!new_dbuf_state)
> +		return 0;
> +
> +	new_dbuf_state->active_pipes =
> +		intel_calc_active_pipes(state, old_dbuf_state->active_pipes);
> +
> +	if (old_dbuf_state->active_pipes != new_dbuf_state->active_pipes) {
> +		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
>  		if (ret)
>  			return ret;
> +	}
>  
> -		ret = skl_ddb_add_affected_planes(old_crtc_state,
> -						  new_crtc_state);
> +	for_each_intel_crtc(&dev_priv->drm, crtc) {
> +		enum pipe pipe = crtc->pipe;
> +
> +		new_dbuf_state->slices[pipe] =
> +			skl_compute_dbuf_slices(crtc, new_dbuf_state->active_pipes);
> +
> +		if (old_dbuf_state->slices[pipe] == new_dbuf_state->slices[pipe])
> +			continue;
> +
> +		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
>  		if (ret)
>  			return ret;
>  	}
>  
> -	old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
> -	new_dbuf_state = intel_atomic_get_new_dbuf_state(state);
> +	new_dbuf_state->enabled_slices = intel_dbuf_enabled_slices(new_dbuf_state);
> +
> +	if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
> +		ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
> +		if (ret)
> +			return ret;
>  
> -	if (new_dbuf_state &&
> -	    new_dbuf_state->enabled_slices != old_dbuf_state->enabled_slices)
>  		drm_dbg_kms(&dev_priv->drm,
>  			    "Enabled dbuf slices 0x%x -> 0x%x (out of %d dbuf slices)\n",
>  			    old_dbuf_state->enabled_slices,
>  			    new_dbuf_state->enabled_slices,
>  			    INTEL_INFO(dev_priv)->num_supported_dbuf_slices);
> +	}
> +
> +	for_each_new_intel_crtc_in_state(state, crtc, new_crtc_state, i) {
> +		enum pipe pipe = crtc->pipe;
> +
> +		new_dbuf_state->weight[crtc->pipe] = intel_crtc_ddb_weight(new_crtc_state);
> +
> +		if (old_dbuf_state->weight[pipe] == new_dbuf_state->weight[pipe])
> +			continue;
> +
> +		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	for_each_intel_crtc(&dev_priv->drm, crtc) {
> +		ret = skl_crtc_allocate_ddb(state, crtc);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
> +					    new_crtc_state, i) {
> +		ret = skl_allocate_plane_ddb(state, crtc);
> +		if (ret)
> +			return ret;
> +
> +		ret = skl_ddb_add_affected_planes(old_crtc_state,
> +						  new_crtc_state);
> +		if (ret)
> +			return ret;
> +	}
>  
>  	return 0;
>  }
> @@ -5636,83 +5643,6 @@ skl_print_wm_changes(struct intel_atomic_state *state)
>  	}
>  }
>  
> -static int intel_add_affected_pipes(struct intel_atomic_state *state,
> -				    u8 pipe_mask)
> -{
> -	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	struct intel_crtc *crtc;
> -
> -	for_each_intel_crtc(&dev_priv->drm, crtc) {
> -		struct intel_crtc_state *crtc_state;
> -
> -		if ((pipe_mask & BIT(crtc->pipe)) == 0)
> -			continue;
> -
> -		crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
> -		if (IS_ERR(crtc_state))
> -			return PTR_ERR(crtc_state);
> -	}
> -
> -	return 0;
> -}
> -
> -static int
> -skl_ddb_add_affected_pipes(struct intel_atomic_state *state)
> -{
> -	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
> -	struct intel_crtc_state *crtc_state;
> -	struct intel_crtc *crtc;
> -	int i, ret;
> -
> -	if (dev_priv->wm.distrust_bios_wm) {
> -		/*
> -		 * skl_ddb_get_pipe_allocation_limits() currently requires
> -		 * all active pipes to be included in the state so that
> -		 * it can redistribute the dbuf among them, and it really
> -		 * wants to recompute things when distrust_bios_wm is set
> -		 * so we add all the pipes to the state.
> -		 */
> -		ret = intel_add_affected_pipes(state, ~0);
> -		if (ret)
> -			return ret;
> -	}
> -
> -	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
> -		struct intel_dbuf_state *new_dbuf_state;
> -		const struct intel_dbuf_state *old_dbuf_state;
> -
> -		new_dbuf_state = intel_atomic_get_dbuf_state(state);
> -		if (IS_ERR(new_dbuf_state))
> -			return ret;
> -
> -		old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
> -
> -		new_dbuf_state->active_pipes =
> -			intel_calc_active_pipes(state, old_dbuf_state->active_pipes);
> -
> -		if (old_dbuf_state->active_pipes == new_dbuf_state->active_pipes)
> -			break;
> -
> -		ret = intel_atomic_lock_global_state(&new_dbuf_state->base);
> -		if (ret)
> -			return ret;
> -
> -		/*
> -		 * skl_ddb_get_pipe_allocation_limits() currently requires
> -		 * all active pipes to be included in the state so that
> -		 * it can redistribute the dbuf among them.
> -		 */
> -		ret = intel_add_affected_pipes(state,
> -					       new_dbuf_state->active_pipes);
> -		if (ret)
> -			return ret;
> -
> -		break;
> -	}
> -
> -	return 0;
> -}
> -
>  /*
>   * To make sure the cursor watermark registers are always consistent
>   * with our computed state the following scenario needs special
> @@ -5781,15 +5711,6 @@ skl_compute_wm(struct intel_atomic_state *state)
>  	struct intel_crtc_state *old_crtc_state;
>  	int ret, i;
>  
> -	ret = skl_ddb_add_affected_pipes(state);
> -	if (ret)
> -		return ret;
> -
> -	/*
> -	 * Calculate WM's for all pipes that are part of this transaction.
> -	 * Note that skl_ddb_add_affected_pipes may have added more CRTC's that
> -	 * weren't otherwise being modified if pipe allocations had to change.
> -	 */
>  	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
>  					    new_crtc_state, i) {
>  		ret = skl_build_pipe_wm(new_crtc_state);
> @@ -5944,11 +5865,6 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
>  
>  		skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
>  	}
> -
> -	if (dev_priv->active_pipes) {
> -		/* Fully recompute DDB on first atomic commit */
> -		dev_priv->wm.distrust_bios_wm = true;
> -	}
>  }
>  
>  static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc)
> diff --git a/drivers/gpu/drm/i915/intel_pm.h b/drivers/gpu/drm/i915/intel_pm.h
> index d9f84d93280d..3a82b8046f10 100644
> --- a/drivers/gpu/drm/i915/intel_pm.h
> +++ b/drivers/gpu/drm/i915/intel_pm.h
> @@ -66,6 +66,8 @@ struct intel_dbuf_state {
>  	struct intel_global_state base;
>  
>  	struct skl_ddb_entry ddb[I915_MAX_PIPES];
> +	unsigned int weight[I915_MAX_PIPES];
> +	u8 slices[I915_MAX_PIPES];
>  
>  	u8 enabled_slices;
>  	u8 active_pipes;
> -- 
> 2.24.1
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [Intel-gfx] [PATCH v2 19/20] drm/i915: Do a bit more initial readout for dbuf
  2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 19/20] drm/i915: Do a bit more initial readout for dbuf Ville Syrjala
@ 2021-01-21 12:57   ` Lisovskiy, Stanislav
  0 siblings, 0 replies; 55+ messages in thread
From: Lisovskiy, Stanislav @ 2021-01-21 12:57 UTC (permalink / raw)
  To: Ville Syrjala; +Cc: intel-gfx

On Tue, Feb 25, 2020 at 07:11:24PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> Read out the dbuf-related hardware state during driver init/resume and
> stick it into our dbuf state.
> 
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>

Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_display.c |  4 --
>  drivers/gpu/drm/i915/intel_pm.c              | 48 +++++++++++++++++++-
>  2 files changed, 46 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index e3df43f3932d..21ad1adcc1eb 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -17475,14 +17475,10 @@ void intel_modeset_init_hw(struct drm_i915_private *i915)
>  {
>  	struct intel_cdclk_state *cdclk_state =
>  		to_intel_cdclk_state(i915->cdclk.obj.state);
> -	struct intel_dbuf_state *dbuf_state =
> -		to_intel_dbuf_state(i915->dbuf.obj.state);
>  
>  	intel_update_cdclk(i915);
>  	intel_dump_cdclk_config(&i915->cdclk.hw, "Current CDCLK");
>  	cdclk_state->logical = cdclk_state->actual = i915->cdclk.hw;
> -
> -	dbuf_state->enabled_slices = i915->dbuf.enabled_slices;
>  }
>  
>  static int sanitize_watermarks_add_affected(struct drm_atomic_state *state)
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index c11508fb3fac..7edac506d343 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -5363,6 +5363,18 @@ static inline bool skl_ddb_entries_overlap(const struct skl_ddb_entry *a,
>  	return a->start < b->end && b->start < a->end;
>  }
>  
> +static void skl_ddb_entry_union(struct skl_ddb_entry *a,
> +				const struct skl_ddb_entry *b)
> +{
> +	if (a->end && b->end) {
> +		a->start = min(a->start, b->start);
> +		a->end = max(a->end, b->end);
> +	} else if (b->end) {
> +		a->start = b->start;
> +		a->end = b->end;
> +	}
> +}
> +
>  bool skl_ddb_allocation_overlaps(const struct skl_ddb_entry *ddb,
>  				 const struct skl_ddb_entry *entries,
>  				 int num_entries, int ignore_idx)
> @@ -5857,14 +5869,46 @@ void skl_pipe_wm_get_hw_state(struct intel_crtc *crtc,
>  
>  void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
>  {
> +	struct intel_dbuf_state *dbuf_state =
> +		to_intel_dbuf_state(dev_priv->dbuf.obj.state);
>  	struct intel_crtc *crtc;
> -	struct intel_crtc_state *crtc_state;
>  
>  	for_each_intel_crtc(&dev_priv->drm, crtc) {
> -		crtc_state = to_intel_crtc_state(crtc->base.state);
> +		struct intel_crtc_state *crtc_state =
> +			to_intel_crtc_state(crtc->base.state);
> +		enum pipe pipe = crtc->pipe;
> +		enum plane_id plane_id;
>  
>  		skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
> +
> +		memset(&dbuf_state->ddb[pipe], 0, sizeof(dbuf_state->ddb[pipe]));
> +
> +		for_each_plane_id_on_crtc(crtc, plane_id) {
> +			struct skl_ddb_entry *ddb_y =
> +				&crtc_state->wm.skl.plane_ddb_y[plane_id];
> +			struct skl_ddb_entry *ddb_uv =
> +				&crtc_state->wm.skl.plane_ddb_uv[plane_id];
> +
> +			skl_ddb_get_hw_plane_state(dev_priv, crtc->pipe,
> +						   plane_id, ddb_y, ddb_uv);
> +
> +			skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_y);
> +			skl_ddb_entry_union(&dbuf_state->ddb[pipe], ddb_uv);
> +		}
> +
> +		dbuf_state->slices[pipe] =
> +			skl_compute_dbuf_slices(crtc, dbuf_state->active_pipes);
> +
> +		dbuf_state->weight[pipe] = intel_crtc_ddb_weight(crtc_state);
> +
> +		drm_dbg_kms(&dev_priv->drm,
> +			    "[CRTC:%d:%s] dbuf slices 0x%x, ddb (%d - %d), active pipes 0x%x\n",
> +			    crtc->base.base.id, crtc->base.name,
> +			    dbuf_state->slices[pipe], dbuf_state->ddb[pipe].start,
> +			    dbuf_state->ddb[pipe].end, dbuf_state->active_pipes);
>  	}
> +
> +	dbuf_state->enabled_slices = dev_priv->dbuf.enabled_slices;
>  }
>  
>  static void ilk_pipe_wm_get_hw_state(struct intel_crtc *crtc)
> -- 
> 2.24.1
> 
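skl_ddb_entry_union() above treats an entry with end == 0 as empty, which is why the readout loop can start from a zeroed dbuf_state->ddb[pipe] and simply fold in every plane's Y and UV entries. A minimal sketch of the resulting behaviour with made-up numbers (none of this is in the patch; the wrapper function and the values are purely illustrative):

/*
 * Hypothetical numbers, only to illustrate skl_ddb_entry_union()
 * from the hunk above; not code from the patch itself.
 */
static void sketch_ddb_union_example(void)
{
	struct skl_ddb_entry pipe_ddb = {};                       /* end == 0 -> empty */
	const struct skl_ddb_entry plane_y  = { .start = 128, .end = 256 };
	const struct skl_ddb_entry plane_uv = {};                 /* unused plane, empty */
	const struct skl_ddb_entry cursor   = { .start = 256, .end = 320 };

	skl_ddb_entry_union(&pipe_ddb, &plane_y);   /* pipe_ddb = (128 - 256) */
	skl_ddb_entry_union(&pipe_ddb, &plane_uv);  /* b is empty -> no change */
	skl_ddb_entry_union(&pipe_ddb, &cursor);    /* pipe_ddb = (128 - 320) */
}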

end of thread, other threads:[~2021-01-21 12:56 UTC | newest]

Thread overview: 55+ messages
2020-02-25 17:11 [Intel-gfx] [PATCH v2 00/20] drm/i915: Proper dbuf global state Ville Syrjala
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 01/20] drm/i915: Handle some leftover s/intel_crtc/crtc/ Ville Syrjala
2020-02-26  9:29   ` Jani Nikula
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 02/20] drm/i915: Remove garbage WARNs Ville Syrjala
2020-02-26  9:30   ` Jani Nikula
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 03/20] drm/i915: Add missing commas to dbuf tables Ville Syrjala
2020-02-26  9:30   ` Jani Nikula
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 04/20] drm/i915: Use a sentinel to terminate the dbuf slice arrays Ville Syrjala
2020-02-26  9:32   ` Jani Nikula
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 05/20] drm/i915: Make skl_compute_dbuf_slices() behave consistently for all platforms Ville Syrjala
2020-02-25 17:30   ` Lisovskiy, Stanislav
2020-03-02 14:50     ` Ville Syrjälä
2020-03-02 15:50       ` Lisovskiy, Stanislav
2020-04-01  7:52       ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 06/20] drm/i915: Polish some dbuf debugs Ville Syrjala
2020-03-04 16:29   ` Lisovskiy, Stanislav
2020-03-04 18:26     ` Ville Syrjälä
2020-03-05  9:53       ` Lisovskiy, Stanislav
2020-03-05 13:46         ` Ville Syrjälä
2020-03-05 14:56           ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 07/20] drm/i915: Unify the low level dbuf code Ville Syrjala
2020-03-04 17:14   ` Lisovskiy, Stanislav
2020-03-04 17:23   ` Lisovskiy, Stanislav
2020-03-04 18:30     ` Ville Syrjälä
2020-03-05  8:28       ` Lisovskiy, Stanislav
2020-03-05 13:37         ` Ville Syrjälä
2020-03-05 14:01           ` Lisovskiy, Stanislav
2020-03-05  8:46   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 08/20] drm/i915: Introduce proper dbuf state Ville Syrjala
2020-02-25 17:43   ` Lisovskiy, Stanislav
2020-04-01  8:13   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 09/20] drm/i915: Nuke skl_ddb_get_hw_state() Ville Syrjala
2020-02-26 11:40   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 10/20] drm/i915: Move the dbuf pre/post plane update Ville Syrjala
2020-02-26 11:38   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 11/20] drm/i915: Clean up dbuf debugs during .atomic_check() Ville Syrjala
2020-02-26 11:32   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 12/20] drm/i915: Extract intel_crtc_ddb_weight() Ville Syrjala
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 13/20] drm/i915: Pass the crtc to skl_compute_dbuf_slices() Ville Syrjala
2020-02-26  8:41   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 14/20] drm/i915: Introduce intel_dbuf_slice_size() Ville Syrjala
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 15/20] drm/i915: Introduce skl_ddb_entry_for_slices() Ville Syrjala
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 16/20] drm/i915: Move pipe ddb entries into the dbuf state Ville Syrjala
2020-02-27 16:50   ` Ville Syrjala
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 17/20] drm/i915: Extract intel_crtc_dbuf_weights() Ville Syrjala
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 18/20] drm/i915: Encapsulate dbuf state handling harder Ville Syrjala
2021-01-21 12:55   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 19/20] drm/i915: Do a bit more initial readout for dbuf Ville Syrjala
2021-01-21 12:57   ` Lisovskiy, Stanislav
2020-02-25 17:11 ` [Intel-gfx] [PATCH v2 20/20] drm/i915: Check slice mask for holes Ville Syrjala
2020-02-25 17:47   ` Lisovskiy, Stanislav
2020-02-26 18:04 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for drm/i915: Proper dbuf global state (rev2) Patchwork
2020-02-27 20:21 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Proper dbuf global state (rev3) Patchwork
2020-02-27 20:43 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-02-29  2:40 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
