From: "Lisovskiy, Stanislav" <stanislav.lisovskiy@intel.com>
To: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 1/7] drm/i915: Fix TGL+ plane SAGV watermark programming
Date: Mon, 1 Mar 2021 10:38:10 +0200 [thread overview]
Message-ID: <20210301083810.GA21872@intel.com>
In-Reply-To: <20210226153204.1270-2-ville.syrjala@linux.intel.com>
On Fri, Feb 26, 2021 at 05:31:58PM +0200, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
>
> When we switch between SAGV on vs. off we need to reprogram all
> plane watermarks accordingly. Currently skl_wm_add_affected_planes()
> totally ignores the SAGV watermark and just assumes we will use
> the normal WM0.
>
> Fix this by utilizing skl_plane_wm_level() which picks the
> correct watermark based on use_sagv_wm. Thus we will force
> an update on all the planes whose watermark registers need
> to be reprogrammed.
Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> ---
> drivers/gpu/drm/i915/intel_pm.c | 60 ++++++++++++++++++++-------------
> 1 file changed, 37 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 8cc67f9c4e58..2d0e3e7f11b8 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4748,11 +4748,10 @@ icl_get_total_relative_data_rate(struct intel_atomic_state *state,
> }
>
> static const struct skl_wm_level *
> -skl_plane_wm_level(const struct intel_crtc_state *crtc_state,
> +skl_plane_wm_level(const struct skl_pipe_wm *pipe_wm,
> enum plane_id plane_id,
> int level)
> {
> - const struct skl_pipe_wm *pipe_wm = &crtc_state->wm.skl.optimal;
> const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
>
> if (level == 0 && pipe_wm->use_sagv_wm)
> @@ -5572,21 +5571,17 @@ void skl_write_plane_wm(struct intel_plane *plane,
> int level, max_level = ilk_wm_max_level(dev_priv);
> enum plane_id plane_id = plane->id;
> enum pipe pipe = plane->pipe;
> - const struct skl_plane_wm *wm =
> - &crtc_state->wm.skl.optimal.planes[plane_id];
> + const struct skl_pipe_wm *pipe_wm = &crtc_state->wm.skl.optimal;
> + const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
> const struct skl_ddb_entry *ddb_y =
> &crtc_state->wm.skl.plane_ddb_y[plane_id];
> const struct skl_ddb_entry *ddb_uv =
> &crtc_state->wm.skl.plane_ddb_uv[plane_id];
>
> - for (level = 0; level <= max_level; level++) {
> - const struct skl_wm_level *wm_level;
> -
> - wm_level = skl_plane_wm_level(crtc_state, plane_id, level);
> -
> + for (level = 0; level <= max_level; level++)
> skl_write_wm_level(dev_priv, PLANE_WM(pipe, plane_id, level),
> - wm_level);
> - }
> + skl_plane_wm_level(pipe_wm, plane_id, level));
> +
> skl_write_wm_level(dev_priv, PLANE_WM_TRANS(pipe, plane_id),
> &wm->trans_wm);
>
> @@ -5612,19 +5607,15 @@ void skl_write_cursor_wm(struct intel_plane *plane,
> int level, max_level = ilk_wm_max_level(dev_priv);
> enum plane_id plane_id = plane->id;
> enum pipe pipe = plane->pipe;
> - const struct skl_plane_wm *wm =
> - &crtc_state->wm.skl.optimal.planes[plane_id];
> + const struct skl_pipe_wm *pipe_wm = &crtc_state->wm.skl.optimal;
> + const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
> const struct skl_ddb_entry *ddb =
> &crtc_state->wm.skl.plane_ddb_y[plane_id];
>
> - for (level = 0; level <= max_level; level++) {
> - const struct skl_wm_level *wm_level;
> -
> - wm_level = skl_plane_wm_level(crtc_state, plane_id, level);
> -
> + for (level = 0; level <= max_level; level++)
> skl_write_wm_level(dev_priv, CUR_WM(pipe, level),
> - wm_level);
> - }
> + skl_plane_wm_level(pipe_wm, plane_id, level));
> +
> skl_write_wm_level(dev_priv, CUR_WM_TRANS(pipe), &wm->trans_wm);
>
> skl_ddb_entry_write(dev_priv, CUR_BUF_CFG(pipe), ddb);
> @@ -5964,6 +5955,29 @@ skl_print_wm_changes(struct intel_atomic_state *state)
> }
> }
>
> +static bool skl_plane_selected_wm_equals(struct intel_plane *plane,
> + const struct skl_pipe_wm *old_pipe_wm,
> + const struct skl_pipe_wm *new_pipe_wm)
> +{
> + const struct skl_plane_wm *old_wm = &old_pipe_wm->planes[plane->id];
> + const struct skl_plane_wm *new_wm = &new_pipe_wm->planes[plane->id];
> + struct drm_i915_private *i915 = to_i915(plane->base.dev);
> + int level, max_level = ilk_wm_max_level(i915);
> +
> + for (level = 0; level <= max_level; level++) {
> + /*
> + * We don't check uv_wm as the hardware doesn't actually
> + * use it. It only gets used for calculating the required
> + * ddb allocation.
> + */
> + if (!skl_wm_level_equals(skl_plane_wm_level(old_pipe_wm, plane->id, level),
> + skl_plane_wm_level(new_pipe_wm, plane->id, level)))
> + return false;
> + }
> +
> + return skl_wm_level_equals(&old_wm->trans_wm, &new_wm->trans_wm);
> +}
> +
> /*
> * To make sure the cursor watermark registers are always consistent
> * with our computed state the following scenario needs special
> @@ -6009,9 +6023,9 @@ static int skl_wm_add_affected_planes(struct intel_atomic_state *state,
> * with the software state.
> */
> if (!drm_atomic_crtc_needs_modeset(&new_crtc_state->uapi) &&
> - skl_plane_wm_equals(dev_priv,
> - &old_crtc_state->wm.skl.optimal.planes[plane_id],
> - &new_crtc_state->wm.skl.optimal.planes[plane_id]))
> + skl_plane_selected_wm_equals(plane,
> + &old_crtc_state->wm.skl.optimal,
> + &new_crtc_state->wm.skl.optimal))
> continue;
>
> plane_state = intel_atomic_get_plane_state(state, plane);
> --
> 2.26.2
>
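For reference, the selection logic the patch centralizes in skl_plane_wm_level()
can be exercised standalone. This is a minimal sketch, not the kernel code: the
struct definitions below are trimmed stand-ins for the real skl_wm_level /
skl_plane_wm / skl_pipe_wm types in intel_pm.c, keeping only the fields the
helper touches.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Trimmed stand-ins for the i915 watermark structs (hypothetical layout). */
enum plane_id { PLANE_PRIMARY, PLANE_CURSOR, I915_MAX_PLANES };
#define MAX_WM_LEVELS 8

struct skl_wm_level {
	unsigned int blocks;
	bool enable;
};

struct skl_plane_wm {
	struct skl_wm_level wm[MAX_WM_LEVELS]; /* normal WM0..WM7 */
	struct skl_wm_level sagv_wm0;          /* alternate WM0 for SAGV */
};

struct skl_pipe_wm {
	struct skl_plane_wm planes[I915_MAX_PLANES];
	bool use_sagv_wm;
};

/*
 * Mirrors the patched helper: for level 0, substitute the SAGV watermark
 * when the pipe is using SAGV; otherwise return the normal level.
 */
static const struct skl_wm_level *
skl_plane_wm_level(const struct skl_pipe_wm *pipe_wm,
		   enum plane_id plane_id, int level)
{
	const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];

	if (level == 0 && pipe_wm->use_sagv_wm)
		return &wm->sagv_wm0;

	return &wm->wm[level];
}
```

Because the helper now takes the pipe_wm directly instead of the crtc_state,
the same selection can be applied to both the old and new software state,
which is what lets skl_plane_selected_wm_equals() compare the watermarks the
hardware will actually be programmed with.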
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Thread overview: 17+ messages
2021-02-26 15:31 [Intel-gfx] [PATCH 0/7] drm/i915: Fix up TGL+ SAGV watermarks Ville Syrjala
2021-02-26 15:31 ` [Intel-gfx] [PATCH 1/7] drm/i915: Fix TGL+ plane SAGV watermark programming Ville Syrjala
2021-03-01 8:38 ` Lisovskiy, Stanislav [this message]
2021-02-26 15:31 ` [Intel-gfx] [PATCH 2/7] drm/i915: Zero out SAGV wm when we don't have enough DDB for it Ville Syrjala
2021-03-01 8:42 ` Lisovskiy, Stanislav
2021-02-26 15:32 ` [Intel-gfx] [PATCH 3/7] drm/i915: Print wm changes if sagv_wm0 changes Ville Syrjala
2021-03-01 9:14 ` Lisovskiy, Stanislav
2021-02-26 15:32 ` [Intel-gfx] [PATCH 4/7] drm/i915: Stuff SAGV watermark into a sub-structure Ville Syrjala
2021-03-01 9:17 ` Lisovskiy, Stanislav
2021-02-26 15:32 ` [Intel-gfx] [PATCH 5/7] drm/i915: Introduce SAGV transition watermark Ville Syrjala
2021-03-01 9:21 ` Lisovskiy, Stanislav
2021-02-26 15:32 ` [Intel-gfx] [PATCH 6/7] drm/i915: Check tgl+ SAGV watermarks properly Ville Syrjala
2021-03-01 9:24 ` Lisovskiy, Stanislav
2021-02-26 15:32 ` [Intel-gfx] [PATCH 7/7] drm/i915: Clean up verify_wm_state() Ville Syrjala
2021-03-01 9:27 ` Lisovskiy, Stanislav
2021-02-26 15:44 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/i915: Fix up TGL+ SAGV watermarks Patchwork
2021-02-26 16:14 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork