* [PATCH 00/27] DC Patches May 15, 2020
@ 2020-05-15 18:12 Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 01/27] drm/amd/display: Minimize DSC resource re-assignment Rodrigo Siqueira
                   ` (26 more replies)
  0 siblings, 27 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Bhawanpreet.Lakha, Rodrigo.Siqueira, Harry.Wentland,
	Aurabindo.Pillai

This DC patchset brings improvements in multiple areas. In summary, we
highlight:

* FW updates;
* Fixes for issues in DML, modes, ABM, and others;
* Removal of a file that uses the FPU unit;
* Improvements to DMUB, DM, DP, and others.

Anthony Koo (2):
  drm/amd/display: FW release 1.0.10
  drm/amd/display: FW Release 1.0.11

Aric Cyr (1):
  drm/amd/display: 3.2.85

Dmytro Laktyushkin (5):
  drm/amd/display: fix and simplify pipe split logic
  drm/amd/display: update dml interfaces and variables
  drm/amd/display: correct rn NUM_VMID
  drm/amd/display: fix dml log2 function
  drm/amd/display: fix dml immediate flip input

Jaehyun Chung (1):
  drm/amd/display: Handle persistence in DM

Jake Wang (1):
  drm/amd/display: vbios data table packing

Jinze Xu (1):
  drm/amd/display: Set/Reset avmute when disable/enable stream

Nicholas Kazlauskas (6):
  drm/amd/display: Check bss_data_size before going down legacy DMUB
    load path
  drm/amd/display: Don't pass invalid fw_bss_data pointer into DMUB srv
  drm/amd/display: Defer cursor lock until after VUPDATE
  drm/amd/display: Avoid pipe split when plane is too small
  drm/amd/display: Add DMUB firmware version helpers in DMUB service
  drm/amd/display: Support CW4 for DMUB ringbuffer inbox

Nikola Cornij (1):
  drm/amd/display: Minimize DSC resource re-assignment

Rodrigo Siqueira (2):
  drm/amd/display: Add bit swap helper based on endianness
  drm/amd/display: Remove dml_common_def file

Stylon Wang (1):
  drm/amd/display: Fix incorrectly pruned modes with deep color

Sung Lee (1):
  drm/amd/display: Do not fail if build scaling params fails

Vladimir Stempen (1):
  drm/amd/display: DP training to set properly SCRAMBLING_DISABLE

Wenjing Liu (1):
  drm/amd/display: DP link layer test 4.2.1.1 fix due to specs update

Wyatt Wood (1):
  drm/amd/display: Fix ABM memory alignment issue

Yongqiang Sun (2):
  drm/amd/display: Implement some asic specific abm call backs.
  drm/amd/display: Remove nv12 work around

 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 106 +++++---
 .../display/amdgpu_dm/amdgpu_dm_services.c    |  25 --
 .../drm/amd/display/dc/bios/bios_parser2.c    |  98 ++++++++
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  |  21 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  11 +-
 .../gpu/drm/amd/display/dc/core/dc_link_ddc.c |  13 +-
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |  86 ++++---
 .../drm/amd/display/dc/core/dc_link_hwss.c    |   2 +-
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  28 ++-
 .../drm/amd/display/dc/core/dc_vm_helper.c    |   3 -
 drivers/gpu/drm/amd/display/dc/dc.h           |   5 +-
 .../gpu/drm/amd/display/dc/dc_bios_types.h    |   4 +-
 drivers/gpu/drm/amd/display/dc/dc_link.h      |   1 +
 drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c |  92 -------
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c |   2 +-
 .../display/dc/dce110/dce110_hw_sequencer.c   |  11 +
 .../display/dc/dce110/dce110_hw_sequencer.h   |   1 +
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |  72 +++++-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.h |   5 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c |   2 +
 .../drm/amd/display/dc/dcn20/dcn20_hubbub.h   |   1 +
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    |   6 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c |   2 +
 .../drm/amd/display/dc/dcn20/dcn20_resource.c | 193 ++++++++-------
 .../drm/amd/display/dc/dcn20/dcn20_resource.h |   7 +-
 .../drm/amd/display/dc/dcn21/dcn21_hubbub.c   |   7 +-
 .../gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c | 121 +--------
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.c    |  89 +++++++
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.h    |   6 +
 .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c |   6 +-
 .../drm/amd/display/dc/dcn21/dcn21_resource.c |   4 +-
 drivers/gpu/drm/amd/display/dc/dm_services.h  |  69 ------
 drivers/gpu/drm/amd/display/dc/dml/Makefile   |   2 -
 .../dc/dml/dcn20/display_rq_dlg_calc_20.c     |  33 +--
 .../dc/dml/dcn20/display_rq_dlg_calc_20.h     |   1 -
 .../dc/dml/dcn20/display_rq_dlg_calc_20v2.c   |  33 +--
 .../dc/dml/dcn20/display_rq_dlg_calc_20v2.h   |   1 -
 .../dc/dml/dcn21/display_rq_dlg_calc_21.c     |  36 +--
 .../dc/dml/dcn21/display_rq_dlg_calc_21.h     |   2 +-
 .../amd/display/dc/dml/display_mode_enums.h   |   6 +
 .../drm/amd/display/dc/dml/display_mode_lib.h |   6 +-
 .../amd/display/dc/dml/display_mode_structs.h |  11 +
 .../drm/amd/display/dc/dml/display_mode_vba.c |  60 +++--
 .../drm/amd/display/dc/dml/display_mode_vba.h | 229 ++++++++++--------
 .../display/dc/dml/display_rq_dlg_helpers.h   |   1 -
 .../display/dc/dml/dml1_display_rq_dlg_calc.h |   2 -
 .../drm/amd/display/dc/dml/dml_common_defs.c  |  43 ----
 .../drm/amd/display/dc/dml/dml_common_defs.h  |  37 ---
 .../drm/amd/display/dc/dml/dml_inline_defs.h  |  19 +-
 .../gpu/drm/amd/display/dc/inc/dc_link_ddc.h  |   2 +-
 .../gpu/drm/amd/display/dc/inc/dc_link_dp.h   |   2 +-
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  |   3 -
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |   7 +
 drivers/gpu/drm/amd/display/dc/inc/resource.h |   2 +
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h   |  11 +
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  16 +-
 .../drm/amd/display/dmub/inc/dmub_cmd_dal.h   |  35 +++
 .../drm/amd/display/dmub/inc/dmub_fw_meta.h   |   2 +
 .../gpu/drm/amd/display/dmub/inc/dmub_rb.h    |   6 +-
 .../gpu/drm/amd/display/dmub/inc/dmub_types.h |   9 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn20.c |  28 ++-
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |   8 +-
 .../drm/amd/display/modules/inc/mod_stats.h   |   8 +-
 .../amd/display/modules/power/power_helpers.c |  95 +++++---
 .../gpu/drm/amd/display/modules/vmid/vmid.c   |   7 +-
 65 files changed, 983 insertions(+), 879 deletions(-)
 delete mode 100644 drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.c
 delete mode 100644 drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.h

-- 
2.26.2


* [PATCH 01/27] drm/amd/display: Minimize DSC resource re-assignment
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 02/27] drm/amd/display: fix and simplify pipe split logic Rodrigo Siqueira
                   ` (25 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Nikola Cornij,
	Aurabindo.Pillai, Bhawanpreet.Lakha

From: Nikola Cornij <nikola.cornij@amd.com>

[why]
Assigning a different DSC resource than the one previously used is
currently not handled. This causes a black screen on mode change when
more than one monitor is connected on some ASICs.

[how]
- Acquire the previously used DSC if it is still available (sketched below)
- Make sure re-programming is triggered if a new DSC is used
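
For illustration only, a stand-alone toy of the acquisition priority this
introduces (the real code works on struct resource_context and struct
resource_pool, and also keeps a fixed 1-to-1 mapping when every pipe has
its own DSC; this sketch only models the reuse-then-first-free order):

#include <stdbool.h>
#include <stdio.h>

#define NUM_DSC 4

/* Prefer the DSC instance the pipe held in the previous state when it is
 * still free; otherwise fall back to the first free one. Reusing the old
 * instance avoids needless re-programming; a new instance means the pipe
 * must be re-programmed.
 */
static int acquire_dsc(bool acquired[NUM_DSC], int old_inst)
{
	int i;

	if (old_inst >= 0 && !acquired[old_inst]) {
		acquired[old_inst] = true;
		return old_inst;	/* same DSC as before: no re-programming */
	}

	for (i = 0; i < NUM_DSC; i++) {
		if (!acquired[i]) {
			acquired[i] = true;
			return i;	/* new DSC: re-programming required */
		}
	}

	return -1;			/* no DSC available */
}

int main(void)
{
	bool acquired[NUM_DSC] = { false };

	printf("pipe that held DSC 2 -> DSC %d\n", acquire_dsc(acquired, 2));
	printf("pipe with no previous DSC -> DSC %d\n", acquire_dsc(acquired, -1));
	return 0;
}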

Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Signed-off-by: Nikola Cornij <nikola.cornij@amd.com>
---
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  3 ++
 .../drm/amd/display/dc/dcn20/dcn20_resource.c | 28 +++++++++++++------
 .../drm/amd/display/dc/dcn20/dcn20_resource.h |  2 +-
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index cb5d11f11cad..bbef8c67d1db 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -2666,6 +2666,9 @@ bool pipe_need_reprogram(
 		false == pipe_ctx_old->stream->dpms_off)
 		return true;
 
+	if (pipe_ctx_old->stream_res.dsc != pipe_ctx->stream_res.dsc)
+		return true;
+
 	return false;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 778e2e8fd2c6..4912160f81b3 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1663,22 +1663,32 @@ enum dc_status dcn20_build_mapped_resource(const struct dc *dc, struct dc_state
 }
 
 
-static void acquire_dsc(struct resource_context *res_ctx,
-			const struct resource_pool *pool,
+static void acquire_dsc(const struct dc *dc,
+			struct resource_context *res_ctx,
 			struct display_stream_compressor **dsc,
 			int pipe_idx)
 {
 	int i;
+	const struct resource_pool *pool = dc->res_pool;
+	struct display_stream_compressor *dsc_old = dc->current_state->res_ctx.pipe_ctx[pipe_idx].stream_res.dsc;
 
-	ASSERT(*dsc == NULL);
+	ASSERT(*dsc == NULL); /* If this ASSERT fails, dsc was not released properly */
 	*dsc = NULL;
 
+	/* Always do 1-to-1 mapping when number of DSCs is same as number of pipes */
 	if (pool->res_cap->num_dsc == pool->res_cap->num_opp) {
 		*dsc = pool->dscs[pipe_idx];
 		res_ctx->is_dsc_acquired[pipe_idx] = true;
 		return;
 	}
 
+	/* Return old DSC to avoid the need for re-programming */
+	if (dsc_old && !res_ctx->is_dsc_acquired[dsc_old->inst]) {
+		*dsc = dsc_old;
+		res_ctx->is_dsc_acquired[dsc_old->inst] = true;
+		return ;
+	}
+
 	/* Find first free DSC */
 	for (i = 0; i < pool->res_cap->num_dsc; i++)
 		if (!res_ctx->is_dsc_acquired[i]) {
@@ -1710,7 +1720,6 @@ enum dc_status dcn20_add_dsc_to_stream_resource(struct dc *dc,
 {
 	enum dc_status result = DC_OK;
 	int i;
-	const struct resource_pool *pool = dc->res_pool;
 
 	/* Get a DSC if required and available */
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
@@ -1722,7 +1731,7 @@ enum dc_status dcn20_add_dsc_to_stream_resource(struct dc *dc,
 		if (pipe_ctx->stream_res.dsc)
 			continue;
 
-		acquire_dsc(&dc_ctx->res_ctx, pool, &pipe_ctx->stream_res.dsc, i);
+		acquire_dsc(dc, &dc_ctx->res_ctx, &pipe_ctx->stream_res.dsc, i);
 
 		/* The number of DSCs can be less than the number of pipes */
 		if (!pipe_ctx->stream_res.dsc) {
@@ -1850,12 +1859,13 @@ static void swizzle_to_dml_params(
 }
 
 bool dcn20_split_stream_for_odm(
+		const struct dc *dc,
 		struct resource_context *res_ctx,
-		const struct resource_pool *pool,
 		struct pipe_ctx *prev_odm_pipe,
 		struct pipe_ctx *next_odm_pipe)
 {
 	int pipe_idx = next_odm_pipe->pipe_idx;
+	const struct resource_pool *pool = dc->res_pool;
 
 	*next_odm_pipe = *prev_odm_pipe;
 
@@ -1913,7 +1923,7 @@ bool dcn20_split_stream_for_odm(
 	}
 	next_odm_pipe->stream_res.opp = pool->opps[next_odm_pipe->pipe_idx];
 	if (next_odm_pipe->stream->timing.flags.DSC == 1) {
-		acquire_dsc(res_ctx, pool, &next_odm_pipe->stream_res.dsc, next_odm_pipe->pipe_idx);
+		acquire_dsc(dc, res_ctx, &next_odm_pipe->stream_res.dsc, next_odm_pipe->pipe_idx);
 		ASSERT(next_odm_pipe->stream_res.dsc);
 		if (next_odm_pipe->stream_res.dsc == NULL)
 			return false;
@@ -2792,7 +2802,7 @@ bool dcn20_fast_validate_bw(
 			hsplit_pipe = dcn20_find_secondary_pipe(dc, &context->res_ctx, dc->res_pool, pipe);
 			ASSERT(hsplit_pipe);
 			if (!dcn20_split_stream_for_odm(
-					&context->res_ctx, dc->res_pool,
+					dc, &context->res_ctx,
 					pipe, hsplit_pipe))
 				goto validate_fail;
 			pipe_split_from[hsplit_pipe->pipe_idx] = pipe_idx;
@@ -2821,7 +2831,7 @@ bool dcn20_fast_validate_bw(
 				}
 				if (context->bw_ctx.dml.vba.ODMCombineEnabled[pipe_idx]) {
 					if (!dcn20_split_stream_for_odm(
-							&context->res_ctx, dc->res_pool,
+							dc, &context->res_ctx,
 							pipe, hsplit_pipe))
 						goto validate_fail;
 					dcn20_build_mapped_resource(dc, context, pipe->stream);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
index d5448c9b0e15..ed5d31253314 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
@@ -136,8 +136,8 @@ void dcn20_split_stream_for_mpc(
 		struct pipe_ctx *primary_pipe,
 		struct pipe_ctx *secondary_pipe);
 bool dcn20_split_stream_for_odm(
+		const struct dc *dc,
 		struct resource_context *res_ctx,
-		const struct resource_pool *pool,
 		struct pipe_ctx *prev_odm_pipe,
 		struct pipe_ctx *next_odm_pipe);
 struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
-- 
2.26.2


* [PATCH 02/27] drm/amd/display: fix and simplify pipe split logic
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 01/27] drm/amd/display: Minimize DSC resource re-assignment Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 03/27] drm/amd/display: Handle persistence in DM Rodrigo Siqueira
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Dmytro Laktyushkin,
	Eric Bernstein, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

The current ODM/MPC combine logic for detecting which pipes logically
need to be split is flawed, leading to incorrect pipe merge/split
operations.

This change cleans up the logic and fixes the logical errors.
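
The rework keys its merge/split decisions off how many pipes already
blend a given plane. A stand-alone sketch of that count, mirroring the
top_pipe/bottom_pipe walk of the get_num_mpc_splits() helper added below
(toy types, not the actual kernel structs):

#include <stdio.h>

/* Pipes blending the same plane are chained via top_pipe/bottom_pipe. */
struct pipe_ctx {
	struct pipe_ctx *top_pipe;
	struct pipe_ctx *bottom_pipe;
	const void *plane_state;
};

/* Count how many other pipes already blend this pipe's plane. */
static int get_num_mpc_splits(const struct pipe_ctx *pipe)
{
	int count = 0;
	const struct pipe_ctx *other;

	for (other = pipe->bottom_pipe;
	     other && other->plane_state == pipe->plane_state;
	     other = other->bottom_pipe)
		count++;
	for (other = pipe->top_pipe;
	     other && other->plane_state == pipe->plane_state;
	     other = other->top_pipe)
		count++;
	return count;
}

int main(void)
{
	int plane;
	struct pipe_ctx p0 = { 0 }, p1 = { 0 };

	p0.plane_state = p1.plane_state = &plane;
	p0.bottom_pipe = &p1;
	p1.top_pipe = &p0;

	/* Two pipes on one plane: each sees one MPC split (2-way combine). */
	printf("splits seen from p0: %d\n", get_num_mpc_splits(&p0));
	return 0;
}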

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Eric Bernstein <Eric.Bernstein@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  25 ++-
 .../drm/amd/display/dc/dcn20/dcn20_resource.c | 153 ++++++++----------
 .../drm/amd/display/dc/dcn20/dcn20_resource.h |   5 +-
 drivers/gpu/drm/amd/display/dc/inc/resource.h |   2 +
 4 files changed, 94 insertions(+), 91 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index bbef8c67d1db..0c5619364e7d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -532,6 +532,24 @@ static inline void get_vp_scan_direction(
 		*flip_horz_scan_dir = !*flip_horz_scan_dir;
 }
 
+int get_num_mpc_splits(struct pipe_ctx *pipe)
+{
+	int mpc_split_count = 0;
+	struct pipe_ctx *other_pipe = pipe->bottom_pipe;
+
+	while (other_pipe && other_pipe->plane_state == pipe->plane_state) {
+		mpc_split_count++;
+		other_pipe = other_pipe->bottom_pipe;
+	}
+	other_pipe = pipe->top_pipe;
+	while (other_pipe && other_pipe->plane_state == pipe->plane_state) {
+		mpc_split_count++;
+		other_pipe = other_pipe->top_pipe;
+	}
+
+	return mpc_split_count;
+}
+
 int get_num_odm_splits(struct pipe_ctx *pipe)
 {
 	int odm_split_count = 0;
@@ -556,16 +574,11 @@ static void calculate_split_count_and_index(struct pipe_ctx *pipe_ctx, int *spli
 		/*Check for mpc split*/
 		struct pipe_ctx *split_pipe = pipe_ctx->top_pipe;
 
+		*split_count = get_num_mpc_splits(pipe_ctx);
 		while (split_pipe && split_pipe->plane_state == pipe_ctx->plane_state) {
 			(*split_idx)++;
-			(*split_count)++;
 			split_pipe = split_pipe->top_pipe;
 		}
-		split_pipe = pipe_ctx->bottom_pipe;
-		while (split_pipe && split_pipe->plane_state == pipe_ctx->plane_state) {
-			(*split_count)++;
-			split_pipe = split_pipe->bottom_pipe;
-		}
 	} else {
 		/*Get odm split index*/
 		struct pipe_ctx *split_pipe = pipe_ctx->prev_odm_pipe;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 4912160f81b3..4190ee592e6d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1663,7 +1663,7 @@ enum dc_status dcn20_build_mapped_resource(const struct dc *dc, struct dc_state
 }
 
 
-static void acquire_dsc(const struct dc *dc,
+void dcn20_acquire_dsc(const struct dc *dc,
 			struct resource_context *res_ctx,
 			struct display_stream_compressor **dsc,
 			int pipe_idx)
@@ -1731,7 +1731,7 @@ enum dc_status dcn20_add_dsc_to_stream_resource(struct dc *dc,
 		if (pipe_ctx->stream_res.dsc)
 			continue;
 
-		acquire_dsc(dc, &dc_ctx->res_ctx, &pipe_ctx->stream_res.dsc, i);
+		dcn20_acquire_dsc(dc, &dc_ctx->res_ctx, &pipe_ctx->stream_res.dsc, i);
 
 		/* The number of DSCs can be less than the number of pipes */
 		if (!pipe_ctx->stream_res.dsc) {
@@ -1923,7 +1923,7 @@ bool dcn20_split_stream_for_odm(
 	}
 	next_odm_pipe->stream_res.opp = pool->opps[next_odm_pipe->pipe_idx];
 	if (next_odm_pipe->stream->timing.flags.DSC == 1) {
-		acquire_dsc(dc, res_ctx, &next_odm_pipe->stream_res.dsc, next_odm_pipe->pipe_idx);
+		dcn20_acquire_dsc(dc, res_ctx, &next_odm_pipe->stream_res.dsc, next_odm_pipe->pipe_idx);
 		ASSERT(next_odm_pipe->stream_res.dsc);
 		if (next_odm_pipe->stream_res.dsc == NULL)
 			return false;
@@ -2586,27 +2586,6 @@ static void dcn20_merge_pipes_for_validate(
 	}
 }
 
-int dcn20_find_previous_split_count(struct pipe_ctx *pipe)
-{
-	int previous_split = 1;
-	struct pipe_ctx *current_pipe = pipe;
-
-	while (current_pipe->bottom_pipe) {
-		if (current_pipe->plane_state != current_pipe->bottom_pipe->plane_state)
-			break;
-		previous_split++;
-		current_pipe = current_pipe->bottom_pipe;
-	}
-	current_pipe = pipe;
-	while (current_pipe->top_pipe) {
-		if (current_pipe->plane_state != current_pipe->top_pipe->plane_state)
-			break;
-		previous_split++;
-		current_pipe = current_pipe->top_pipe;
-	}
-	return previous_split;
-}
-
 int dcn20_validate_apply_pipe_split_flags(
 		struct dc *dc,
 		struct dc_state *context,
@@ -2618,6 +2597,8 @@ int dcn20_validate_apply_pipe_split_flags(
 	int plane_count = 0;
 	bool force_split = false;
 	bool avoid_split = dc->debug.pipe_split_policy == MPC_SPLIT_AVOID;
+	struct vba_vars_st *v = &context->bw_ctx.dml.vba;
+	int max_mpc_comb = v->maxMpcComb;
 
 	if (context->stream_count > 1) {
 		if (dc->debug.pipe_split_policy == MPC_SPLIT_AVOID_MULT_DISP)
@@ -2638,15 +2619,13 @@ int dcn20_validate_apply_pipe_split_flags(
 
 	/* Avoid split loop looks for lowest voltage level that allows most unsplit pipes possible */
 	if (avoid_split) {
-		int max_mpc_comb = context->bw_ctx.dml.vba.maxMpcComb;
-
 		for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
 			if (!context->res_ctx.pipe_ctx[i].stream)
 				continue;
 
 			for (vlevel_split = vlevel; vlevel <= context->bw_ctx.dml.soc.num_states; vlevel++)
-				if (context->bw_ctx.dml.vba.NoOfDPP[vlevel][0][pipe_idx] == 1 &&
-						context->bw_ctx.dml.vba.ModeSupport[vlevel][0])
+				if (v->NoOfDPP[vlevel][0][pipe_idx] == 1 &&
+						v->ModeSupport[vlevel][0])
 					break;
 			/* Impossible to not split this pipe */
 			if (vlevel > context->bw_ctx.dml.soc.num_states)
@@ -2655,21 +2634,21 @@ int dcn20_validate_apply_pipe_split_flags(
 				max_mpc_comb = 0;
 			pipe_idx++;
 		}
-		context->bw_ctx.dml.vba.maxMpcComb = max_mpc_comb;
+		v->maxMpcComb = max_mpc_comb;
 	}
 
 	/* Split loop sets which pipe should be split based on dml outputs and dc flags */
 	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
-		int pipe_plane = context->bw_ctx.dml.vba.pipe_plane[pipe_idx];
+		int pipe_plane = v->pipe_plane[pipe_idx];
+		bool split4mpc = context->stream_count == 1 && plane_count == 1
+				&& dc->config.enable_4to1MPC && dc->res_pool->pipe_count >= 4;
 
 		if (!context->res_ctx.pipe_ctx[i].stream)
 			continue;
 
-		if (force_split
-				|| context->bw_ctx.dml.vba.NoOfDPP[vlevel][context->bw_ctx.dml.vba.maxMpcComb][pipe_plane] > 1) {
-			if (context->stream_count == 1 && plane_count == 1
-					&& dc->config.enable_4to1MPC && dc->res_pool->pipe_count >= 4)
+		if (force_split || v->NoOfDPP[vlevel][max_mpc_comb][pipe_plane] > 1) {
+			if (split4mpc)
 				split[i] = 4;
 			else
 				split[i] = 2;
@@ -2685,66 +2664,72 @@ int dcn20_validate_apply_pipe_split_flags(
 			split[i] = 2;
 		if (dc->debug.force_odm_combine & (1 << pipe->stream_res.tg->inst)) {
 			split[i] = 2;
-			context->bw_ctx.dml.vba.ODMCombineEnablePerState[vlevel][pipe_plane] = dm_odm_combine_mode_2to1;
+			v->ODMCombineEnablePerState[vlevel][pipe_plane] = dm_odm_combine_mode_2to1;
 		}
-		context->bw_ctx.dml.vba.ODMCombineEnabled[pipe_plane] =
-			context->bw_ctx.dml.vba.ODMCombineEnablePerState[vlevel][pipe_plane];
-
-		if (pipe->prev_odm_pipe && context->bw_ctx.dml.vba.ODMCombineEnabled[pipe_plane] != dm_odm_combine_mode_disabled) {
-			/*Already split odm pipe tree, don't try to split again*/
-			split[i] = 0;
-			split[pipe->prev_odm_pipe->pipe_idx] = 0;
-		} else if (pipe->top_pipe && pipe->plane_state == pipe->top_pipe->plane_state
-				&& context->bw_ctx.dml.vba.ODMCombineEnabled[pipe_plane] == dm_odm_combine_mode_disabled) {
-			/*If 2 way split but can support 4 way split, then split each pipe again*/
-			if (context->stream_count == 1 && plane_count == 1
-					&& dc->config.enable_4to1MPC && dc->res_pool->pipe_count >= 4) {
-				split[i] = 2;
-			} else {
+		v->ODMCombineEnabled[pipe_plane] =
+			v->ODMCombineEnablePerState[vlevel][pipe_plane];
+
+		if (v->ODMCombineEnabled[pipe_plane] == dm_odm_combine_mode_disabled) {
+			if (get_num_mpc_splits(pipe) == 1) {
+				/*If need split for mpc but 2 way split already*/
+				if (split[i] == 4)
+					split[i] = 2; /* 2 -> 4 MPC */
+				else if (split[i] == 2)
+					split[i] = 0; /* 2 -> 2 MPC */
+				else if (pipe->top_pipe && pipe->top_pipe->plane_state == pipe->plane_state)
+					merge[i] = true; /* 2 -> 1 MPC */
+			} else if (get_num_mpc_splits(pipe) == 3) {
+				/*If need split for mpc but 4 way split already*/
+				if (split[i] == 2 && ((pipe->top_pipe && !pipe->top_pipe->top_pipe)
+						|| !pipe->bottom_pipe)) {
+					merge[i] = true; /* 4 -> 2 MPC */
+				} else if (split[i] == 0 && pipe->top_pipe &&
+						pipe->top_pipe->plane_state == pipe->plane_state)
+					merge[i] = true; /* 4 -> 1 MPC */
 				split[i] = 0;
-				split[pipe->top_pipe->pipe_idx] = 0;
-			}
-		} else if (pipe->prev_odm_pipe || (dcn20_find_previous_split_count(pipe) == 2 && pipe->top_pipe)) {
-			if (split[i] == 0) {
-				/*Exiting mpc/odm combine*/
-				merge[i] = true;
-			} else {
-				/*Transition from mpc combine to odm combine or vice versa*/
-				ASSERT(0); /*should not actually happen yet*/
-				split[i] = 2;
-				merge[i] = true;
+			} else if (get_num_odm_splits(pipe)) {
+				/* ODM -> MPC transition */
+				ASSERT(0); /* NOT expected yet */
 				if (pipe->prev_odm_pipe) {
-					split[pipe->prev_odm_pipe->pipe_idx] = 2;
-					merge[pipe->prev_odm_pipe->pipe_idx] = true;
-				} else {
-					split[pipe->top_pipe->pipe_idx] = 2;
-					merge[pipe->top_pipe->pipe_idx] = true;
+					split[i] = 0;
+					merge[i] = true;
 				}
 			}
-		} else if (dcn20_find_previous_split_count(pipe) == 3) {
-			if (split[i] == 0 && !pipe->top_pipe) {
-				merge[pipe->bottom_pipe->pipe_idx] = true;
-				merge[pipe->bottom_pipe->bottom_pipe->pipe_idx] = true;
-			} else if (split[i] == 2 && !pipe->top_pipe) {
-				merge[pipe->bottom_pipe->bottom_pipe->pipe_idx] = true;
-				split[i] = 0;
-			}
-		} else if (dcn20_find_previous_split_count(pipe) == 4) {
-			if (split[i] == 0 && !pipe->top_pipe) {
-				merge[pipe->bottom_pipe->pipe_idx] = true;
-				merge[pipe->bottom_pipe->bottom_pipe->pipe_idx] = true;
-				merge[pipe->bottom_pipe->bottom_pipe->bottom_pipe->pipe_idx] = true;
-			} else if (split[i] == 2 && !pipe->top_pipe) {
-				merge[pipe->bottom_pipe->bottom_pipe->pipe_idx] = true;
-				merge[pipe->bottom_pipe->bottom_pipe->bottom_pipe->pipe_idx] = true;
+		} else {
+			if (get_num_odm_splits(pipe) == 1) {
+				/*If need split for odm but 2 way split already*/
+				if (split[i] == 4)
+					split[i] = 2; /* 2 -> 4 ODM */
+				else if (split[i] == 2)
+					split[i] = 0; /* 2 -> 2 ODM */
+				else if (pipe->prev_odm_pipe) {
+					ASSERT(0); /* NOT expected yet */
+					merge[i] = true; /* exit ODM */
+				}
+			} else if (get_num_odm_splits(pipe) == 3) {
+				/*If need split for odm but 4 way split already*/
+				if (split[i] == 2 && ((pipe->prev_odm_pipe && !pipe->prev_odm_pipe->prev_odm_pipe)
+						|| !pipe->next_odm_pipe)) {
+					ASSERT(0); /* NOT expected yet */
+					merge[i] = true; /* 4 -> 2 ODM */
+				} else if (split[i] == 0 && pipe->prev_odm_pipe) {
+					ASSERT(0); /* NOT expected yet */
+					merge[i] = true; /* exit ODM */
+				}
 				split[i] = 0;
+			} else if (get_num_mpc_splits(pipe)) {
+				/* MPC -> ODM transition */
+				ASSERT(0); /* NOT expected yet */
+				if (pipe->top_pipe && pipe->top_pipe->plane_state == pipe->plane_state) {
+					split[i] = 0;
+					merge[i] = true;
+				}
 			}
 		}
 
 		/* Adjust dppclk when split is forced, do not bother with dispclk */
-		if (split[i] != 0
-				&& context->bw_ctx.dml.vba.NoOfDPP[vlevel][context->bw_ctx.dml.vba.maxMpcComb][pipe_idx] == 1)
-			context->bw_ctx.dml.vba.RequiredDPPCLK[vlevel][context->bw_ctx.dml.vba.maxMpcComb][pipe_idx] /= 2;
+		if (split[i] != 0 && v->NoOfDPP[vlevel][max_mpc_comb][pipe_idx] == 1)
+			v->RequiredDPPCLK[vlevel][max_mpc_comb][pipe_idx] /= 2;
 		pipe_idx++;
 	}
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
index ed5d31253314..2c1959845c29 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.h
@@ -119,7 +119,6 @@ void dcn20_set_mcif_arb_params(
 		display_e2e_pipe_params_st *pipes,
 		int pipe_cnt);
 bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context, bool fast_validate);
-int dcn20_find_previous_split_count(struct pipe_ctx *pipe);
 int dcn20_validate_apply_pipe_split_flags(
 		struct dc *dc,
 		struct dc_state *context,
@@ -140,6 +139,10 @@ bool dcn20_split_stream_for_odm(
 		struct resource_context *res_ctx,
 		struct pipe_ctx *prev_odm_pipe,
 		struct pipe_ctx *next_odm_pipe);
+void dcn20_acquire_dsc(const struct dc *dc,
+			struct resource_context *res_ctx,
+			struct display_stream_compressor **dsc,
+			int pipe_idx);
 struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
 		struct resource_context *res_ctx,
 		const struct resource_pool *pool,
diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
index 109c589eb97c..a9be495af922 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
@@ -177,6 +177,8 @@ unsigned int resource_pixel_format_to_bpp(enum surface_pixel_format format);
 void get_audio_check(struct audio_info *aud_modes,
 	struct audio_check *aud_chk);
 
+int get_num_mpc_splits(struct pipe_ctx *pipe);
+
 int get_num_odm_splits(struct pipe_ctx *pipe);
 
 #endif /* DRIVERS_GPU_DRM_AMD_DC_DEV_DC_INC_RESOURCE_H_ */
-- 
2.26.2


* [PATCH 03/27] drm/amd/display: Handle persistence in DM
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 01/27] drm/amd/display: Minimize DSC resource re-assignment Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 02/27] drm/amd/display: fix and simplify pipe split logic Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 04/27] drm/amd/display: Do not fail if build scaling params fails Rodrigo Siqueira
                   ` (23 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Jaehyun Chung, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha, Anthony Koo

From: Jaehyun Chung <jaehyun.chung@amd.com>

[Why]
Remove dm_write_persistent_data and dm_read_persistent_data as
persistence should be handled in DM.

[How]
Remove the functions. Move the read/write calls into the DM layer while
maintaining the existing logic.

Signed-off-by: Jaehyun Chung <jaehyun.chung@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../display/amdgpu_dm/amdgpu_dm_services.c    | 25 -------
 drivers/gpu/drm/amd/display/dc/dm_services.h  | 69 -------------------
 .../drm/amd/display/modules/inc/mod_stats.h   |  8 ++-
 3 files changed, 7 insertions(+), 95 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 022da5d45d4d..51f57420fadd 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -47,29 +47,4 @@ void dm_perf_trace_timestamp(const char *func_name, unsigned int line)
 {
 }
 
-bool dm_write_persistent_data(struct dc_context *ctx,
-		const struct dc_sink *sink,
-		const char *module_name,
-		const char *key_name,
-		void *params,
-		unsigned int size,
-		struct persistent_data_flag *flag)
-{
-	/*TODO implement*/
-	return false;
-}
-
-bool dm_read_persistent_data(struct dc_context *ctx,
-				const struct dc_sink *sink,
-				const char *module_name,
-				const char *key_name,
-				void *params,
-				unsigned int size,
-				struct persistent_data_flag *flag)
-{
-	/*TODO implement*/
-	return false;
-}
-
 /**** power component interfaces ****/
-
diff --git a/drivers/gpu/drm/amd/display/dc/dm_services.h b/drivers/gpu/drm/amd/display/dc/dm_services.h
index 968ff1fef486..fdd1943c828d 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_services.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_services.h
@@ -261,75 +261,6 @@ struct persistent_data_flag {
 	bool save_per_edid;
 };
 
-/* Call to write data in registry editor for persistent data storage.
- *
- * \inputs      sink - identify edid/link for registry folder creation
- *              module name - identify folders for registry
- *              key name - identify keys within folders for registry
- *              params - value to write in defined folder/key
- *              size - size of the input params
- *              flag - determine whether to save by link or edid
- *
- * \returns     true - call is successful
- *              false - call failed
- *
- * sink         module         key
- * -----------------------------------------------------------------------------
- * NULL         NULL           NULL     - failure
- * NULL         NULL           -        - create key with param value
- *                                                      under base folder
- * NULL         -              NULL     - create module folder under base folder
- * -            NULL           NULL     - failure
- * NULL         -              -        - create key under module folder
- *                                            with no edid/link identification
- * -            NULL           -        - create key with param value
- *                                                       under base folder
- * -            -              NULL     - create module folder under base folder
- * -            -              -        - create key under module folder
- *                                              with edid/link identification
- */
-bool dm_write_persistent_data(struct dc_context *ctx,
-		const struct dc_sink *sink,
-		const char *module_name,
-		const char *key_name,
-		void *params,
-		unsigned int size,
-		struct persistent_data_flag *flag);
-
-
-/* Call to read data in registry editor for persistent data storage.
- *
- * \inputs      sink - identify edid/link for registry folder creation
- *              module name - identify folders for registry
- *              key name - identify keys within folders for registry
- *              size - size of the output params
- *              flag - determine whether it was save by link or edid
- *
- * \returns     params - value read from defined folder/key
- *              true - call is successful
- *              false - call failed
- *
- * sink         module         key
- * -----------------------------------------------------------------------------
- * NULL         NULL           NULL     - failure
- * NULL         NULL           -        - read key under base folder
- * NULL         -              NULL     - failure
- * -            NULL           NULL     - failure
- * NULL         -              -        - read key under module folder
- *                                             with no edid/link identification
- * -            NULL           -        - read key under base folder
- * -            -              NULL     - failure
- * -            -              -        - read key under module folder
- *                                              with edid/link identification
- */
-bool dm_read_persistent_data(struct dc_context *ctx,
-		const struct dc_sink *sink,
-		const char *module_name,
-		const char *key_name,
-		void *params,
-		unsigned int size,
-		struct persistent_data_flag *flag);
-
 bool dm_query_extended_brightness_caps
 	(struct dc_context *ctx, enum dm_acpi_display_type display,
 			struct dm_acpi_atif_backlight_caps *pCaps);
diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h b/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h
index 3812094b52e8..4220fd8fdd60 100644
--- a/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h
+++ b/drivers/gpu/drm/amd/display/modules/inc/mod_stats.h
@@ -36,7 +36,13 @@ struct mod_stats_caps {
 	bool dummy;
 };
 
-struct mod_stats *mod_stats_create(struct dc *dc);
+struct mod_stats_init_params {
+	unsigned int stats_enable;
+	unsigned int stats_entries;
+};
+
+struct mod_stats *mod_stats_create(struct dc *dc,
+		struct mod_stats_init_params *init_params);
 
 void mod_stats_destroy(struct mod_stats *mod_stats);
 
-- 
2.26.2


* [PATCH 04/27] drm/amd/display: Do not fail if build scaling params fails
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (2 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 03/27] drm/amd/display: Handle persistence in DM Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 05/27] drm/amd/display: DP training to set properly SCRAMBLING_DISABLE Rodrigo Siqueira
                   ` (22 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sung Lee, Dmytro Laktyushkin, Sunpeng.Li, Harry.Wentland,
	Rodrigo.Siqueira, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Sung Lee <sung.lee@amd.com>

[WHY]
Failing validation when building scaling parameters causes corruption to
occur due to pipe splitting with smaller pixel widths than HW supports.
This needs to fail silently for now to hide the corruption until the
corruption itself can be fixed.

[HOW]
Do not fail validation if building scaling params fails.

Signed-off-by: Sung Lee <sung.lee@amd.com>
Reviewed-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 4190ee592e6d..d00de61ac720 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -2824,8 +2824,8 @@ bool dcn20_fast_validate_bw(
 					dcn20_split_stream_for_mpc(
 							&context->res_ctx, dc->res_pool,
 							pipe, hsplit_pipe);
-					if (!resource_build_scaling_params(pipe) || !resource_build_scaling_params(hsplit_pipe))
-						goto validate_fail;
+					resource_build_scaling_params(pipe);
+					resource_build_scaling_params(hsplit_pipe);
 				}
 				pipe_split_from[hsplit_pipe->pipe_idx] = pipe_idx;
 			}
-- 
2.26.2


* [PATCH 05/27] drm/amd/display: DP training to set properly SCRAMBLING_DISABLE
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (3 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 04/27] drm/amd/display: Do not fail if build scaling params fails Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 06/27] drm/amd/display: Check bss_data_size before going down legacy DMUB load path Rodrigo Siqueira
                   ` (21 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Vladimir Stempen, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Wenjing Liu, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Vladimir Stempen <vladimir.stempen@amd.com>

[Why]
The DP training sequence needs to set the SCRAMBLING_DISABLE bit
properly based on the training pattern, per the DP spec.

[How]
Set dpcd_pattern.v1_4.SCRAMBLING_DISABLE to 1 for TPS1, TPS2 and TPS3,
but not for TPS4.

Signed-off-by: Vladimir Stempen <Vladimir.Stempen@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index ebad1787f5cb..6db1f16957ac 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -219,6 +219,30 @@ static enum dpcd_training_patterns
 	return dpcd_tr_pattern;
 }
 
+static uint8_t dc_dp_initialize_scrambling_data_symbols(
+	struct dc_link *link,
+	enum dc_dp_training_pattern pattern)
+{
+	uint8_t disable_scrabled_data_symbols = 0;
+
+	switch (pattern) {
+	case DP_TRAINING_PATTERN_SEQUENCE_1:
+	case DP_TRAINING_PATTERN_SEQUENCE_2:
+	case DP_TRAINING_PATTERN_SEQUENCE_3:
+		disable_scrabled_data_symbols = 1;
+		break;
+	case DP_TRAINING_PATTERN_SEQUENCE_4:
+		disable_scrabled_data_symbols = 0;
+		break;
+	default:
+		ASSERT(0);
+		DC_LOG_HW_LINK_TRAINING("%s: Invalid HW Training pattern: %d\n",
+			__func__, pattern);
+		break;
+	}
+	return disable_scrabled_data_symbols;
+}
+
 static inline bool is_repeater(struct dc_link *link, uint32_t offset)
 {
 	return (!link->is_lttpr_mode_transparent && offset != 0);
@@ -251,6 +275,9 @@ static void dpcd_set_lt_pattern_and_lane_settings(
 	dpcd_pattern.v1_4.TRAINING_PATTERN_SET =
 		dc_dp_training_pattern_to_dpcd_training_pattern(link, pattern);
 
+	dpcd_pattern.v1_4.SCRAMBLING_DISABLE =
+		dc_dp_initialize_scrambling_data_symbols(link, pattern);
+
 	dpcd_lt_buffer[DP_TRAINING_PATTERN_SET - DP_TRAINING_PATTERN_SET]
 		= dpcd_pattern.raw;
 
-- 
2.26.2


* [PATCH 06/27] drm/amd/display: Check bss_data_size before going down legacy DMUB load path
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (4 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 05/27] drm/amd/display: DP training to set properly SCRAMBLING_DISABLE Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 07/27] drm/amd/display: Don't pass invalid fw_bss_data pointer into DMUB srv Rodrigo Siqueira
                   ` (20 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Zhan Liu, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
The new unified firmware binary with only inst const still passes down
fw_bss_data != NULL and params->bss_data_size == 0 from DM.

This leads it into the legacy path, causing the firmware state
allocation to be too small.

[How]
Check bss_data_size as well.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Zhan Liu <Zhan.Liu@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 0e3751d94cb0..3cfbc27f3eab 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -98,12 +98,12 @@ dmub_get_fw_meta_info(const struct dmub_srv_region_params *params)
 	uint32_t blob_size = 0;
 	uint32_t meta_offset = 0;
 
-	if (params->fw_bss_data) {
+	if (params->fw_bss_data && params->bss_data_size) {
 		/* Legacy metadata region. */
 		blob = params->fw_bss_data;
 		blob_size = params->bss_data_size;
 		meta_offset = DMUB_FW_META_OFFSET;
-	} else if (params->fw_inst_const) {
+	} else if (params->fw_inst_const && params->inst_const_size) {
 		/* Combined metadata region. */
 		blob = params->fw_inst_const;
 		blob_size = params->inst_const_size;
-- 
2.26.2


* [PATCH 07/27] drm/amd/display: Don't pass invalid fw_bss_data pointer into DMUB srv
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (5 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 06/27] drm/amd/display: Check bss_data_size before going down legacy DMUB load path Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 08/27] drm/amd/display: Add bit swap helper based on endianness Rodrigo Siqueira
                   ` (19 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Zhan Liu, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
If bss_data_size is 0, then we shouldn't be passing fw_bss_data down
into the DMUB service, since the region isn't really "valid."

[How]
Pass NULL instead if the size is 0.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Zhan Liu <Zhan.Liu@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 60fe64aef11b..d2bb0d9839c9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1211,10 +1211,10 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
 					PSP_HEADER_BYTES - PSP_FOOTER_BYTES;
 	region_params.bss_data_size = le32_to_cpu(hdr->bss_data_bytes);
 	region_params.vbios_size = adev->bios_size;
-	region_params.fw_bss_data =
+	region_params.fw_bss_data = region_params.bss_data_size ?
 		adev->dm.dmub_fw->data +
 		le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
-		le32_to_cpu(hdr->inst_const_bytes);
+		le32_to_cpu(hdr->inst_const_bytes) : NULL;
 	region_params.fw_inst_const =
 		adev->dm.dmub_fw->data +
 		le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
-- 
2.26.2


* [PATCH 08/27] drm/amd/display: Add bit swap helper based on endianness
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (6 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 07/27] drm/amd/display: Don't pass invalid fw_bss_data pointer into DMUB srv Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 09/27] drm/amd/display: Implement some asic specific abm call backs Rodrigo Siqueira
                   ` (18 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Wyatt Wood, Bhawanpreet.Lakha

Christian Koenig pointed out code duplication related to byte swapping
when handling big-endian data. This commit adds a helper for this
endianness check and reduces the need to replicate parts of the code.
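
For reference, the same pattern in a stand-alone program, using glibc's
htobe16()/htole16() as user-space stand-ins for the kernel's
cpu_to_be16()/cpu_to_le16() (the value below is just one of the table
entries from the diff):

#define _DEFAULT_SOURCE
#include <endian.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Pick the byte order once, at the call site, instead of repeating the
 * ternary on every table entry.
 */
#define bswap16_based_on_endian(big_endian, value) \
	((big_endian) ? htobe16(value) : htole16(value))

int main(void)
{
	bool big_endian = false;	/* in the driver this depends on the target format */
	uint16_t thresh = bswap16_based_on_endian(big_endian, 0x127c);

	printf("stored value: 0x%04x\n", (unsigned int)thresh);
	return 0;
}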

Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Reviewed-by: Wyatt Wood <Wyatt.Wood@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../amd/display/modules/power/power_helpers.c | 50 ++++++++++---------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
index 8c37bcc27132..60b92f099af5 100644
--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
@@ -29,6 +29,8 @@
 #include "core_types.h"
 
 #define DIV_ROUNDUP(a, b) (((a)+((b)/2))/(b))
+#define bswap16_based_on_endian(big_endian, value) \
+	(big_endian) ? cpu_to_be16(value) : cpu_to_le16(value)
 
 /* Possible Min Reduction config from least aggressive to most aggressive
  *  0    1     2     3     4     5     6     7     8     9     10    11   12
@@ -624,30 +626,30 @@ void fill_iram_v_2_3(struct iram_table_v_2_2 *ram_table, struct dmcu_iram_parame
 	ram_table->iir_curve[4] = 0x65;
 
 	//Gamma 2.2
-	ram_table->crgb_thresh[0] = (big_endian) ? cpu_to_be16(0x127c) : cpu_to_le16(0x127c);
-	ram_table->crgb_thresh[1] = (big_endian) ? cpu_to_be16(0x151b) : cpu_to_le16(0x151b);
-	ram_table->crgb_thresh[2] = (big_endian) ? cpu_to_be16(0x17d5) : cpu_to_le16(0x17d5);
-	ram_table->crgb_thresh[3] = (big_endian) ? cpu_to_be16(0x1a56) : cpu_to_le16(0x1a56);
-	ram_table->crgb_thresh[4] = (big_endian) ? cpu_to_be16(0x1c83) : cpu_to_le16(0x1c83);
-	ram_table->crgb_thresh[5] = (big_endian) ? cpu_to_be16(0x1e72) : cpu_to_le16(0x1e72);
-	ram_table->crgb_thresh[6] = (big_endian) ? cpu_to_be16(0x20f0) : cpu_to_le16(0x20f0);
-	ram_table->crgb_thresh[7] = (big_endian) ? cpu_to_be16(0x232b) : cpu_to_le16(0x232b);
-	ram_table->crgb_offset[0] = (big_endian) ? cpu_to_be16(0x2999) : cpu_to_le16(0x2999);
-	ram_table->crgb_offset[1] = (big_endian) ? cpu_to_be16(0x3999) : cpu_to_le16(0x3999);
-	ram_table->crgb_offset[2] = (big_endian) ? cpu_to_be16(0x4666) : cpu_to_le16(0x4666);
-	ram_table->crgb_offset[3] = (big_endian) ? cpu_to_be16(0x5999) : cpu_to_le16(0x5999);
-	ram_table->crgb_offset[4] = (big_endian) ? cpu_to_be16(0x6333) : cpu_to_le16(0x6333);
-	ram_table->crgb_offset[5] = (big_endian) ? cpu_to_be16(0x7800) : cpu_to_le16(0x7800);
-	ram_table->crgb_offset[6] = (big_endian) ? cpu_to_be16(0x8c00) : cpu_to_le16(0x8c00);
-	ram_table->crgb_offset[7] = (big_endian) ? cpu_to_be16(0xa000) : cpu_to_le16(0xa000);
-	ram_table->crgb_slope[0]  = (big_endian) ? cpu_to_be16(0x3609) : cpu_to_le16(0x3609);
-	ram_table->crgb_slope[1]  = (big_endian) ? cpu_to_be16(0x2dfa) : cpu_to_le16(0x2dfa);
-	ram_table->crgb_slope[2]  = (big_endian) ? cpu_to_be16(0x27ea) : cpu_to_le16(0x27ea);
-	ram_table->crgb_slope[3]  = (big_endian) ? cpu_to_be16(0x235d) : cpu_to_le16(0x235d);
-	ram_table->crgb_slope[4]  = (big_endian) ? cpu_to_be16(0x2042) : cpu_to_le16(0x2042);
-	ram_table->crgb_slope[5]  = (big_endian) ? cpu_to_be16(0x1dc3) : cpu_to_le16(0x1dc3);
-	ram_table->crgb_slope[6]  = (big_endian) ? cpu_to_be16(0x1b1a) : cpu_to_le16(0x1b1a);
-	ram_table->crgb_slope[7]  = (big_endian) ? cpu_to_be16(0x1910) : cpu_to_le16(0x1910);
+	ram_table->crgb_thresh[0] = bswap16_based_on_endian(big_endian, 0x127c);
+	ram_table->crgb_thresh[1] = bswap16_based_on_endian(big_endian, 0x151b);
+	ram_table->crgb_thresh[2] = bswap16_based_on_endian(big_endian, 0x17d5);
+	ram_table->crgb_thresh[3] = bswap16_based_on_endian(big_endian, 0x1a56);
+	ram_table->crgb_thresh[4] = bswap16_based_on_endian(big_endian, 0x1c83);
+	ram_table->crgb_thresh[5] = bswap16_based_on_endian(big_endian, 0x1e72);
+	ram_table->crgb_thresh[6] = bswap16_based_on_endian(big_endian, 0x20f0);
+	ram_table->crgb_thresh[7] = bswap16_based_on_endian(big_endian, 0x232b);
+	ram_table->crgb_offset[0] = bswap16_based_on_endian(big_endian, 0x2999);
+	ram_table->crgb_offset[1] = bswap16_based_on_endian(big_endian, 0x3999);
+	ram_table->crgb_offset[2] = bswap16_based_on_endian(big_endian, 0x4666);
+	ram_table->crgb_offset[3] = bswap16_based_on_endian(big_endian, 0x5999);
+	ram_table->crgb_offset[4] = bswap16_based_on_endian(big_endian, 0x6333);
+	ram_table->crgb_offset[5] = bswap16_based_on_endian(big_endian, 0x7800);
+	ram_table->crgb_offset[6] = bswap16_based_on_endian(big_endian, 0x8c00);
+	ram_table->crgb_offset[7] = bswap16_based_on_endian(big_endian, 0xa000);
+	ram_table->crgb_slope[0]  = bswap16_based_on_endian(big_endian, 0x3609);
+	ram_table->crgb_slope[1]  = bswap16_based_on_endian(big_endian, 0x2dfa);
+	ram_table->crgb_slope[2]  = bswap16_based_on_endian(big_endian, 0x27ea);
+	ram_table->crgb_slope[3]  = bswap16_based_on_endian(big_endian, 0x235d);
+	ram_table->crgb_slope[4]  = bswap16_based_on_endian(big_endian, 0x2042);
+	ram_table->crgb_slope[5]  = bswap16_based_on_endian(big_endian, 0x1dc3);
+	ram_table->crgb_slope[6]  = bswap16_based_on_endian(big_endian, 0x1b1a);
+	ram_table->crgb_slope[7]  = bswap16_based_on_endian(big_endian, 0x1910);
 
 	fill_backlight_transform_table_v_2_2(
 			params, ram_table, big_endian);
-- 
2.26.2


* [PATCH 09/27] drm/amd/display: Implement some asic specific abm call backs.
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (7 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 08/27] drm/amd/display: Add bit swap helper based on endianness Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 10/27] drm/amd/display: FW release 1.0.10 Rodrigo Siqueira
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Yongqiang Sun, Bhawanpreet.Lakha, Anthony Koo

From: Yongqiang Sun <yongqiang.sun@amd.com>

[Why & How]
Implement the ABM set_pipe call stacks and add some ASIC-specific call
stacks for ABM.
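
The wiring follows DC's usual hardware-sequencer pattern: each ASIC's
init table fills in a set_pipe function pointer (the shared DCE11.0
helper for DCN1.0/DCN2.0 and, per the diffstat, a DCN2.1-specific
variant in dcn21_hwseq.c), and callers only go through dc->hwss. A
minimal stand-alone sketch of that pattern (simplified names, not the
real kernel structs):

#include <stdio.h>

struct pipe_ctx;	/* opaque in this sketch */

struct hw_sequencer_funcs {
	void (*set_pipe)(struct pipe_ctx *pipe_ctx);
};

static void dce110_set_pipe(struct pipe_ctx *pipe_ctx)
{
	(void)pipe_ctx;
	printf("generic DCE11.0 set_pipe\n");
}

static void dcn21_set_pipe(struct pipe_ctx *pipe_ctx)
{
	(void)pipe_ctx;
	printf("DCN2.1-specific set_pipe\n");
}

/* Per-ASIC init tables choose the callback; callers never care which. */
static const struct hw_sequencer_funcs dcn10_funcs = {
	.set_pipe = dce110_set_pipe,
};

static const struct hw_sequencer_funcs dcn21_funcs = {
	.set_pipe = dcn21_set_pipe,
};

int main(void)
{
	dcn10_funcs.set_pipe(NULL);
	dcn21_funcs.set_pipe(NULL);
	return 0;
}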

Signed-off-by: Yongqiang Sun <yongqiang.sun@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c | 92 -------------------
 .../display/dc/dce110/dce110_hw_sequencer.c   | 11 +++
 .../display/dc/dce110/dce110_hw_sequencer.h   |  1 +
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |  3 +-
 .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c |  1 +
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    |  3 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c |  1 +
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.c    | 89 ++++++++++++++++++
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.h    |  6 ++
 .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c |  5 +-
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |  2 +
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  4 +
 12 files changed, 120 insertions(+), 98 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
index da0b29abfbda..0cf130dc4e52 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm.c
@@ -50,71 +50,7 @@
 
 #define DISABLE_ABM_IMMEDIATELY 255
 
-static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t panel_inst)
-{
-	union dmub_rb_cmd cmd;
-	struct dc_context *dc = abm->ctx;
-	uint32_t ramping_boundary = 0xFFFF;
-
-	cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
-	cmd.abm_set_pipe.header.sub_type = DMUB_CMD__ABM_SET_PIPE;
-	cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = otg_inst;
-	cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = panel_inst;
-	cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
-	cmd.abm_set_pipe.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pipe_data);
-
-	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
-	dc_dmub_srv_cmd_execute(dc->dmub_srv);
-	dc_dmub_srv_wait_idle(dc->dmub_srv);
-
-	return true;
-}
-
-static void dmcub_set_backlight_level(
-	struct dce_abm *dce_abm,
-	uint32_t backlight_pwm_u16_16,
-	uint32_t frame_ramp,
-	uint32_t otg_inst,
-	uint32_t panel_inst)
-{
-	union dmub_rb_cmd cmd;
-	struct dc_context *dc = dce_abm->base.ctx;
-	unsigned int backlight_8_bit = 0;
-	uint32_t s2;
-
-	if (backlight_pwm_u16_16 & 0x10000)
-		// Check for max backlight condition
-		backlight_8_bit = 0xFF;
-	else
-		// Take MSB of fractional part since backlight is not max
-		backlight_8_bit = (backlight_pwm_u16_16 >> 8) & 0xFF;
-
-	dmub_abm_set_pipe(&dce_abm->base, otg_inst, panel_inst);
-
-	REG_UPDATE(BL1_PWM_USER_LEVEL, BL1_PWM_USER_LEVEL, backlight_pwm_u16_16);
-
-	if (otg_inst == 0)
-		frame_ramp = 0;
-
-	cmd.abm_set_backlight.header.type = DMUB_CMD__ABM;
-	cmd.abm_set_backlight.header.sub_type = DMUB_CMD__ABM_SET_BACKLIGHT;
-	cmd.abm_set_backlight.abm_set_backlight_data.frame_ramp = frame_ramp;
-	cmd.abm_set_backlight.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_backlight_data);
 
-	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
-	dc_dmub_srv_cmd_execute(dc->dmub_srv);
-	dc_dmub_srv_wait_idle(dc->dmub_srv);
-
-	// Update requested backlight level
-	s2 = REG_READ(BIOS_SCRATCH_2);
-
-	s2 &= ~ATOM_S2_CURRENT_BL_LEVEL_MASK;
-	backlight_8_bit &= (ATOM_S2_CURRENT_BL_LEVEL_MASK >>
-				ATOM_S2_CURRENT_BL_LEVEL_SHIFT);
-	s2 |= (backlight_8_bit << ATOM_S2_CURRENT_BL_LEVEL_SHIFT);
-
-	REG_WRITE(BIOS_SCRATCH_2, s2);
-}
 
 static void dmub_abm_enable_fractional_pwm(struct dc_context *dc)
 {
@@ -211,31 +147,6 @@ static bool dmub_abm_set_level(struct abm *abm, uint32_t level)
 	return true;
 }
 
-static bool dmub_abm_immediate_disable(struct abm *abm, uint32_t panel_inst)
-{
-	dmub_abm_set_pipe(abm, DISABLE_ABM_IMMEDIATELY, panel_inst);
-
-	return true;
-}
-
-static bool dmub_abm_set_backlight_level_pwm(
-		struct abm *abm,
-		unsigned int backlight_pwm_u16_16,
-		unsigned int frame_ramp,
-		unsigned int otg_inst,
-		uint32_t panel_inst)
-{
-	struct dce_abm *dce_abm = TO_DMUB_ABM(abm);
-
-	dmcub_set_backlight_level(dce_abm,
-			backlight_pwm_u16_16,
-			frame_ramp,
-			otg_inst,
-			panel_inst);
-
-	return true;
-}
-
 static bool dmub_abm_init_config(struct abm *abm,
 	const char *src,
 	unsigned int bytes)
@@ -266,11 +177,8 @@ static bool dmub_abm_init_config(struct abm *abm,
 static const struct abm_funcs abm_funcs = {
 	.abm_init = dmub_abm_init,
 	.set_abm_level = dmub_abm_set_level,
-	.set_pipe = dmub_abm_set_pipe,
-	.set_backlight_level_pwm = dmub_abm_set_backlight_level_pwm,
 	.get_current_backlight = dmub_abm_get_current_backlight,
 	.get_target_backlight = dmub_abm_get_target_backlight,
-	.set_abm_immediate_disable = dmub_abm_immediate_disable,
 	.init_abm_config = dmub_abm_init_config,
 };
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index b77e9dc16086..a475e529ae1c 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -2767,6 +2767,16 @@ void dce110_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx)
 		panel_cntl->funcs->store_backlight_level(panel_cntl);
 }
 
+void dce110_set_pipe(struct pipe_ctx *pipe_ctx)
+{
+	struct abm *abm = pipe_ctx->stream_res.abm;
+	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
+	uint32_t otg_inst = pipe_ctx->stream_res.tg->inst + 1;
+
+	if (abm && panel_cntl)
+		abm->funcs->set_pipe(abm, otg_inst, panel_cntl->inst);
+}
+
 static const struct hw_sequencer_funcs dce110_funcs = {
 	.program_gamut_remap = program_gamut_remap,
 	.program_output_csc = program_output_csc,
@@ -2804,6 +2814,7 @@ static const struct hw_sequencer_funcs dce110_funcs = {
 	.set_cursor_attribute = dce110_set_cursor_attribute,
 	.set_backlight_level = dce110_set_backlight_level,
 	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
+	.set_pipe = dce110_set_pipe,
 };
 
 static const struct hwseq_private_funcs dce110_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
index fe5326df00f7..b6f3843d3d05 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
@@ -89,6 +89,7 @@ bool dce110_set_backlight_level(struct pipe_ctx *pipe_ctx,
 		uint32_t backlight_pwm_u16_16,
 		uint32_t frame_ramp);
 void dce110_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx);
+void dce110_set_pipe(struct pipe_ctx *pipe_ctx);
 
 #endif /* __DC_HWSS_DCE110_H__ */
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index f36d1f57b846..27cae98936ea 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2506,8 +2506,7 @@ void dcn10_blank_pixel_data(
 		if (stream_res->tg->funcs->set_blank)
 			stream_res->tg->funcs->set_blank(stream_res->tg, blank);
 		if (stream_res->abm) {
-			stream_res->abm->funcs->set_pipe(stream_res->abm, stream_res->tg->inst + 1,
-					stream->link->panel_cntl->inst);
+			dc->hwss.set_pipe(pipe_ctx);
 			stream_res->abm->funcs->set_abm_level(stream_res->abm, stream->abm_level);
 		}
 	} else if (blank) {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
index 897a3d25685a..9f8c89b6a763 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
@@ -74,6 +74,7 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
 	.set_backlight_level = dce110_set_backlight_level,
 	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
+	.set_pipe = dce110_set_pipe,
 };
 
 static const struct hwseq_private_funcs dcn10_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index da5333d165ac..258dcd33787e 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -996,8 +996,7 @@ void dcn20_blank_pixel_data(
 
 	if (!blank)
 		if (stream_res->abm) {
-			stream_res->abm->funcs->set_pipe(stream_res->abm, stream_res->tg->inst + 1,
-					stream->link->panel_cntl->inst);
+			dc->hwss.set_pipe(pipe_ctx);
 			stream_res->abm->funcs->set_abm_level(stream_res->abm, stream->abm_level);
 		}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
index a8bcd747d7ba..e20760fa11ff 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
@@ -85,6 +85,7 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
 	.set_backlight_level = dce110_set_backlight_level,
 	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
+	.set_pipe = dce110_set_pipe,
 };
 
 static const struct hwseq_private_funcs dcn20_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c
index ada65b1a7eb1..01f1d3d9a639 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c
@@ -28,10 +28,13 @@
 #include "core_types.h"
 #include "resource.h"
 #include "dce/dce_hwseq.h"
+#include "dce110/dce110_hw_sequencer.h"
 #include "dcn21_hwseq.h"
 #include "vmid.h"
 #include "reg_helper.h"
 #include "hw/clk_mgr.h"
+#include "dc_dmub_srv.h"
+#include "abm.h"
 
 
 #define DC_LOGGER_INIT(logger)
@@ -134,3 +137,89 @@ void dcn21_PLAT_58856_wa(struct dc_state *context, struct pipe_ctx *pipe_ctx)
 	pipe_ctx->stream->dpms_off = true;
 }
 
+static bool dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst, uint32_t option, uint32_t panel_inst)
+{
+	union dmub_rb_cmd cmd;
+	struct dc_context *dc = abm->ctx;
+	uint32_t ramping_boundary = 0xFFFF;
+
+	cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
+	cmd.abm_set_pipe.header.sub_type = DMUB_CMD__ABM_SET_PIPE;
+	cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = otg_inst;
+	cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = option;
+	cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = panel_inst;
+	cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
+	cmd.abm_set_pipe.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pipe_data);
+
+	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
+	dc_dmub_srv_cmd_execute(dc->dmub_srv);
+	dc_dmub_srv_wait_idle(dc->dmub_srv);
+
+	return true;
+}
+
+void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx)
+{
+	struct abm *abm = pipe_ctx->stream_res.abm;
+	uint32_t otg_inst = pipe_ctx->stream_res.tg->inst;
+	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
+
+	struct dmcu *dmcu = pipe_ctx->stream->ctx->dc->res_pool->dmcu;
+
+	if (dmcu) {
+		dce110_set_abm_immediate_disable(pipe_ctx);
+		return;
+	}
+
+	if (abm && panel_cntl)
+		dmub_abm_set_pipe(abm, otg_inst, SET_ABM_PIPE_IMMEDIATELY_DISABLE,
+				panel_cntl->inst);
+}
+
+void dcn21_set_pipe(struct pipe_ctx *pipe_ctx)
+{
+	struct abm *abm = pipe_ctx->stream_res.abm;
+	uint32_t otg_inst = pipe_ctx->stream_res.tg->inst;
+	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
+	struct dmcu *dmcu = pipe_ctx->stream->ctx->dc->res_pool->dmcu;
+
+	if (dmcu) {
+		dce110_set_pipe(pipe_ctx);
+		return;
+	}
+
+	if (abm && panel_cntl)
+		dmub_abm_set_pipe(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst);
+}
+
+bool dcn21_set_backlight_level(struct pipe_ctx *pipe_ctx,
+		uint32_t backlight_pwm_u16_16,
+		uint32_t frame_ramp)
+{
+	union dmub_rb_cmd cmd;
+	struct dc_context *dc = pipe_ctx->stream->ctx;
+	struct abm *abm = pipe_ctx->stream_res.abm;
+	uint32_t otg_inst = pipe_ctx->stream_res.tg->inst;
+	struct panel_cntl *panel_cntl = pipe_ctx->stream->link->panel_cntl;
+
+	if (dc->dc->res_pool->dmcu) {
+		dce110_set_backlight_level(pipe_ctx, backlight_pwm_u16_16, frame_ramp);
+		return true;
+	}
+
+	if (abm && panel_cntl)
+		dmub_abm_set_pipe(abm, otg_inst, SET_ABM_PIPE_NORMAL, panel_cntl->inst);
+
+	cmd.abm_set_backlight.header.type = DMUB_CMD__ABM;
+	cmd.abm_set_backlight.header.sub_type = DMUB_CMD__ABM_SET_BACKLIGHT;
+	cmd.abm_set_backlight.abm_set_backlight_data.frame_ramp = frame_ramp;
+	cmd.abm_set_backlight.abm_set_backlight_data.backlight_user_level = backlight_pwm_u16_16;
+	cmd.abm_set_backlight.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_backlight_data);
+
+	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
+	dc_dmub_srv_cmd_execute(dc->dmub_srv);
+	dc_dmub_srv_wait_idle(dc->dmub_srv);
+
+	return true;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h
index 26bf24d3b59f..9e97747e57cd 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h
@@ -47,4 +47,10 @@ void dcn21_optimize_pwr_state(
 void dcn21_PLAT_58856_wa(struct dc_state *context,
 		struct pipe_ctx *pipe_ctx);
 
+void dcn21_set_pipe(struct pipe_ctx *pipe_ctx);
+void dcn21_set_abm_immediate_disable(struct pipe_ctx *pipe_ctx);
+bool dcn21_set_backlight_level(struct pipe_ctx *pipe_ctx,
+		uint32_t backlight_pwm_u16_16,
+		uint32_t frame_ramp);
+
 #endif /* __DC_HWSS_DCN21_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
index e97dfaa656e9..9a2d1f755839 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
@@ -87,8 +87,9 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
 	.power_down = dce110_power_down,
-	.set_backlight_level = dce110_set_backlight_level,
-	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
+	.set_backlight_level = dcn21_set_backlight_level,
+	.set_abm_immediate_disable = dcn21_set_abm_immediate_disable,
+	.set_pipe = dcn21_set_pipe,
 };
 
 static const struct hwseq_private_funcs dcn21_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
index 3b2ea9bdb62c..2e8f3fecc6a3 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
@@ -198,6 +198,8 @@ struct hw_sequencer_funcs {
 
 	void (*set_abm_immediate_disable)(struct pipe_ctx *pipe_ctx);
 
+	void (*set_pipe)(struct pipe_ctx *pipe_ctx);
+
 
 };
 
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
index 599bf2055bcb..cbfde2706c18 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
@@ -36,6 +36,9 @@
 #define DMUB_RB_SIZE (DMUB_RB_CMD_SIZE * DMUB_RB_MAX_ENTRY)
 #define REG_SET_MASK 0xFFFF
 
+#define SET_ABM_PIPE_GRADUALLY_DISABLE           0
+#define SET_ABM_PIPE_IMMEDIATELY_DISABLE         255
+#define SET_ABM_PIPE_NORMAL                      1
 
 /*
  * Command IDs should be treated as stable ABI.
@@ -272,6 +275,7 @@ struct dmub_rb_cmd_abm_set_pipe {
 
 struct dmub_cmd_abm_set_backlight_data {
 	uint32_t frame_ramp;
+	uint32_t backlight_user_level;
 };
 
 struct dmub_rb_cmd_abm_set_backlight {
-- 
2.26.2


* [PATCH 10/27] drm/amd/display: FW release 1.0.10
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (8 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 09/27] drm/amd/display: Implement some asic specific abm call backs Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 11/27] drm/amd/display: Fix ABM memory alignment issue Rodrigo Siqueira
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Bhawanpreet.Lakha, Anthony Koo

From: Anthony Koo <Anthony.Koo@amd.com>

Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c |  2 +-
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   | 12 ++++---
 .../drm/amd/display/dmub/inc/dmub_cmd_dal.h   | 35 +++++++++++++++++++
 .../gpu/drm/amd/display/dmub/inc/dmub_types.h |  9 +++--
 4 files changed, 49 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
index 044a0133ebb1..fd4e1021903a 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
@@ -231,7 +231,7 @@ static bool dmub_psr_copy_settings(struct dmub_psr *dmub,
 	copy_settings_data->smu_optimizations_en		= psr_context->allow_smu_optimizations;
 	copy_settings_data->frame_delay				= psr_context->frame_delay;
 	copy_settings_data->frame_cap_ind			= psr_context->psrFrameCaptureIndicationReq;
-	copy_settings_data->debug.visual_confirm		= dc->dc->debug.visual_confirm == VISUAL_CONFIRM_PSR ?
+	copy_settings_data->debug.bitfields.visual_confirm	= dc->dc->debug.visual_confirm == VISUAL_CONFIRM_PSR ?
 									true : false;
 
 	dc_dmub_srv_cmd_queue(dc->dmub_srv, &cmd);
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
index cbfde2706c18..7782b7fc1ce0 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
@@ -219,6 +219,7 @@ struct dmub_rb_cmd_dpphy_init {
 };
 
 struct dmub_cmd_psr_copy_settings_data {
+	union dmub_psr_debug_flags debug;
 	uint16_t psr_level;
 	uint8_t dpp_inst;
 	uint8_t mpcc_inst;
@@ -231,7 +232,7 @@ struct dmub_cmd_psr_copy_settings_data {
 	uint8_t smu_optimizations_en;
 	uint8_t frame_delay;
 	uint8_t frame_cap_ind;
-	struct dmub_psr_debug_flags debug;
+	uint8_t pad[3];
 };
 
 struct dmub_rb_cmd_psr_copy_settings {
@@ -241,6 +242,7 @@ struct dmub_rb_cmd_psr_copy_settings {
 
 struct dmub_cmd_psr_set_level_data {
 	uint16_t psr_level;
+	uint8_t pad[2];
 };
 
 struct dmub_rb_cmd_psr_set_level {
@@ -262,10 +264,10 @@ struct dmub_rb_cmd_psr_set_version {
 };
 
 struct dmub_cmd_abm_set_pipe_data {
-	uint32_t ramping_boundary;
-	uint32_t otg_inst;
-	uint32_t panel_inst;
-	uint32_t set_pipe_option;
+	uint8_t otg_inst;
+	uint8_t panel_inst;
+	uint8_t set_pipe_option;
+	uint8_t ramping_boundary; // TODO: Remove this
 };
 
 struct dmub_rb_cmd_abm_set_pipe {
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
index e42de9ded275..3ed77b6f0e44 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
@@ -26,6 +26,11 @@
 #ifndef _DMUB_CMD_DAL_H_
 #define _DMUB_CMD_DAL_H_
 
+#define NUM_AMBI_LEVEL                  5
+#define NUM_AGGR_LEVEL                  4
+#define NUM_POWER_FN_SEGS               8
+#define NUM_BL_CURVE_SEGS               16
+
 /*
  * Command IDs should be treated as stable ABI.
  * Do not reuse or modify IDs.
@@ -53,4 +58,34 @@ enum dmub_cmd_abm_type {
 	DMUB_CMD__ABM_SET_PWM_FRAC	= 5,
 };
 
+/*
+ * Parameters for ABM2.4 algorithm.
+ * Padded explicitly to 32-bit boundary.
+ */
+struct abm_config_table {
+	/* Parameters for crgb conversion */
+	uint16_t crgb_thresh[NUM_POWER_FN_SEGS];                 // 0B
+	uint16_t crgb_offset[NUM_POWER_FN_SEGS];                 // 15B
+	uint16_t crgb_slope[NUM_POWER_FN_SEGS];                  // 31B
+
+	/* Parameters for custom curve */
+	uint16_t backlight_thresholds[NUM_BL_CURVE_SEGS];        // 47B
+	uint16_t backlight_offsets[NUM_BL_CURVE_SEGS];           // 79B
+
+	uint16_t ambient_thresholds_lux[NUM_AMBI_LEVEL];         // 111B
+	uint16_t min_abm_backlight;                              // 121B
+
+	uint8_t min_reduction[NUM_AMBI_LEVEL][NUM_AGGR_LEVEL];   // 123B
+	uint8_t max_reduction[NUM_AMBI_LEVEL][NUM_AGGR_LEVEL];   // 143B
+	uint8_t bright_pos_gain[NUM_AMBI_LEVEL][NUM_AGGR_LEVEL]; // 163B
+	uint8_t dark_pos_gain[NUM_AMBI_LEVEL][NUM_AGGR_LEVEL];   // 183B
+	uint8_t hybrid_factor[NUM_AGGR_LEVEL];                   // 203B
+	uint8_t contrast_factor[NUM_AGGR_LEVEL];                 // 207B
+	uint8_t deviation_gain[NUM_AGGR_LEVEL];                  // 211B
+	uint8_t min_knee[NUM_AGGR_LEVEL];                        // 215B
+	uint8_t max_knee[NUM_AGGR_LEVEL];                        // 219B
+	uint8_t iir_curve[NUM_AMBI_LEVEL];                       // 223B
+	uint8_t pad3[3];                                         // 228B
+};
+
 #endif /* _DMUB_CMD_DAL_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_types.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_types.h
index bed5b023a396..f61af26fc73e 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_types.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_types.h
@@ -63,9 +63,12 @@ union dmub_addr {
 	uint64_t quad_part;
 };
 
-struct dmub_psr_debug_flags {
-	uint8_t visual_confirm : 1;
-	uint8_t reserved : 7;
+union dmub_psr_debug_flags {
+	struct {
+		uint8_t visual_confirm : 1;
+	} bitfields;
+
+	unsigned int u32All;
 };
 
 #if defined(__cplusplus)
-- 
2.26.2


* [PATCH 11/27] drm/amd/display: Fix ABM memory alignment issue
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (9 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 10/27] drm/amd/display: FW release 1.0.10 Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:12 ` [PATCH 12/27] drm/amd/display: 3.2.85 Rodrigo Siqueira
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Wyatt Wood, Bhawanpreet.Lakha, Anthony Koo

From: Wyatt Wood <wyatt.wood@amd.com>

[Why]
Due to the packing of abm_config_table, memory addresses aren't aligned
to the 32-bit boundary that dmcub prefers. Therefore, when using
pointers to this structure, dmcub may automatically align the data it
reads from that address, yielding incorrect values.

[How]
Instead of packing to a 1-byte boundary, explicitly pack values to a
4-byte boundary. Since the driver side depends on the existing iram
table structure, we must copy into a second, correctly aligned
structure before passing it to the firmware.
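
To illustrate the copy step, here is a minimal user-space sketch with
hypothetical structures (not the driver's actual tables): a byte-packed
layout leaves multi-byte fields at odd offsets, so the fix copies field
by field into a naturally aligned structure before handing it to the
firmware.

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical byte-packed table, standing in for the iram layout. */
  struct packed_table {
          uint16_t thresh;
          uint8_t  gain;
          uint16_t offset;        /* lands at byte 3: misaligned */
  } __attribute__((packed));

  /* Explicitly padded copy that firmware can read without fixups. */
  struct aligned_table {
          uint16_t thresh;
          uint8_t  gain;
          uint8_t  pad;           /* keep 'offset' on a 2-byte boundary */
          uint16_t offset;
          uint8_t  pad2[2];       /* round the struct up to 32 bits */
  };

  int main(void)
  {
          struct packed_table src = { .thresh = 0x1234, .gain = 7, .offset = 0xBEEF };
          struct aligned_table dst;

          memset(&dst, 0, sizeof(dst));
          /* Field-by-field copy, mirroring dmub_init_abm_config() below. */
          dst.thresh = src.thresh;
          dst.gain   = src.gain;
          dst.offset = src.offset;

          printf("packed=%zu bytes, aligned=%zu bytes, offset field at byte %zu\n",
                 sizeof(src), sizeof(dst),
                 offsetof(struct aligned_table, offset));
          return 0;
  }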

Signed-off-by: Wyatt Wood <wyatt.wood@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../amd/display/modules/power/power_helpers.c | 45 +++++++++++++++++--
 1 file changed, 42 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
index 60b92f099af5..dbfdeed0b6e6 100644
--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
@@ -27,6 +27,7 @@
 #include "dc/inc/hw/abm.h"
 #include "dc.h"
 #include "core_types.h"
+#include "dmub_cmd_dal.h"
 
 #define DIV_ROUNDUP(a, b) (((a)+((b)/2))/(b))
 #define bswap16_based_on_endian(big_endian, value) \
@@ -658,17 +659,55 @@ void fill_iram_v_2_3(struct iram_table_v_2_2 *ram_table, struct dmcu_iram_parame
 bool dmub_init_abm_config(struct abm *abm,
 	struct dmcu_iram_parameters params)
 {
-	unsigned char ram_table[IRAM_SIZE];
+	struct iram_table_v_2_2 ram_table;
+	struct abm_config_table config;
 	bool result = false;
+	uint32_t i, j = 0;
 
 	if (abm == NULL)
 		return false;
 
 	memset(&ram_table, 0, sizeof(ram_table));
+	memset(&config, 0, sizeof(config));
+
+	fill_iram_v_2_3(&ram_table, params, false);
+
+	// We must copy to structure that is aligned to 32-bit
+	for (i = 0; i < NUM_POWER_FN_SEGS; i++) {
+		config.crgb_thresh[i] = ram_table.crgb_thresh[i];
+		config.crgb_offset[i] = ram_table.crgb_offset[i];
+		config.crgb_slope[i] = ram_table.crgb_slope[i];
+	}
+
+	for (i = 0; i < NUM_BL_CURVE_SEGS; i++) {
+		config.backlight_thresholds[i] = ram_table.backlight_thresholds[i];
+		config.backlight_offsets[i] = ram_table.backlight_offsets[i];
+	}
+
+	for (i = 0; i < NUM_AMBI_LEVEL; i++)
+		config.iir_curve[i] = ram_table.iir_curve[i];
+
+	for (i = 0; i < NUM_AMBI_LEVEL; i++) {
+		for (j = 0; j < NUM_AGGR_LEVEL; j++) {
+			config.min_reduction[i][j] = ram_table.min_reduction[i][j];
+			config.max_reduction[i][j] = ram_table.max_reduction[i][j];
+			config.bright_pos_gain[i][j] = ram_table.bright_pos_gain[i][j];
+			config.dark_pos_gain[i][j] = ram_table.dark_pos_gain[i][j];
+		}
+	}
+
+	for (i = 0; i < NUM_AGGR_LEVEL; i++) {
+		config.hybrid_factor[i] = ram_table.hybrid_factor[i];
+		config.contrast_factor[i] = ram_table.contrast_factor[i];
+		config.deviation_gain[i] = ram_table.deviation_gain[i];
+		config.min_knee[i] = ram_table.min_knee[i];
+		config.max_knee[i] = ram_table.max_knee[i];
+	}
+
+	config.min_abm_backlight = ram_table.min_abm_backlight;
 
-	fill_iram_v_2_3((struct iram_table_v_2_2 *)ram_table, params, false);
 	result = abm->funcs->init_abm_config(
-		abm, (char *)(&ram_table), IRAM_RESERVE_AREA_START_V2_2);
+		abm, (char *)(&config), sizeof(struct abm_config_table));
 
 	return result;
 }
-- 
2.26.2


* [PATCH 12/27] drm/amd/display: 3.2.85
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (10 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 11/27] drm/amd/display: Fix ABM memory alignment issue Rodrigo Siqueira
@ 2020-05-15 18:12 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 13/27] drm/amd/display: Remove dml_common_def file Rodrigo Siqueira
                   ` (14 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:12 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha

From: Aric Cyr <aric.cyr@amd.com>

Signed-off-by: Aric Cyr <aric.cyr@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 85908561c741..a4b30233aee3 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -42,7 +42,7 @@
 #include "inc/hw/dmcu.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.2.84"
+#define DC_VER "3.2.85"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
-- 
2.26.2


* [PATCH 13/27] drm/amd/display: Remove dml_common_def file
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (11 preceding siblings ...)
  2020-05-15 18:12 ` [PATCH 12/27] drm/amd/display: 3.2.85 Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 14/27] drm/amd/display: update dml interfaces and variables Rodrigo Siqueira
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Dmytro Laktyushkin, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Harry Wentland, Peter Zijlstra, Aurabindo.Pillai, Tony Cheng,
	Alexander Deucher, Bhawanpreet.Lakha, Christian König

During the rework to remove the FPU issues, I found the following
warning:

 [..] dml_common_defs.o: warning: objtool: dml_round()+0x9: FPU
      instruction outside of kernel_fpu_{begin,end}()

This file has a single function that does not need to live in its own
file. This commit drops the dml_common_defs file and moves the
dml_round function to dml_inline_defs.
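
For context on what objtool enforces here, below is a minimal sketch
(assuming the x86 kernel_fpu_begin()/kernel_fpu_end() API; it is not
part of this patch): any FPU instruction reachable from kernel code has
to sit inside such a bracket, and a standalone object file holding an
FP helper is exactly what the check flags.

  #include <asm/fpu/api.h>

  /*
   * Sketch only: integer in, integer out, with the floating-point work
   * bracketed so the FPU/SIMD state is saved and restored around it.
   */
  static int scaled_ratio(int num, int den)
  {
          int result;

          kernel_fpu_begin();     /* save FPU/SIMD state */
          result = (int)(((double)num * 1000.0) / (double)den);
          kernel_fpu_end();       /* restore FPU/SIMD state */

          return result;
  }

With dml_round() inlined into its callers, there is no longer a
standalone dml_common_defs.o carrying the FP instruction that objtool
flagged.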

CC: Christian König <christian.koenig@amd.com>
CC: Alexander Deucher <Alexander.Deucher@amd.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Tony Cheng <tony.cheng@amd.com>
CC: Harry Wentland <hwentlan@amd.com>
Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Reviewed-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dml/Makefile   |  2 -
 .../dc/dml/dcn20/display_rq_dlg_calc_20.h     |  1 -
 .../dc/dml/dcn20/display_rq_dlg_calc_20v2.h   |  1 -
 .../dc/dml/dcn21/display_rq_dlg_calc_21.h     |  2 +-
 .../drm/amd/display/dc/dml/display_mode_lib.h |  6 ++-
 .../drm/amd/display/dc/dml/display_mode_vba.h |  2 -
 .../display/dc/dml/display_rq_dlg_helpers.h   |  1 -
 .../display/dc/dml/dml1_display_rq_dlg_calc.h |  2 -
 .../drm/amd/display/dc/dml/dml_common_defs.c  | 43 -------------------
 .../drm/amd/display/dc/dml/dml_common_defs.h  | 37 ----------------
 .../drm/amd/display/dc/dml/dml_inline_defs.h  | 15 ++++++-
 11 files changed, 18 insertions(+), 94 deletions(-)
 delete mode 100644 drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.c
 delete mode 100644 drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.h

diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile b/drivers/gpu/drm/amd/display/dc/dml/Makefile
index 7ee8b8460a9b..e34c3376efc1 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
@@ -63,10 +63,8 @@ CFLAGS_$(AMDDALPATH)/dc/dml/dcn21/display_rq_dlg_calc_21.o := $(dml_ccflags)
 endif
 CFLAGS_$(AMDDALPATH)/dc/dml/dml1_display_rq_dlg_calc.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/display_rq_dlg_helpers.o := $(dml_ccflags)
-CFLAGS_$(AMDDALPATH)/dc/dml/dml_common_defs.o := $(dml_ccflags)
 
 DML = display_mode_lib.o display_rq_dlg_helpers.o dml1_display_rq_dlg_calc.o \
-	dml_common_defs.o
 
 ifdef CONFIG_DRM_AMD_DC_DCN
 DML += display_mode_vba.o dcn20/display_rq_dlg_calc_20.o dcn20/display_mode_vba_20.o
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.h b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.h
index 8c86b63ddf07..1e557ddcb638 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.h
@@ -26,7 +26,6 @@
 #ifndef __DML20_DISPLAY_RQ_DLG_CALC_H__
 #define __DML20_DISPLAY_RQ_DLG_CALC_H__
 
-#include "../dml_common_defs.h"
 #include "../display_rq_dlg_helpers.h"
 
 struct display_mode_lib;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.h b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.h
index 0378406bf7e7..0d53e871a9d1 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.h
@@ -26,7 +26,6 @@
 #ifndef __DML20V2_DISPLAY_RQ_DLG_CALC_H__
 #define __DML20V2_DISPLAY_RQ_DLG_CALC_H__
 
-#include "../dml_common_defs.h"
 #include "../display_rq_dlg_helpers.h"
 
 struct display_mode_lib;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.h b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.h
index 83e95f8cbff2..e8f7785e3fc6 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.h
@@ -26,7 +26,7 @@
 #ifndef __DML21_DISPLAY_RQ_DLG_CALC_H__
 #define __DML21_DISPLAY_RQ_DLG_CALC_H__
 
-#include "../dml_common_defs.h"
+#include "dm_services.h"
 #include "../display_rq_dlg_helpers.h"
 
 struct display_mode_lib;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
index cf2758ca5b02..c77c3d827e4a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
@@ -25,8 +25,10 @@
 #ifndef __DISPLAY_MODE_LIB_H__
 #define __DISPLAY_MODE_LIB_H__
 
-
-#include "dml_common_defs.h"
+#include "dm_services.h"
+#include "dc_features.h"
+#include "display_mode_structs.h"
+#include "display_mode_enums.h"
 #include "display_mode_vba.h"
 
 enum dml_project {
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
index 6a7b20927a6b..3f559e725ab1 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
@@ -27,8 +27,6 @@
 #ifndef __DML2_DISPLAY_MODE_VBA_H__
 #define __DML2_DISPLAY_MODE_VBA_H__
 
-#include "dml_common_defs.h"
-
 struct display_mode_lib;
 
 void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib);
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.h b/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.h
index 1f24db830737..2555ef0358c2 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_rq_dlg_helpers.h
@@ -26,7 +26,6 @@
 #ifndef __DISPLAY_RQ_DLG_HELPERS_H__
 #define __DISPLAY_RQ_DLG_HELPERS_H__
 
-#include "dml_common_defs.h"
 #include "display_mode_lib.h"
 
 /* Function: Printer functions
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.h b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.h
index 304164986bd8..9c06913ad767 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.h
@@ -26,8 +26,6 @@
 #ifndef __DISPLAY_RQ_DLG_CALC_H__
 #define __DISPLAY_RQ_DLG_CALC_H__
 
-#include "dml_common_defs.h"
-
 struct display_mode_lib;
 
 #include "display_rq_dlg_helpers.h"
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.c b/drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.c
deleted file mode 100644
index 723af0b2dda0..000000000000
--- a/drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.c
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- * Copyright 2017 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
- */
-
-#include "dml_common_defs.h"
-#include "dcn_calc_math.h"
-
-#include "dml_inline_defs.h"
-
-double dml_round(double a)
-{
-	double round_pt = 0.5;
-	double ceil = dml_ceil(a, 1);
-	double floor = dml_floor(a, 1);
-
-	if (a - floor >= round_pt)
-		return ceil;
-	else
-		return floor;
-}
-
-
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.h b/drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.h
deleted file mode 100644
index f78cbae9db88..000000000000
--- a/drivers/gpu/drm/amd/display/dc/dml/dml_common_defs.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Copyright 2017 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
- */
-
-#ifndef __DC_COMMON_DEFS_H__
-#define __DC_COMMON_DEFS_H__
-
-#include "dm_services.h"
-#include "dc_features.h"
-#include "display_mode_structs.h"
-#include "display_mode_enums.h"
-
-
-double dml_round(double a);
-
-#endif /* __DC_COMMON_DEFS_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
index ded71ea82413..02e06c9b3230 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
@@ -26,7 +26,6 @@
 #ifndef __DML_INLINE_DEFS_H__
 #define __DML_INLINE_DEFS_H__
 
-#include "dml_common_defs.h"
 #include "dcn_calc_math.h"
 #include "dml_logger.h"
 
@@ -75,6 +74,18 @@ static inline double dml_floor(double a, double granularity)
 	return (double) dcn_bw_floor2(a, granularity);
 }
 
+static inline double dml_round(double a)
+{
+	double round_pt = 0.5;
+	double ceil = dml_ceil(a, 1);
+	double floor = dml_floor(a, 1);
+
+	if (a - floor >= round_pt)
+		return ceil;
+	else
+		return floor;
+}
+
 static inline int dml_log2(double x)
 {
 	return dml_round((double)dcn_bw_log(x, 2));
@@ -112,7 +123,7 @@ static inline double dml_log(double x, double base)
 
 static inline unsigned int dml_round_to_multiple(unsigned int num,
 						 unsigned int multiple,
-						 bool up)
+						 unsigned char up)
 {
 	unsigned int remainder;
 
-- 
2.26.2


* [PATCH 14/27] drm/amd/display: update dml interfaces and variables
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (12 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 13/27] drm/amd/display: Remove dml_common_def file Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 15/27] drm/amd/display: DP link layer test 4.2.1.1 fix due to specs update Rodrigo Siqueira
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Dmytro Laktyushkin,
	Eric Bernstein, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

Preparation for new ASIC support.

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Eric Bernstein <Eric.Bernstein@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../dc/dml/dcn20/display_rq_dlg_calc_20.c     |  33 +--
 .../dc/dml/dcn20/display_rq_dlg_calc_20v2.c   |  33 +--
 .../dc/dml/dcn21/display_rq_dlg_calc_21.c     |  36 +--
 .../amd/display/dc/dml/display_mode_enums.h   |   8 +-
 .../amd/display/dc/dml/display_mode_structs.h |  11 +
 .../drm/amd/display/dc/dml/display_mode_vba.c |  54 +++--
 .../drm/amd/display/dc/dml/display_mode_vba.h | 227 ++++++++++--------
 7 files changed, 186 insertions(+), 216 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
index ca807846032f..72423dc425dc 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
@@ -890,11 +890,6 @@ static void dml20_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 	double refcyc_per_req_delivery_c;
 
 	unsigned int full_recout_width;
-	double xfc_transfer_delay;
-	double xfc_precharge_delay;
-	double xfc_remote_surface_flip_latency;
-	double xfc_dst_y_delta_drq_limit;
-	double xfc_prefetch_margin;
 	double refcyc_per_req_delivery_pre_cur0;
 	double refcyc_per_req_delivery_cur0;
 	double refcyc_per_req_delivery_pre_cur1;
@@ -1344,22 +1339,6 @@ static void dml20_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 		ASSERT(refcyc_per_req_delivery_c < dml_pow(2, 13));
 	}
 
-	// XFC
-	xfc_transfer_delay = get_xfc_transfer_delay(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
-	xfc_precharge_delay = get_xfc_precharge_delay(mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-	xfc_remote_surface_flip_latency = get_xfc_remote_surface_flip_latency(mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-	xfc_dst_y_delta_drq_limit = xfc_remote_surface_flip_latency;
-	xfc_prefetch_margin = get_xfc_prefetch_margin(mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-
 	// TTU - Cursor
 	refcyc_per_req_delivery_pre_cur0 = 0.0;
 	refcyc_per_req_delivery_cur0 = 0.0;
@@ -1510,17 +1489,7 @@ static void dml20_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 	disp_dlg_regs->chunk_hdl_adjust_cur1 = 3;
 	disp_dlg_regs->dst_y_offset_cur1 = 0;
 
-	disp_dlg_regs->xfc_reg_transfer_delay = xfc_transfer_delay;
-	disp_dlg_regs->xfc_reg_precharge_delay = xfc_precharge_delay;
-	disp_dlg_regs->xfc_reg_remote_surface_flip_latency = xfc_remote_surface_flip_latency;
-	disp_dlg_regs->xfc_reg_prefetch_margin = dml_ceil(xfc_prefetch_margin * refclk_freq_in_mhz,
-			1);
-
-	// slave has to have this value also set to off
-	if (src->xfc_enable && !src->xfc_slave)
-		disp_dlg_regs->dst_y_delta_drq_limit = dml_ceil(xfc_dst_y_delta_drq_limit, 1);
-	else
-		disp_dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
+	disp_dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
 
 	disp_ttu_regs->refcyc_per_req_delivery_pre_l = (unsigned int) (refcyc_per_req_delivery_pre_l
 			* dml_pow(2, 10));
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
index 287b7a0ad108..9c78446c3a9d 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20v2.c
@@ -890,11 +890,6 @@ static void dml20v2_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 	double refcyc_per_req_delivery_c;
 
 	unsigned int full_recout_width;
-	double xfc_transfer_delay;
-	double xfc_precharge_delay;
-	double xfc_remote_surface_flip_latency;
-	double xfc_dst_y_delta_drq_limit;
-	double xfc_prefetch_margin;
 	double refcyc_per_req_delivery_pre_cur0;
 	double refcyc_per_req_delivery_cur0;
 	double refcyc_per_req_delivery_pre_cur1;
@@ -1345,22 +1340,6 @@ static void dml20v2_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 		ASSERT(refcyc_per_req_delivery_c < dml_pow(2, 13));
 	}
 
-	// XFC
-	xfc_transfer_delay = get_xfc_transfer_delay(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
-	xfc_precharge_delay = get_xfc_precharge_delay(mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-	xfc_remote_surface_flip_latency = get_xfc_remote_surface_flip_latency(mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-	xfc_dst_y_delta_drq_limit = xfc_remote_surface_flip_latency;
-	xfc_prefetch_margin = get_xfc_prefetch_margin(mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-
 	// TTU - Cursor
 	refcyc_per_req_delivery_pre_cur0 = 0.0;
 	refcyc_per_req_delivery_cur0 = 0.0;
@@ -1511,17 +1490,7 @@ static void dml20v2_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 	disp_dlg_regs->chunk_hdl_adjust_cur1 = 3;
 	disp_dlg_regs->dst_y_offset_cur1 = 0;
 
-	disp_dlg_regs->xfc_reg_transfer_delay = xfc_transfer_delay;
-	disp_dlg_regs->xfc_reg_precharge_delay = xfc_precharge_delay;
-	disp_dlg_regs->xfc_reg_remote_surface_flip_latency = xfc_remote_surface_flip_latency;
-	disp_dlg_regs->xfc_reg_prefetch_margin = dml_ceil(xfc_prefetch_margin * refclk_freq_in_mhz,
-			1);
-
-	// slave has to have this value also set to off
-	if (src->xfc_enable && !src->xfc_slave)
-		disp_dlg_regs->dst_y_delta_drq_limit = dml_ceil(xfc_dst_y_delta_drq_limit, 1);
-	else
-		disp_dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
+	disp_dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
 
 	disp_ttu_regs->refcyc_per_req_delivery_pre_l = (unsigned int) (refcyc_per_req_delivery_pre_l
 			* dml_pow(2, 10));
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
index 90a5fefef05b..edd41d358291 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
@@ -936,11 +936,6 @@ static void dml_rq_dlg_get_dlg_params(
 	double refcyc_per_req_delivery_c;
 
 	unsigned int full_recout_width;
-	double xfc_transfer_delay;
-	double xfc_precharge_delay;
-	double xfc_remote_surface_flip_latency;
-	double xfc_dst_y_delta_drq_limit;
-	double xfc_prefetch_margin;
 	double refcyc_per_req_delivery_pre_cur0;
 	double refcyc_per_req_delivery_cur0;
 	double refcyc_per_req_delivery_pre_cur1;
@@ -1412,25 +1407,6 @@ static void dml_rq_dlg_get_dlg_params(
 		ASSERT(refcyc_per_req_delivery_c < dml_pow(2, 13));
 	}
 
-	// XFC
-	xfc_transfer_delay = get_xfc_transfer_delay(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
-	xfc_precharge_delay = get_xfc_precharge_delay(
-			mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-	xfc_remote_surface_flip_latency = get_xfc_remote_surface_flip_latency(
-			mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-	xfc_dst_y_delta_drq_limit = xfc_remote_surface_flip_latency;
-	xfc_prefetch_margin = get_xfc_prefetch_margin(
-			mode_lib,
-			e2e_pipe_param,
-			num_pipes,
-			pipe_idx);
-
 	// TTU - Cursor
 	refcyc_per_req_delivery_pre_cur0 = 0.0;
 	refcyc_per_req_delivery_cur0 = 0.0;
@@ -1621,17 +1597,7 @@ static void dml_rq_dlg_get_dlg_params(
 	disp_dlg_regs->chunk_hdl_adjust_cur1 = 3;
 	disp_dlg_regs->dst_y_offset_cur1 = 0;
 
-	disp_dlg_regs->xfc_reg_transfer_delay = xfc_transfer_delay;
-	disp_dlg_regs->xfc_reg_precharge_delay = xfc_precharge_delay;
-	disp_dlg_regs->xfc_reg_remote_surface_flip_latency = xfc_remote_surface_flip_latency;
-	disp_dlg_regs->xfc_reg_prefetch_margin = dml_ceil(
-			xfc_prefetch_margin * refclk_freq_in_mhz, 1);
-
-	// slave has to have this value also set to off
-	if (src->xfc_enable && !src->xfc_slave)
-		disp_dlg_regs->dst_y_delta_drq_limit = dml_ceil(xfc_dst_y_delta_drq_limit, 1);
-	else
-		disp_dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
+	disp_dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
 
 	disp_ttu_regs->refcyc_per_req_delivery_pre_l = (unsigned int) (refcyc_per_req_delivery_pre_l
 			* dml_pow(2, 10));
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
index bfc2f39bd1ef..5baaefd29ba6 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
@@ -177,8 +177,14 @@ enum odm_combine_policy {
 };
 
 enum immediate_flip_requirement {
-	dm_immediate_flip_not_required,
 	dm_immediate_flip_required,
+	dm_immediate_flip_not_required,
+};
+
+enum unbounded_requesting_policy {
+	dm_unbounded_requesting,
+	dm_unbounded_requesting_edp_only,
+	dm_unbounded_requesting_disable
 };
 
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
index 439ffd04be34..dbd766a4342b 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
@@ -82,6 +82,7 @@ struct _vcs_dpi_soc_bounding_box_st {
 	double pct_ideal_dram_sdp_bw_after_urgent_pixel_only; // PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelDataOnly
 	double pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm;
 	double pct_ideal_dram_sdp_bw_after_urgent_vm_only;
+	double pct_ideal_sdp_bw_after_urgent;
 	double max_avg_sdp_bw_use_normal_percent;
 	double max_avg_dram_bw_use_normal_percent;
 	unsigned int max_request_size_bytes;
@@ -125,6 +126,7 @@ struct _vcs_dpi_ip_params_st {
 	bool use_min_dcfclk;
 	bool gpuvm_enable;
 	bool hostvm_enable;
+	bool dsc422_native_support;
 	unsigned int gpuvm_max_page_table_levels;
 	unsigned int hostvm_max_page_table_levels;
 	unsigned int hostvm_cached_page_table_levels;
@@ -143,6 +145,7 @@ struct _vcs_dpi_ip_params_st {
 	unsigned char pte_enable;
 	unsigned int pte_chunk_size_kbytes;
 	unsigned int meta_chunk_size_kbytes;
+	unsigned int min_meta_chunk_size_bytes;
 	unsigned int writeback_chunk_size_kbytes;
 	unsigned int line_buffer_size_bits;
 	unsigned int max_line_buffer_lines;
@@ -158,6 +161,7 @@ struct _vcs_dpi_ip_params_st {
 	double writeback_max_vscl_ratio;
 	double writeback_min_hscl_ratio;
 	double writeback_min_vscl_ratio;
+	double maximum_dsc_bits_per_component;
 	unsigned int writeback_max_hscl_taps;
 	unsigned int writeback_max_vscl_taps;
 	unsigned int writeback_line_buffer_luma_buffer_size;
@@ -219,11 +223,14 @@ struct _vcs_dpi_display_xfc_params_st {
 
 struct _vcs_dpi_display_pipe_source_params_st {
 	int source_format;
+	double dcc_fraction_of_zs_req_luma;
+	double dcc_fraction_of_zs_req_chroma;
 	unsigned char dcc;
 	unsigned int dcc_rate;
 	unsigned int dcc_rate_chroma;
 	unsigned char dcc_use_global;
 	unsigned char vm;
+	bool unbounded_req_mode;
 	bool gpuvm;    // gpuvm enabled
 	bool hostvm;    // hostvm enabled
 	bool gpuvm_levels_force_en;
@@ -324,6 +331,8 @@ struct _vcs_dpi_display_pipe_dest_params_st {
 	unsigned int vblank_end;
 	unsigned int htotal;
 	unsigned int vtotal;
+	unsigned int refresh_rate;
+	unsigned int vfront_porch;
 	unsigned int vactive;
 	unsigned int hactive;
 	unsigned int vstartup_start;
@@ -333,6 +342,7 @@ struct _vcs_dpi_display_pipe_dest_params_st {
 	unsigned char interlaced;
 	double pixel_rate_mhz;
 	unsigned char synchronized_vblank_all_planes;
+	unsigned char synchronize_timing_if_single_refresh_rate;
 	unsigned char otg_inst;
 	unsigned int odm_combine;
 	unsigned char use_maximum_vstartup;
@@ -469,6 +479,7 @@ struct _vcs_dpi_display_dlg_regs_st {
 	unsigned int refcyc_per_vm_req_vblank;
 	unsigned int refcyc_per_vm_req_flip;
 	unsigned int refcyc_per_vm_dmdata;
+	unsigned int dmdata_dl_delta;
 };
 
 struct _vcs_dpi_display_ttu_regs_st {
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
index b19988f54721..2d549736f9b8 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
@@ -91,15 +91,13 @@ dml_get_attr_func(wm_stutter_exit, mode_lib->vba.StutterExitWatermark);
 dml_get_attr_func(wm_stutter_enter_exit, mode_lib->vba.StutterEnterPlusExitWatermark);
 dml_get_attr_func(wm_dram_clock_change, mode_lib->vba.DRAMClockChangeWatermark);
 dml_get_attr_func(wm_writeback_dram_clock_change, mode_lib->vba.WritebackDRAMClockChangeWatermark);
-dml_get_attr_func(wm_xfc_underflow, mode_lib->vba.UrgentWatermark); // xfc_underflow maps to urgent
 dml_get_attr_func(stutter_efficiency, mode_lib->vba.StutterEfficiency);
 dml_get_attr_func(stutter_efficiency_no_vblank, mode_lib->vba.StutterEfficiencyNotIncludingVBlank);
+dml_get_attr_func(stutter_period, mode_lib->vba.StutterPeriod);
 dml_get_attr_func(urgent_latency, mode_lib->vba.UrgentLatency);
 dml_get_attr_func(urgent_extra_latency, mode_lib->vba.UrgentExtraLatency);
 dml_get_attr_func(nonurgent_latency, mode_lib->vba.NonUrgentLatencyTolerance);
-dml_get_attr_func(
-		dram_clock_change_latency,
-		mode_lib->vba.MinActiveDRAMClockChangeLatencySupported);
+dml_get_attr_func(dram_clock_change_latency, mode_lib->vba.MinActiveDRAMClockChangeLatencySupported);
 dml_get_attr_func(dispclk_calculated, mode_lib->vba.DISPCLK_calculated);
 dml_get_attr_func(total_data_read_bw, mode_lib->vba.TotalDataReadBandwidth);
 dml_get_attr_func(return_bw, mode_lib->vba.ReturnBW);
@@ -119,6 +117,7 @@ dml_get_pipe_attr_func(dsc_delay, mode_lib->vba.DSCDelay);
 dml_get_pipe_attr_func(dppclk_calculated, mode_lib->vba.DPPCLK_calculated);
 dml_get_pipe_attr_func(dscclk_calculated, mode_lib->vba.DSCCLK_calculated);
 dml_get_pipe_attr_func(min_ttu_vblank, mode_lib->vba.MinTTUVBlank);
+dml_get_pipe_attr_func(min_ttu_vblank_in_us, mode_lib->vba.MinTTUVBlank);
 dml_get_pipe_attr_func(vratio_prefetch_l, mode_lib->vba.VRatioPrefetchY);
 dml_get_pipe_attr_func(vratio_prefetch_c, mode_lib->vba.VRatioPrefetchC);
 dml_get_pipe_attr_func(dst_x_after_scaler, mode_lib->vba.DSTXAfterScaler);
@@ -127,18 +126,37 @@ dml_get_pipe_attr_func(dst_y_per_vm_vblank, mode_lib->vba.DestinationLinesToRequ
 dml_get_pipe_attr_func(dst_y_per_row_vblank, mode_lib->vba.DestinationLinesToRequestRowInVBlank);
 dml_get_pipe_attr_func(dst_y_prefetch, mode_lib->vba.DestinationLinesForPrefetch);
 dml_get_pipe_attr_func(dst_y_per_vm_flip, mode_lib->vba.DestinationLinesToRequestVMInImmediateFlip);
-dml_get_pipe_attr_func(
-		dst_y_per_row_flip,
-		mode_lib->vba.DestinationLinesToRequestRowInImmediateFlip);
-
-dml_get_pipe_attr_func(xfc_transfer_delay, mode_lib->vba.XFCTransferDelay);
-dml_get_pipe_attr_func(xfc_precharge_delay, mode_lib->vba.XFCPrechargeDelay);
-dml_get_pipe_attr_func(xfc_remote_surface_flip_latency, mode_lib->vba.XFCRemoteSurfaceFlipLatency);
-dml_get_pipe_attr_func(xfc_prefetch_margin, mode_lib->vba.XFCPrefetchMargin);
+dml_get_pipe_attr_func(dst_y_per_row_flip, mode_lib->vba.DestinationLinesToRequestRowInImmediateFlip);
 dml_get_pipe_attr_func(refcyc_per_vm_group_vblank, mode_lib->vba.TimePerVMGroupVBlank);
 dml_get_pipe_attr_func(refcyc_per_vm_group_flip, mode_lib->vba.TimePerVMGroupFlip);
 dml_get_pipe_attr_func(refcyc_per_vm_req_vblank, mode_lib->vba.TimePerVMRequestVBlank);
 dml_get_pipe_attr_func(refcyc_per_vm_req_flip, mode_lib->vba.TimePerVMRequestFlip);
+dml_get_pipe_attr_func(refcyc_per_vm_group_vblank_in_us, mode_lib->vba.TimePerVMGroupVBlank);
+dml_get_pipe_attr_func(refcyc_per_vm_group_flip_in_us, mode_lib->vba.TimePerVMGroupFlip);
+dml_get_pipe_attr_func(refcyc_per_vm_req_vblank_in_us, mode_lib->vba.TimePerVMRequestVBlank);
+dml_get_pipe_attr_func(refcyc_per_vm_req_flip_in_us, mode_lib->vba.TimePerVMRequestFlip);
+dml_get_pipe_attr_func(refcyc_per_vm_dmdata_in_us, mode_lib->vba.Tdmdl_vm);
+dml_get_pipe_attr_func(dmdata_dl_delta_in_us, mode_lib->vba.Tdmdl);
+dml_get_pipe_attr_func(refcyc_per_line_delivery_l_in_us, mode_lib->vba.DisplayPipeLineDeliveryTimeLuma);
+dml_get_pipe_attr_func(refcyc_per_line_delivery_c_in_us, mode_lib->vba.DisplayPipeLineDeliveryTimeChroma);
+dml_get_pipe_attr_func(refcyc_per_line_delivery_pre_l_in_us, mode_lib->vba.DisplayPipeLineDeliveryTimeLumaPrefetch);
+dml_get_pipe_attr_func(refcyc_per_line_delivery_pre_c_in_us, mode_lib->vba.DisplayPipeLineDeliveryTimeChromaPrefetch);
+dml_get_pipe_attr_func(refcyc_per_req_delivery_l_in_us, mode_lib->vba.DisplayPipeRequestDeliveryTimeLuma);
+dml_get_pipe_attr_func(refcyc_per_req_delivery_c_in_us, mode_lib->vba.DisplayPipeRequestDeliveryTimeChroma);
+dml_get_pipe_attr_func(refcyc_per_req_delivery_pre_l_in_us, mode_lib->vba.DisplayPipeRequestDeliveryTimeLumaPrefetch);
+dml_get_pipe_attr_func(refcyc_per_req_delivery_pre_c_in_us, mode_lib->vba.DisplayPipeRequestDeliveryTimeChromaPrefetch);
+dml_get_pipe_attr_func(refcyc_per_cursor_req_delivery_in_us, mode_lib->vba.CursorRequestDeliveryTime);
+dml_get_pipe_attr_func(refcyc_per_cursor_req_delivery_pre_in_us, mode_lib->vba.CursorRequestDeliveryTimePrefetch);
+dml_get_pipe_attr_func(refcyc_per_meta_chunk_nom_l_in_us, mode_lib->vba.TimePerMetaChunkNominal);
+dml_get_pipe_attr_func(refcyc_per_meta_chunk_nom_c_in_us, mode_lib->vba.TimePerChromaMetaChunkNominal);
+dml_get_pipe_attr_func(refcyc_per_meta_chunk_vblank_l_in_us, mode_lib->vba.TimePerMetaChunkVBlank);
+dml_get_pipe_attr_func(refcyc_per_meta_chunk_vblank_c_in_us, mode_lib->vba.TimePerChromaMetaChunkVBlank);
+dml_get_pipe_attr_func(refcyc_per_meta_chunk_flip_l_in_us, mode_lib->vba.TimePerMetaChunkFlip);
+dml_get_pipe_attr_func(refcyc_per_meta_chunk_flip_c_in_us, mode_lib->vba.TimePerChromaMetaChunkFlip);
+
+dml_get_pipe_attr_func(vupdate_offset, mode_lib->vba.VUpdateOffsetPix);
+dml_get_pipe_attr_func(vupdate_width, mode_lib->vba.VUpdateWidthPix);
+dml_get_pipe_attr_func(vready_offset, mode_lib->vba.VReadyOffsetPix);
 
 unsigned int get_vstartup_calculated(
 		struct display_mode_lib *mode_lib,
@@ -293,8 +311,10 @@ static void fetch_ip_params(struct display_mode_lib *mode_lib)
 	mode_lib->vba.MaxPSCLToLBThroughput = ip->max_pscl_lb_bw_pix_per_clk;
 	mode_lib->vba.ROBBufferSizeInKByte = ip->rob_buffer_size_kbytes;
 	mode_lib->vba.DETBufferSizeInKByte = ip->det_buffer_size_kbytes;
+
 	mode_lib->vba.PixelChunkSizeInKByte = ip->pixel_chunk_size_kbytes;
 	mode_lib->vba.MetaChunkSize = ip->meta_chunk_size_kbytes;
+	mode_lib->vba.MinMetaChunkSizeBytes = ip->min_meta_chunk_size_bytes;
 	mode_lib->vba.WritebackChunkSize = ip->writeback_chunk_size_kbytes;
 	mode_lib->vba.LineBufferSize = ip->line_buffer_size_bits;
 	mode_lib->vba.MaxLineBufferLines = ip->max_line_buffer_lines;
@@ -425,9 +445,7 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 		/* TODO: Needs to be set based on src->dcc_rate_luma/chroma */
 		mode_lib->vba.DCCRateLuma[mode_lib->vba.NumberOfActivePlanes] = src->dcc_rate;
 		mode_lib->vba.DCCRateChroma[mode_lib->vba.NumberOfActivePlanes] = src->dcc_rate_chroma;
-
-		mode_lib->vba.SourcePixelFormat[mode_lib->vba.NumberOfActivePlanes] =
-				(enum source_format_class) (src->source_format);
+		mode_lib->vba.SourcePixelFormat[mode_lib->vba.NumberOfActivePlanes] = (enum source_format_class) (src->source_format);
 		mode_lib->vba.HActive[mode_lib->vba.NumberOfActivePlanes] = dst->hactive;
 		mode_lib->vba.VActive[mode_lib->vba.NumberOfActivePlanes] = dst->vactive;
 		mode_lib->vba.SurfaceTiling[mode_lib->vba.NumberOfActivePlanes] =
@@ -648,10 +666,12 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 
 	// TODO: ODMCombineEnabled => 2 * DPPPerPlane...actually maybe not since all pipes are specified
 	// Do we want the dscclk to automatically be halved? Guess not since the value is specified
-
+	mode_lib->vba.SynchronizeTimingsIfSingleRefreshRate = pipes[0].pipe.dest.synchronize_timing_if_single_refresh_rate;
 	mode_lib->vba.SynchronizedVBlank = pipes[0].pipe.dest.synchronized_vblank_all_planes;
-	for (k = 1; k < mode_lib->vba.cache_num_pipes; ++k)
+	for (k = 1; k < mode_lib->vba.cache_num_pipes; ++k) {
+		ASSERT(mode_lib->vba.SynchronizeTimingsIfSingleRefreshRate == pipes[k].pipe.dest.synchronize_timing_if_single_refresh_rate);
 		ASSERT(mode_lib->vba.SynchronizedVBlank == pipes[k].pipe.dest.synchronized_vblank_all_planes);
+	}
 
 	mode_lib->vba.GPUVMEnable = false;
 	mode_lib->vba.HostVMEnable = false;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
index 3f559e725ab1..d281a6f933f4 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
@@ -41,9 +41,9 @@ dml_get_attr_decl(wm_stutter_exit);
 dml_get_attr_decl(wm_stutter_enter_exit);
 dml_get_attr_decl(wm_dram_clock_change);
 dml_get_attr_decl(wm_writeback_dram_clock_change);
-dml_get_attr_decl(wm_xfc_underflow);
 dml_get_attr_decl(stutter_efficiency_no_vblank);
 dml_get_attr_decl(stutter_efficiency);
+dml_get_attr_decl(stutter_period);
 dml_get_attr_decl(urgent_latency);
 dml_get_attr_decl(urgent_extra_latency);
 dml_get_attr_decl(nonurgent_latency);
@@ -61,6 +61,7 @@ dml_get_pipe_attr_decl(dsc_delay);
 dml_get_pipe_attr_decl(dppclk_calculated);
 dml_get_pipe_attr_decl(dscclk_calculated);
 dml_get_pipe_attr_decl(min_ttu_vblank);
+dml_get_pipe_attr_decl(min_ttu_vblank_in_us);
 dml_get_pipe_attr_decl(vratio_prefetch_l);
 dml_get_pipe_attr_decl(vratio_prefetch_c);
 dml_get_pipe_attr_decl(dst_x_after_scaler);
@@ -70,14 +71,36 @@ dml_get_pipe_attr_decl(dst_y_per_row_vblank);
 dml_get_pipe_attr_decl(dst_y_prefetch);
 dml_get_pipe_attr_decl(dst_y_per_vm_flip);
 dml_get_pipe_attr_decl(dst_y_per_row_flip);
-dml_get_pipe_attr_decl(xfc_transfer_delay);
-dml_get_pipe_attr_decl(xfc_precharge_delay);
-dml_get_pipe_attr_decl(xfc_remote_surface_flip_latency);
-dml_get_pipe_attr_decl(xfc_prefetch_margin);
 dml_get_pipe_attr_decl(refcyc_per_vm_group_vblank);
 dml_get_pipe_attr_decl(refcyc_per_vm_group_flip);
 dml_get_pipe_attr_decl(refcyc_per_vm_req_vblank);
 dml_get_pipe_attr_decl(refcyc_per_vm_req_flip);
+dml_get_pipe_attr_decl(refcyc_per_vm_group_vblank_in_us);
+dml_get_pipe_attr_decl(refcyc_per_vm_group_flip_in_us);
+dml_get_pipe_attr_decl(refcyc_per_vm_req_vblank_in_us);
+dml_get_pipe_attr_decl(refcyc_per_vm_req_flip_in_us);
+dml_get_pipe_attr_decl(refcyc_per_vm_dmdata_in_us);
+dml_get_pipe_attr_decl(dmdata_dl_delta_in_us);
+dml_get_pipe_attr_decl(refcyc_per_line_delivery_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_line_delivery_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_line_delivery_pre_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_line_delivery_pre_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_req_delivery_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_req_delivery_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_req_delivery_pre_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_req_delivery_pre_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_cursor_req_delivery_in_us);
+dml_get_pipe_attr_decl(refcyc_per_cursor_req_delivery_pre_in_us);
+dml_get_pipe_attr_decl(refcyc_per_meta_chunk_nom_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_meta_chunk_nom_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_meta_chunk_vblank_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_meta_chunk_vblank_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_meta_chunk_flip_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_meta_chunk_flip_c_in_us);
+
+dml_get_pipe_attr_decl(vupdate_offset);
+dml_get_pipe_attr_decl(vupdate_width);
+dml_get_pipe_attr_decl(vready_offset);
 
 unsigned int get_vstartup_calculated(
 		struct display_mode_lib *mode_lib,
@@ -229,8 +252,7 @@ struct vba_vars_st {
 	unsigned int OverrideGPUVMPageTableLevels;
 	unsigned int OverrideHostVMPageTableLevels;
 	unsigned int MetaChunkSize;
-	double MinPixelChunkSizeBytes;
-	double MinMetaChunkSizeBytes;
+	unsigned int MinMetaChunkSizeBytes;
 	unsigned int WritebackChunkSize;
 	bool ODMCapability;
 	unsigned int NumberOfDSC;
@@ -344,8 +366,8 @@ struct vba_vars_st {
 	unsigned int EffectiveLBLatencyHidingSourceLinesLuma;
 	unsigned int EffectiveLBLatencyHidingSourceLinesChroma;
 	double BandwidthAvailableForImmediateFlip;
-	unsigned int PrefetchMode[DC__VOLTAGE_STATES + 1][2];
-	unsigned int PrefetchModePerState[DC__VOLTAGE_STATES + 1][2];
+	unsigned int PrefetchMode[DC__VOLTAGE_STATES][2];
+	unsigned int PrefetchModePerState[DC__VOLTAGE_STATES][2];
 	unsigned int MinPrefetchMode;
 	unsigned int MaxPrefetchMode;
 	bool AnyLinesForVMOrRowTooLarge;
@@ -393,16 +415,16 @@ struct vba_vars_st {
 	unsigned int MaxNumWriteback;
 	bool WritebackLumaAndChromaScalingSupported;
 	bool Cursor64BppSupport;
-	double DCFCLKPerState[DC__VOLTAGE_STATES + 1];
-	double DCFCLKState[DC__VOLTAGE_STATES + 1][2];
-	double FabricClockPerState[DC__VOLTAGE_STATES + 1];
-	double SOCCLKPerState[DC__VOLTAGE_STATES + 1];
-	double PHYCLKPerState[DC__VOLTAGE_STATES + 1];
-	double DTBCLKPerState[DC__VOLTAGE_STATES + 1];
-	double MaxDppclk[DC__VOLTAGE_STATES + 1];
-	double MaxDSCCLK[DC__VOLTAGE_STATES + 1];
-	double DRAMSpeedPerState[DC__VOLTAGE_STATES + 1];
-	double MaxDispclk[DC__VOLTAGE_STATES + 1];
+	double DCFCLKPerState[DC__VOLTAGE_STATES];
+	double DCFCLKState[DC__VOLTAGE_STATES][2];
+	double FabricClockPerState[DC__VOLTAGE_STATES];
+	double SOCCLKPerState[DC__VOLTAGE_STATES];
+	double PHYCLKPerState[DC__VOLTAGE_STATES];
+	double DTBCLKPerState[DC__VOLTAGE_STATES];
+	double MaxDppclk[DC__VOLTAGE_STATES];
+	double MaxDSCCLK[DC__VOLTAGE_STATES];
+	double DRAMSpeedPerState[DC__VOLTAGE_STATES];
+	double MaxDispclk[DC__VOLTAGE_STATES];
 	int VoltageOverrideLevel;
 
 	/*outputs*/
@@ -413,11 +435,11 @@ struct vba_vars_st {
 	bool WritebackLatencySupport;
 	bool WritebackModeSupport;
 	bool Writeback10bpc420Supported;
-	bool BandwidthSupport[DC__VOLTAGE_STATES + 1];
+	bool BandwidthSupport[DC__VOLTAGE_STATES];
 	unsigned int TotalNumberOfActiveWriteback;
 	double CriticalPoint;
 	double ReturnBWToDCNPerState;
-	bool IsErrorResult[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	bool IsErrorResult[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	bool prefetch_vm_bw_valid;
 	bool prefetch_row_bw_valid;
 	bool NumberOfOTGSupport;
@@ -425,7 +447,7 @@ struct vba_vars_st {
 	bool WritebackScaleRatioAndTapsSupport;
 	bool CursorSupport;
 	bool PitchSupport;
-	enum dm_validation_status ValidationStatus[DC__VOLTAGE_STATES + 1];
+	enum dm_validation_status ValidationStatus[DC__VOLTAGE_STATES];
 
 	double WritebackLineBufferLumaBufferSize;
 	double WritebackLineBufferChromaBufferSize;
@@ -443,7 +465,7 @@ struct vba_vars_st {
 	double OutputLinkDPLanes[DC__NUM_DPP__MAX];
 	double ForcedOutputLinkBPP[DC__NUM_DPP__MAX]; // Mode Support only
 	double ImmediateFlipBW[DC__NUM_DPP__MAX];
-	double MaxMaxVStartup[DC__VOLTAGE_STATES + 1][2];
+	double MaxMaxVStartup[DC__VOLTAGE_STATES][2];
 
 	double WritebackLumaVExtra;
 	double WritebackChromaVExtra;
@@ -470,7 +492,7 @@ struct vba_vars_st {
 	double RoundedUpMaxSwathSizeBytesC;
 	double EffectiveDETLBLinesLuma;
 	double EffectiveDETLBLinesChroma;
-	double ProjectedDCFCLKDeepSleep[DC__VOLTAGE_STATES + 1][2];
+	double ProjectedDCFCLKDeepSleep[DC__VOLTAGE_STATES][2];
 	double PDEAndMetaPTEBytesPerFrameY;
 	double PDEAndMetaPTEBytesPerFrameC;
 	unsigned int MetaRowBytesY;
@@ -488,47 +510,47 @@ struct vba_vars_st {
 	double FractionOfUrgentBandwidthImmediateFlip; // Mode Support debugging output
 
 	/* ms locals */
-	double IdealSDPPortBandwidthPerState[DC__VOLTAGE_STATES + 1][2];
-	unsigned int NoOfDPP[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	double IdealSDPPortBandwidthPerState[DC__VOLTAGE_STATES][2];
+	unsigned int NoOfDPP[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	int NoOfDPPThisState[DC__NUM_DPP__MAX];
-	enum odm_combine_mode ODMCombineEnablePerState[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
+	enum odm_combine_mode ODMCombineEnablePerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
 	double SwathWidthYThisState[DC__NUM_DPP__MAX];
-	unsigned int SwathHeightCPerState[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	unsigned int SwathHeightCPerState[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	unsigned int SwathHeightYThisState[DC__NUM_DPP__MAX];
 	unsigned int SwathHeightCThisState[DC__NUM_DPP__MAX];
-	double VRatioPreY[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double VRatioPreC[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double RequiredPrefetchPixelDataBWLuma[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double RequiredPrefetchPixelDataBWChroma[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double RequiredDPPCLK[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	double VRatioPreY[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double VRatioPreC[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double RequiredPrefetchPixelDataBWLuma[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double RequiredPrefetchPixelDataBWChroma[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double RequiredDPPCLK[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	double RequiredDPPCLKThisState[DC__NUM_DPP__MAX];
-	bool PTEBufferSizeNotExceededY[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	bool PTEBufferSizeNotExceededC[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	bool BandwidthWithoutPrefetchSupported[DC__VOLTAGE_STATES + 1][2];
-	bool PrefetchSupported[DC__VOLTAGE_STATES + 1][2];
-	bool VRatioInPrefetchSupported[DC__VOLTAGE_STATES + 1][2];
-	double RequiredDISPCLK[DC__VOLTAGE_STATES + 1][2];
-	bool DISPCLK_DPPCLK_Support[DC__VOLTAGE_STATES + 1][2];
-	bool TotalAvailablePipesSupport[DC__VOLTAGE_STATES + 1][2];
-	unsigned int TotalNumberOfActiveDPP[DC__VOLTAGE_STATES + 1][2];
-	unsigned int TotalNumberOfDCCActiveDPP[DC__VOLTAGE_STATES + 1][2];
-	bool ModeSupport[DC__VOLTAGE_STATES + 1][2];
-	double ReturnBWPerState[DC__VOLTAGE_STATES + 1][2];
-	bool DIOSupport[DC__VOLTAGE_STATES + 1];
-	bool NotEnoughDSCUnits[DC__VOLTAGE_STATES + 1];
-	bool DSCCLKRequiredMoreThanSupported[DC__VOLTAGE_STATES + 1];
-	bool DTBCLKRequiredMoreThanSupported[DC__VOLTAGE_STATES + 1];
-	double UrgentRoundTripAndOutOfOrderLatencyPerState[DC__VOLTAGE_STATES + 1];
-	bool ROBSupport[DC__VOLTAGE_STATES + 1][2];
-	bool PTEBufferSizeNotExceeded[DC__VOLTAGE_STATES + 1][2];
-	bool TotalVerticalActiveBandwidthSupport[DC__VOLTAGE_STATES + 1][2];
-	double MaxTotalVerticalActiveAvailableBandwidth[DC__VOLTAGE_STATES + 1][2];
+	bool PTEBufferSizeNotExceededY[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool PTEBufferSizeNotExceededC[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool BandwidthWithoutPrefetchSupported[DC__VOLTAGE_STATES][2];
+	bool PrefetchSupported[DC__VOLTAGE_STATES][2];
+	bool VRatioInPrefetchSupported[DC__VOLTAGE_STATES][2];
+	double RequiredDISPCLK[DC__VOLTAGE_STATES][2];
+	bool DISPCLK_DPPCLK_Support[DC__VOLTAGE_STATES][2];
+	bool TotalAvailablePipesSupport[DC__VOLTAGE_STATES][2];
+	unsigned int TotalNumberOfActiveDPP[DC__VOLTAGE_STATES][2];
+	unsigned int TotalNumberOfDCCActiveDPP[DC__VOLTAGE_STATES][2];
+	bool ModeSupport[DC__VOLTAGE_STATES][2];
+	double ReturnBWPerState[DC__VOLTAGE_STATES][2];
+	bool DIOSupport[DC__VOLTAGE_STATES];
+	bool NotEnoughDSCUnits[DC__VOLTAGE_STATES];
+	bool DSCCLKRequiredMoreThanSupported[DC__VOLTAGE_STATES];
+	bool DTBCLKRequiredMoreThanSupported[DC__VOLTAGE_STATES];
+	double UrgentRoundTripAndOutOfOrderLatencyPerState[DC__VOLTAGE_STATES];
+	bool ROBSupport[DC__VOLTAGE_STATES][2];
+	bool PTEBufferSizeNotExceeded[DC__VOLTAGE_STATES][2];
+	bool TotalVerticalActiveBandwidthSupport[DC__VOLTAGE_STATES][2];
+	double MaxTotalVerticalActiveAvailableBandwidth[DC__VOLTAGE_STATES][2];
 	double PrefetchBW[DC__NUM_DPP__MAX];
-	double PDEAndMetaPTEBytesPerFrame[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double MetaRowBytes[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double DPTEBytesPerRow[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double PrefetchLinesY[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double PrefetchLinesC[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	double PDEAndMetaPTEBytesPerFrame[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double MetaRowBytes[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double DPTEBytesPerRow[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double PrefetchLinesY[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double PrefetchLinesC[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	unsigned int MaxNumSwY[DC__NUM_DPP__MAX];
 	unsigned int MaxNumSwC[DC__NUM_DPP__MAX];
 	double PrefillY[DC__NUM_DPP__MAX];
@@ -540,12 +562,12 @@ struct vba_vars_st {
 	double SwathWidthYSingleDPP[DC__NUM_DPP__MAX];
 	double BytePerPixelInDETY[DC__NUM_DPP__MAX];
 	double BytePerPixelInDETC[DC__NUM_DPP__MAX];
-	bool RequiresDSC[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
-	unsigned int NumberOfDSCSlice[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
-	double RequiresFEC[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
-	double OutputBppPerState[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
-	double DSCDelayPerState[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
-	bool ViewportSizeSupport[DC__VOLTAGE_STATES + 1][2];
+	bool RequiresDSC[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	unsigned int NumberOfDSCSlice[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	double RequiresFEC[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	double OutputBppPerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	double DSCDelayPerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	bool ViewportSizeSupport[DC__VOLTAGE_STATES][2];
 	unsigned int Read256BlockHeightY[DC__NUM_DPP__MAX];
 	unsigned int Read256BlockWidthY[DC__NUM_DPP__MAX];
 	unsigned int Read256BlockHeightC[DC__NUM_DPP__MAX];
@@ -560,7 +582,7 @@ struct vba_vars_st {
 	double WriteBandwidth[DC__NUM_DPP__MAX];
 	double PSCL_FACTOR[DC__NUM_DPP__MAX];
 	double PSCL_FACTOR_CHROMA[DC__NUM_DPP__MAX];
-	double MaximumVStartup[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	double MaximumVStartup[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	unsigned int MacroTileWidthY[DC__NUM_DPP__MAX];
 	unsigned int MacroTileWidthC[DC__NUM_DPP__MAX];
 	double AlignedDCCMetaPitch[DC__NUM_DPP__MAX];
@@ -574,8 +596,8 @@ struct vba_vars_st {
 	double DestinationLinesToRequestVMInImmediateFlip[DC__NUM_DPP__MAX];
 	double DestinationLinesToRequestRowInImmediateFlip[DC__NUM_DPP__MAX];
 	double final_flip_bw[DC__NUM_DPP__MAX];
-	bool ImmediateFlipSupportedForState[DC__VOLTAGE_STATES + 1][2];
-	double WritebackDelay[DC__VOLTAGE_STATES + 1][DC__NUM_DPP__MAX];
+	bool ImmediateFlipSupportedForState[DC__VOLTAGE_STATES][2];
+	double WritebackDelay[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
 	unsigned int vm_group_bytes[DC__NUM_DPP__MAX];
 	unsigned int dpte_group_bytes[DC__NUM_DPP__MAX];
 	unsigned int dpte_row_height[DC__NUM_DPP__MAX];
@@ -595,7 +617,7 @@ struct vba_vars_st {
 	double DisplayPipeLineDeliveryTimeChroma[DC__NUM_DPP__MAX];                     // WM
 	double DisplayPipeRequestDeliveryTimeLuma[DC__NUM_DPP__MAX];
 	double DisplayPipeRequestDeliveryTimeChroma[DC__NUM_DPP__MAX];
-	enum clock_change_support DRAMClockChangeSupport[DC__VOLTAGE_STATES + 1][2];
+	enum clock_change_support DRAMClockChangeSupport[DC__VOLTAGE_STATES][2];
 	double UrgentBurstFactorCursor[DC__NUM_DPP__MAX];
 	double UrgentBurstFactorCursorPre[DC__NUM_DPP__MAX];
 	double UrgentBurstFactorLuma[DC__NUM_DPP__MAX];
@@ -604,7 +626,7 @@ struct vba_vars_st {
 	double UrgentBurstFactorChromaPre[DC__NUM_DPP__MAX];
 
 
-	bool           MPCCombine[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	bool           MPCCombine[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	double         SwathWidthCSingleDPP[DC__NUM_DPP__MAX];
 	double         MaximumSwathWidthInLineBufferLuma;
 	double         MaximumSwathWidthInLineBufferChroma;
@@ -619,6 +641,7 @@ struct vba_vars_st {
 	double         dummy6;
 	double         dummy7[DC__NUM_DPP__MAX];
 	double         dummy8[DC__NUM_DPP__MAX];
+	double         dummy13[DC__NUM_DPP__MAX];
 	unsigned int        dummyinteger1ms[DC__NUM_DPP__MAX];
 	double        dummyinteger2ms[DC__NUM_DPP__MAX];
 	unsigned int        dummyinteger3[DC__NUM_DPP__MAX];
@@ -631,6 +654,9 @@ struct vba_vars_st {
 	unsigned int        dummyinteger10;
 	unsigned int        dummyinteger11;
 	unsigned int        dummyinteger12;
+	unsigned int        dummyinteger30;
+	unsigned int        dummyinteger31;
+	unsigned int        dummyinteger32;
 	unsigned int        dummyintegerarr1[DC__NUM_DPP__MAX];
 	unsigned int        dummyintegerarr2[DC__NUM_DPP__MAX];
 	unsigned int        dummyintegerarr3[DC__NUM_DPP__MAX];
@@ -639,9 +665,9 @@ struct vba_vars_st {
 	bool           SingleDPPViewportSizeSupportPerPlane[DC__NUM_DPP__MAX];
 	double         PlaneRequiredDISPCLKWithODMCombine2To1;
 	double         PlaneRequiredDISPCLKWithODMCombine4To1;
-	unsigned int   TotalNumberOfSingleDPPPlanes[DC__VOLTAGE_STATES + 1][2];
+	unsigned int   TotalNumberOfSingleDPPPlanes[DC__VOLTAGE_STATES][2];
 	bool           LinkDSCEnable;
-	bool           ODMCombine4To1SupportCheckOK[DC__VOLTAGE_STATES + 1];
+	bool           ODMCombine4To1SupportCheckOK[DC__VOLTAGE_STATES];
 	enum odm_combine_mode ODMCombineEnableThisState[DC__NUM_DPP__MAX];
 	double   SwathWidthCThisState[DC__NUM_DPP__MAX];
 	bool           ViewportSizeSupportPerPlane[DC__NUM_DPP__MAX];
@@ -765,6 +791,7 @@ struct vba_vars_st {
 	double FinalDRAMClockChangeLatency;
 	double Tdmdl_vm[DC__NUM_DPP__MAX];
 	double Tdmdl[DC__NUM_DPP__MAX];
+	double TSetup[DC__NUM_DPP__MAX];
 	unsigned int ThisVStartup;
 	bool WritebackAllowDRAMClockChangeEndPosition[DC__NUM_DPP__MAX];
 	double DST_Y_PER_META_ROW_NOM_C[DC__NUM_DPP__MAX];
@@ -785,12 +812,12 @@ struct vba_vars_st {
 	unsigned int ImmediateFlipBytes[DC__NUM_DPP__MAX];
 	unsigned int LinesInDETC[DC__NUM_DPP__MAX];
 	unsigned int LinesInDETCRoundedDownToSwath[DC__NUM_DPP__MAX];
-	double UrgentLatencySupportUsPerState[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	double UrgentLatencySupportUsPerState[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	double UrgentLatencySupportUs[DC__NUM_DPP__MAX];
-	double FabricAndDRAMBandwidthPerState[DC__VOLTAGE_STATES + 1];
-	bool UrgentLatencySupport[DC__VOLTAGE_STATES + 1][2];
-	unsigned int SwathWidthYPerState[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	unsigned int SwathHeightYPerState[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
+	double FabricAndDRAMBandwidthPerState[DC__VOLTAGE_STATES];
+	bool UrgentLatencySupport[DC__VOLTAGE_STATES][2];
+	unsigned int SwathWidthYPerState[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int SwathHeightYPerState[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	double qual_row_bw[DC__NUM_DPP__MAX];
 	double prefetch_row_bw[DC__NUM_DPP__MAX];
 	double prefetch_vm_bw[DC__NUM_DPP__MAX];
@@ -838,7 +865,7 @@ struct vba_vars_st {
 	double DCCRateLuma[DC__NUM_DPP__MAX];
 	double DCCRateChroma[DC__NUM_DPP__MAX];
 
-	double PHYCLKD18PerState[DC__VOLTAGE_STATES + 1];
+	double PHYCLKD18PerState[DC__VOLTAGE_STATES];
 
 	bool WritebackSupportInterleaveAndUsingWholeBufferForASingleStream;
 	bool NumberOfHDMIFRLSupport;
@@ -847,7 +874,7 @@ struct vba_vars_st {
 	int    AudioSampleLayout[DC__NUM_DPP__MAX];
 
 	int PercentMarginOverMinimumRequiredDCFCLK;
-	bool DynamicMetadataSupported[DC__VOLTAGE_STATES + 1][2];
+	bool DynamicMetadataSupported[DC__VOLTAGE_STATES][2];
 	enum immediate_flip_requirement ImmediateFlipRequirement;
 	double DETBufferSizeYThisState[DC__NUM_DPP__MAX];
 	double DETBufferSizeCThisState[DC__NUM_DPP__MAX];
@@ -855,26 +882,26 @@ struct vba_vars_st {
 	bool NoUrgentLatencyHidingPre[DC__NUM_DPP__MAX];
 	int swath_width_luma_ub_this_state[DC__NUM_DPP__MAX];
 	int swath_width_chroma_ub_this_state[DC__NUM_DPP__MAX];
-	double UrgLatency[DC__VOLTAGE_STATES + 1];
-	double VActiveCursorBandwidth[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double VActivePixelBandwidth[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	bool NoTimeForPrefetch[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	bool NoTimeForDynamicMetadata[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double dpte_row_bandwidth[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double meta_row_bandwidth[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double DETBufferSizeYAllStates[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double DETBufferSizeCAllStates[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	int swath_width_luma_ub_all_states[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	int swath_width_chroma_ub_all_states[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	bool NotUrgentLatencyHiding[DC__VOLTAGE_STATES + 1][2];
-	unsigned int SwathHeightYAllStates[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	unsigned int SwathHeightCAllStates[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	unsigned int SwathWidthYAllStates[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	unsigned int SwathWidthCAllStates[DC__VOLTAGE_STATES + 1][2][DC__NUM_DPP__MAX];
-	double TotalDPTERowBandwidth[DC__VOLTAGE_STATES + 1][2];
-	double TotalMetaRowBandwidth[DC__VOLTAGE_STATES + 1][2];
-	double TotalVActiveCursorBandwidth[DC__VOLTAGE_STATES + 1][2];
-	double TotalVActivePixelBandwidth[DC__VOLTAGE_STATES + 1][2];
+	double UrgLatency[DC__VOLTAGE_STATES];
+	double VActiveCursorBandwidth[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double VActivePixelBandwidth[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool NoTimeForPrefetch[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool NoTimeForDynamicMetadata[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double dpte_row_bandwidth[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double meta_row_bandwidth[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double DETBufferSizeYAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double DETBufferSizeCAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	int swath_width_luma_ub_all_states[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	int swath_width_chroma_ub_all_states[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool NotUrgentLatencyHiding[DC__VOLTAGE_STATES][2];
+	unsigned int SwathHeightYAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int SwathHeightCAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int SwathWidthYAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int SwathWidthCAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	double TotalDPTERowBandwidth[DC__VOLTAGE_STATES][2];
+	double TotalMetaRowBandwidth[DC__VOLTAGE_STATES][2];
+	double TotalVActiveCursorBandwidth[DC__VOLTAGE_STATES][2];
+	double TotalVActivePixelBandwidth[DC__VOLTAGE_STATES][2];
 	double WritebackDelayTime[DC__NUM_DPP__MAX];
 	unsigned int DCCYIndependentBlock[DC__NUM_DPP__MAX];
 	unsigned int DCCCIndependentBlock[DC__NUM_DPP__MAX];
@@ -898,6 +925,8 @@ struct vba_vars_st {
 	enum odm_combine_policy ODMCombinePolicy;
 	bool UseMinimumRequiredDCFCLK;
 	bool AllowDramClockChangeOneDisplayVactive;
+	bool SynchronizeTimingsIfSingleRefreshRate;
+
 };
 
 bool CalculateMinAndMaxPrefetchMode(
-- 
2.26.2


* [PATCH 15/27] drm/amd/display: DP link layer test 4.2.1.1 fix due to specs update
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (13 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 14/27] drm/amd/display: update dml interfaces and variables Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 16/27] drm/amd/display: vbios data table packing Rodrigo Siqueira
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Wenjing Liu,
	Aurabindo.Pillai, Jun Lei, Bhawanpreet.Lakha

From: Wenjing Liu <wenjing.liu@amd.com>

[why]
The DP link layer CTS specs were updated, changing the test parameters
for test 4.2.1.1.
Previously the source was required to delay 400us when there is no
reply on aux. With the Errata5 specs update, the source is required to
delay 3.2ms (based on the LTTPR aux timeout).
This causes our test to fail after updating to the latest test
equipment firmware.

[how]
Allow the LTTPR 3.2ms aux timeout delay by default, and only set the
timeout to 400us if LTTPR is not present.
Previously this piece of logic was intertwined with LTTPR support.
Now we default to the 3.2ms aux timeout even if LTTPR support is not
enabled by the driver.
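
For illustration, here is a minimal standalone sketch of the timeout
selection this change aims for. The struct and helper are simplified
stand-ins (assumptions), not DC code; only the constants and the
decision mirror the patch below:

#include <stdbool.h>
#include <stdio.h>

#define LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD 3200 /* us */
#define LINK_AUX_DEFAULT_TIMEOUT_PERIOD        552 /* us */

/* Simplified stand-in for the link state (assumption, for illustration). */
struct sketch_link {
	bool lttpr_present;	/* result of reading the LTTPR capability block */
};

/* Pick the aux timeout the source should program, per the new policy. */
static unsigned int pick_aux_timeout(const struct sketch_link *link)
{
	/* Default to the LTTPR timeout first, per CTS 4.2.1.1 Errata5. */
	unsigned int timeout_us = LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD;

	/* Fall back to the short timeout only when no LTTPR is detected. */
	if (!link->lttpr_present)
		timeout_us = LINK_AUX_DEFAULT_TIMEOUT_PERIOD;

	return timeout_us;
}

int main(void)
{
	struct sketch_link with_lttpr = { .lttpr_present = true };
	struct sketch_link without_lttpr = { .lttpr_present = false };

	printf("with LTTPR:    %u us\n", pick_aux_timeout(&with_lttpr));
	printf("without LTTPR: %u us\n", pick_aux_timeout(&without_lttpr));
	return 0;
}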

Signed-off-by: Wenjing Liu <wenjing.liu@amd.com>
Reviewed-by: Jun Lei <Jun.Lei@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |  3 +-
 .../gpu/drm/amd/display/dc/core/dc_link_ddc.c | 13 ++--
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 59 +++++++++----------
 .../drm/amd/display/dc/core/dc_link_hwss.c    |  2 +-
 drivers/gpu/drm/amd/display/dc/dc.h           |  2 +-
 drivers/gpu/drm/amd/display/dc/dc_link.h      |  1 +
 .../gpu/drm/amd/display/dc/inc/dc_link_ddc.h  |  2 +-
 .../gpu/drm/amd/display/dc/inc/dc_link_dp.h   |  2 +-
 8 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index c08de6823db4..e920d046f026 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -691,10 +691,9 @@ static bool detect_dp(struct dc_link *link,
 	if (sink_caps->transaction_type == DDC_TRANSACTION_TYPE_I2C_OVER_AUX) {
 		sink_caps->signal = SIGNAL_TYPE_DISPLAY_PORT;
 
-		dpcd_set_source_specific_data(link);
-
 		if (!detect_dp_sink_caps(link))
 			return false;
+		dpcd_set_source_specific_data(link);
 
 		if (is_mst_supported(link)) {
 			sink_caps->signal = SIGNAL_TYPE_DISPLAY_PORT_MST;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
index aefd29a440b5..242ed5976cdb 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
@@ -648,16 +648,17 @@ bool dc_link_aux_transfer_with_retries(struct ddc_service *ddc,
 }
 
 
-uint32_t dc_link_aux_configure_timeout(struct ddc_service *ddc,
+bool dc_link_aux_try_to_configure_timeout(struct ddc_service *ddc,
 		uint32_t timeout)
 {
-	uint32_t prev_timeout = 0;
+	bool result = false;
 	struct ddc *ddc_pin = ddc->ddc_pin;
 
-	if (ddc->ctx->dc->res_pool->engines[ddc_pin->pin_data->en]->funcs->configure_timeout)
-		prev_timeout =
-				ddc->ctx->dc->res_pool->engines[ddc_pin->pin_data->en]->funcs->configure_timeout(ddc, timeout);
-	return prev_timeout;
+	if (ddc->ctx->dc->res_pool->engines[ddc_pin->pin_data->en]->funcs->configure_timeout) {
+		ddc->ctx->dc->res_pool->engines[ddc_pin->pin_data->en]->funcs->configure_timeout(ddc, timeout);
+		result = true;
+	}
+	return result;
 }
 
 /*test only function*/
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 6db1f16957ac..b578687f2b38 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -245,7 +245,7 @@ static uint8_t dc_dp_initialize_scrambling_data_symbols(
 
 static inline bool is_repeater(struct dc_link *link, uint32_t offset)
 {
-	return (!link->is_lttpr_mode_transparent && offset != 0);
+	return (link->lttpr_non_transparent_mode && offset != 0);
 }
 
 static void dpcd_set_lt_pattern_and_lane_settings(
@@ -1038,7 +1038,7 @@ static enum link_training_result perform_clock_recovery_sequence(
 		/* 3. wait receiver to lock-on*/
 		wait_time_microsec = lt_settings->cr_pattern_time;
 
-		if (!link->is_lttpr_mode_transparent)
+		if (link->lttpr_non_transparent_mode)
 			wait_time_microsec = TRAINING_AUX_RD_INTERVAL;
 
 		wait_for_training_aux_rd_interval(
@@ -1268,7 +1268,7 @@ static void configure_lttpr_mode(struct dc_link *link)
 		link->dpcd_caps.lttpr_caps.mode = repeater_mode;
 	}
 
-	if (!link->is_lttpr_mode_transparent) {
+	if (link->lttpr_non_transparent_mode) {
 
 		DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Non Transparent Mode\n", __func__);
 
@@ -1473,7 +1473,7 @@ enum link_training_result dc_link_dp_perform_link_training(
 			&lt_settings);
 
 	/* Configure lttpr mode */
-	if (!link->is_lttpr_mode_transparent)
+	if (link->lttpr_non_transparent_mode)
 		configure_lttpr_mode(link);
 
 	if (link->ctx->dc->work_arounds.lt_early_cr_pattern)
@@ -1489,7 +1489,7 @@ enum link_training_result dc_link_dp_perform_link_training(
 
 	dp_set_fec_ready(link, fec_enable);
 
-	if (!link->is_lttpr_mode_transparent) {
+	if (link->lttpr_non_transparent_mode) {
 
 		/* 2. perform link training (set link training done
 		 *  to false is done as well)
@@ -1756,7 +1756,7 @@ static struct dc_link_settings get_max_link_cap(struct dc_link *link)
 	 * account for lttpr repeaters cap
 	 * notes: repeaters do not snoop in the DPRX Capabilities addresses (3.6.3).
 	 */
-	if (!link->is_lttpr_mode_transparent) {
+	if (link->lttpr_non_transparent_mode) {
 		if (link->dpcd_caps.lttpr_caps.max_lane_count < max_link_cap.lane_count)
 			max_link_cap.lane_count = link->dpcd_caps.lttpr_caps.max_lane_count;
 
@@ -1914,7 +1914,7 @@ bool dp_verify_link_cap(
 	max_link_cap = get_max_link_cap(link);
 
 	/* Grant extended timeout request */
-	if (!link->is_lttpr_mode_transparent && link->dpcd_caps.lttpr_caps.max_ext_timeout > 0) {
+	if (link->lttpr_non_transparent_mode && link->dpcd_caps.lttpr_caps.max_ext_timeout > 0) {
 		uint8_t grant = link->dpcd_caps.lttpr_caps.max_ext_timeout & 0x80;
 
 		core_link_write_dpcd(link, DP_PHY_REPEATER_EXTENDED_WAIT_TIMEOUT, &grant, sizeof(grant));
@@ -3255,17 +3255,7 @@ static bool retrieve_link_cap(struct dc_link *link)
 	uint32_t read_dpcd_retry_cnt = 3;
 	int i;
 	struct dp_sink_hw_fw_revision dp_hw_fw_revision;
-
-	/* Set default timeout to 3.2ms and read LTTPR capabilities */
-	bool ext_timeout_support = link->dc->caps.extended_aux_timeout_support &&
-			!link->dc->config.disable_extended_timeout_support;
-
-	link->is_lttpr_mode_transparent = true;
-
-	if (ext_timeout_support) {
-		dc_link_aux_configure_timeout(link->ddc,
-					LINK_AUX_DEFAULT_EXTENDED_TIMEOUT_PERIOD);
-	}
+	bool is_lttpr_present = false;
 
 	memset(dpcd_data, '\0', sizeof(dpcd_data));
 	memset(lttpr_dpcd_data, '\0', sizeof(lttpr_dpcd_data));
@@ -3274,6 +3264,13 @@ static bool retrieve_link_cap(struct dc_link *link)
 	memset(&edp_config_cap, '\0',
 		sizeof(union edp_configuration_cap));
 
+	/* if extended timeout is supported in hardware,
+	 * default to LTTPR timeout (3.2ms) first as a W/A for DP link layer
+	 * CTS 4.2.1.1 regression introduced by CTS specs requirement update.
+	 */
+	dc_link_aux_try_to_configure_timeout(link->ddc,
+			LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD);
+
 	status = core_link_read_dpcd(link, DP_SET_POWER,
 				&dpcd_power_state, sizeof(dpcd_power_state));
 
@@ -3300,8 +3297,9 @@ static bool retrieve_link_cap(struct dc_link *link)
 		return false;
 	}
 
-	if (ext_timeout_support) {
-
+	if (link->dc->caps.extended_aux_timeout_support) {
+		/* By reading LTTPR capability, RX assumes that we will enable LTTPR extended aux timeout if LTTPR is present.
+		 * Therefore, only query LTTPR capability when LTTPR extended aux timeout is supported by hardware */
 		status = core_link_read_dpcd(
 				link,
 				DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV,
@@ -3332,20 +3330,21 @@ static bool retrieve_link_cap(struct dc_link *link)
 				lttpr_dpcd_data[DP_PHY_REPEATER_EXTENDED_WAIT_TIMEOUT -
 								DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
 
-		if (link->dpcd_caps.lttpr_caps.phy_repeater_cnt > 0 &&
+		is_lttpr_present = (link->dpcd_caps.lttpr_caps.phy_repeater_cnt > 0 &&
 				link->dpcd_caps.lttpr_caps.max_lane_count > 0 &&
 				link->dpcd_caps.lttpr_caps.max_lane_count <= 4 &&
-				link->dpcd_caps.lttpr_caps.revision.raw >= 0x14) {
-			link->is_lttpr_mode_transparent = false;
-		} else {
-			/*No lttpr reset timeout to its default value*/
-			link->is_lttpr_mode_transparent = true;
-			dc_link_aux_configure_timeout(link->ddc, LINK_AUX_DEFAULT_TIMEOUT_PERIOD);
-		}
-
-		CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
+				link->dpcd_caps.lttpr_caps.revision.raw >= 0x14);
+		if (is_lttpr_present)
+			CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
 	}
 
+	/* decide lttpr non transparent mode */
+	link->lttpr_non_transparent_mode = is_lttpr_present && link->dc->config.allow_lttpr_non_transparent_mode;
+
+	if (!is_lttpr_present)
+		dc_link_aux_try_to_configure_timeout(link->ddc, LINK_AUX_DEFAULT_TIMEOUT_PERIOD);
+
+
 	{
 		union training_aux_rd_interval aux_rd_interval;
 
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
index 6590f51caefa..6bbe4e775832 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
@@ -281,7 +281,7 @@ void dp_set_hw_lane_settings(
 {
 	struct link_encoder *encoder = link->link_enc;
 
-	if (!link->is_lttpr_mode_transparent && !is_immediate_downstream(link, offset))
+	if (link->lttpr_non_transparent_mode && !is_immediate_downstream(link, offset))
 		return;
 
 	/* call Encoder to set lane settings */
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index a4b30233aee3..391691c70805 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -274,7 +274,7 @@ struct dc_config {
 	bool edp_not_connected;
 	bool force_enum_edp;
 	bool forced_clocks;
-	bool disable_extended_timeout_support; // Used to disable extended timeout and lttpr feature as well
+	bool allow_lttpr_non_transparent_mode;
 	bool multi_mon_pp_mclk_switch;
 	bool disable_dmcu;
 	bool enable_4to1MPC;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_link.h b/drivers/gpu/drm/amd/display/dc/dc_link.h
index f63fc25aa6c5..5c60c2f9779a 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_link.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_link.h
@@ -101,6 +101,7 @@ struct dc_link {
 	bool aux_access_disabled;
 	bool sync_lt_in_progress;
 	bool is_lttpr_mode_transparent;
+	bool lttpr_non_transparent_mode;
 
 	/* caps is the same as reported_link_cap. link_traing use
 	 * reported_link_cap. Will clean up.  TODO
diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h
index de2d160114db..b324e13f3f78 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_ddc.h
@@ -105,7 +105,7 @@ int dc_link_aux_transfer_raw(struct ddc_service *ddc,
 bool dc_link_aux_transfer_with_retries(struct ddc_service *ddc,
 		struct aux_payload *payload);
 
-uint32_t dc_link_aux_configure_timeout(struct ddc_service *ddc,
+bool dc_link_aux_try_to_configure_timeout(struct ddc_service *ddc,
 		uint32_t timeout);
 
 void dal_ddc_service_write_scdc_data(
diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
index e94e5fbf2aa2..b970a32177af 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
@@ -28,7 +28,7 @@
 
 #define LINK_TRAINING_ATTEMPTS 4
 #define LINK_TRAINING_RETRY_DELAY 50 /* ms */
-#define LINK_AUX_DEFAULT_EXTENDED_TIMEOUT_PERIOD 3200 /*us*/
+#define LINK_AUX_DEFAULT_LTTPR_TIMEOUT_PERIOD 3200 /*us*/
 #define LINK_AUX_DEFAULT_TIMEOUT_PERIOD 552 /*us*/
 
 struct dc_link;
-- 
2.26.2


* [PATCH 16/27] drm/amd/display: vbios data table packing
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (14 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 15/27] drm/amd/display: DP link layer test 4.2.1.1 fix due to specs update Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 17/27] drm/amd/display: Defer cursor lock until after VUPDATE Rodrigo Siqueira
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Jake Wang, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Tony Cheng, Bhawanpreet.Lakha,
	Nicholas Kazlauskas

From: Jake Wang <haonan.wang2@amd.com>

[WHY]
Currently we're copying the entire bios image into vbios. Loading time
for FW with the entire bios (54272 bytes) is 105138us. By copying only
the sections of the bios we're using (4436 bytes), loading time drops
to 104326us, which saves us 812us.

[HOW]
The ROM header, master data table, and all data tables will be packed
in a contiguous manner. The offsets for the data tables are remapped to
their newly packed locations.
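
As a rough illustration of the offset remapping, the standalone sketch
below computes the packed layout. The table sizes are made-up
placeholders (assumptions); only the layout arithmetic mirrors the
patch:

#include <stdint.h>
#include <stdio.h>

/* Placeholder values for illustration only; the real sizes come from
 * the table headers parsed out of the bios image. */
#define OFFSET_TO_ATOM_ROM_HEADER_POINTER 0x48
#define ROM_HEADER_SIZE                   64
#define MASTER_DATA_TBL_SIZE              84

int main(void)
{
	/* The packed ROM header lands right after the 2-byte pointer slot. */
	uint16_t packed_rom_header_offset =
		OFFSET_TO_ATOM_ROM_HEADER_POINTER + sizeof(uint16_t);

	/* The master data table follows the packed ROM header... */
	uint16_t packed_masterdatatable_offset =
		packed_rom_header_offset + ROM_HEADER_SIZE;

	/* ...and the individual data tables are appended after that, each
	 * one advancing the running offset by its structuresize. */
	uint16_t packed_data_tbl_offset =
		packed_masterdatatable_offset + MASTER_DATA_TBL_SIZE;

	printf("packed rom header at       0x%04x\n", packed_rom_header_offset);
	printf("packed master table at     0x%04x\n", packed_masterdatatable_offset);
	printf("first packed data table at 0x%04x\n", packed_data_tbl_offset);
	return 0;
}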

Signed-off-by: Jake Wang <haonan.wang2@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Nicholas Kazlauskas <Nicholas.Kazlauskas@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../drm/amd/display/dc/bios/bios_parser2.c    | 98 +++++++++++++++++++
 .../gpu/drm/amd/display/dc/dc_bios_types.h    |  4 +-
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index 37fa7b48250e..7fb62780e8cf 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -1877,6 +1877,103 @@ static enum bp_result bios_get_board_layout_info(
 	return BP_RESULT_OK;
 }
 
+static uint16_t bios_parser_pack_data_tables(
+	struct dc_bios *dcb,
+	void *dst)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_rom_header_v2_2 *rom_header = NULL;
+	struct atom_rom_header_v2_2 *packed_rom_header = NULL;
+	struct atom_common_table_header *data_tbl_header = NULL;
+	struct atom_master_list_of_data_tables_v2_1 *data_tbl_list = NULL;
+	struct atom_master_data_table_v2_1 *packed_master_data_tbl = NULL;
+	struct atom_data_revision tbl_rev = {0};
+	uint16_t *rom_header_offset = NULL;
+	const uint8_t *bios = bp->base.bios;
+	uint8_t *bios_dst = (uint8_t *)dst;
+	uint16_t packed_rom_header_offset;
+	uint16_t packed_masterdatatable_offset;
+	uint16_t packed_data_tbl_offset;
+	uint16_t data_tbl_offset;
+	unsigned int i;
+
+	rom_header_offset =
+		GET_IMAGE(uint16_t, OFFSET_TO_ATOM_ROM_HEADER_POINTER);
+
+	if (!rom_header_offset)
+		return 0;
+
+	rom_header = GET_IMAGE(struct atom_rom_header_v2_2, *rom_header_offset);
+
+	if (!rom_header)
+		return 0;
+
+	get_atom_data_table_revision(&rom_header->table_header, &tbl_rev);
+	if (!(tbl_rev.major >= 2 && tbl_rev.minor >= 2))
+		return 0;
+
+	get_atom_data_table_revision(&bp->master_data_tbl->table_header, &tbl_rev);
+	if (!(tbl_rev.major >= 2 && tbl_rev.minor >= 1))
+		return 0;
+
+	packed_rom_header_offset =
+		OFFSET_TO_ATOM_ROM_HEADER_POINTER + sizeof(*rom_header_offset);
+
+	packed_masterdatatable_offset =
+		packed_rom_header_offset + rom_header->table_header.structuresize;
+
+	packed_data_tbl_offset =
+		packed_masterdatatable_offset +
+		bp->master_data_tbl->table_header.structuresize;
+
+	packed_rom_header =
+		(struct atom_rom_header_v2_2 *)(bios_dst + packed_rom_header_offset);
+
+	packed_master_data_tbl =
+		(struct atom_master_data_table_v2_1 *)(bios_dst +
+		packed_masterdatatable_offset);
+
+	memcpy(bios_dst, bios, OFFSET_TO_ATOM_ROM_HEADER_POINTER);
+
+	*((uint16_t *)(bios_dst + OFFSET_TO_ATOM_ROM_HEADER_POINTER)) =
+		packed_rom_header_offset;
+
+	memcpy(bios_dst + packed_rom_header_offset, rom_header,
+		rom_header->table_header.structuresize);
+
+	packed_rom_header->masterdatatable_offset = packed_masterdatatable_offset;
+
+	memcpy(&packed_master_data_tbl->table_header,
+		&bp->master_data_tbl->table_header,
+		sizeof(bp->master_data_tbl->table_header));
+
+	data_tbl_list = &bp->master_data_tbl->listOfdatatables;
+
+	/* Each data table offset in data table list is 2 bytes,
+	 * we can use that to iterate through listOfdatatables
+	 * without knowing the name of each member.
+	 */
+	for (i = 0; i < sizeof(*data_tbl_list)/sizeof(uint16_t); i++) {
+		data_tbl_offset = *((uint16_t *)data_tbl_list + i);
+
+		if (data_tbl_offset) {
+			data_tbl_header =
+				(struct atom_common_table_header *)(bios + data_tbl_offset);
+
+			memcpy(bios_dst + packed_data_tbl_offset, data_tbl_header,
+				data_tbl_header->structuresize);
+
+			*((uint16_t *)&packed_master_data_tbl->listOfdatatables + i) =
+				packed_data_tbl_offset;
+
+			packed_data_tbl_offset += data_tbl_header->structuresize;
+		} else {
+			*((uint16_t *)&packed_master_data_tbl->listOfdatatables + i) = 0;
+		}
+	}
+	return packed_data_tbl_offset;
+}
+
 static const struct dc_vbios_funcs vbios_funcs = {
 	.get_connectors_number = bios_parser_get_connectors_number,
 
@@ -1925,6 +2022,7 @@ static const struct dc_vbios_funcs vbios_funcs = {
 	.bios_parser_destroy = firmware_parser_destroy,
 
 	.get_board_layout_info = bios_get_board_layout_info,
+	.pack_data_tables = bios_parser_pack_data_tables,
 };
 
 static bool bios_parser2_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/dc_bios_types.h b/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
index b1dd0d60d98e..441768aa53ff 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
@@ -89,7 +89,6 @@ struct dc_vbios_funcs {
 	bool (*is_device_id_supported)(
 		struct dc_bios *bios,
 		struct device_id id);
-
 	/* COMMANDS */
 
 	enum bp_result (*encoder_control)(
@@ -131,6 +130,9 @@ struct dc_vbios_funcs {
 	enum bp_result (*get_board_layout_info)(
 		struct dc_bios *dcb,
 		struct board_layout_info *board_layout_info);
+	uint16_t (*pack_data_tables)(
+		struct dc_bios *dcb,
+		void *dst);
 };
 
 struct bios_registers {
-- 
2.26.2


* [PATCH 17/27] drm/amd/display: Defer cursor lock until after VUPDATE
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (15 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 16/27] drm/amd/display: vbios data table packing Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 18/27] drm/amd/display: Avoid pipe split when plane is too small Rodrigo Siqueira
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
We dropped the delay after changing the cursor functions from locking
the entire pipe to locking just the CURSOR registers to fix page flip
stuttering - this introduced cursor stuttering instead, along with an
underflow issue.

The cursor update can be delayed indefinitely if the cursor update
repeatedly happens right around VUPDATE.

The underflow issue can happen if we do a viewport update on a pipe in
the same frame where a cursor update happens around VUPDATE - the old
cursor registers are retained, which can describe an invalid position.

This can cause a pipe hang and indefinite underflow.

[How]
The complex, ideal solution to the problem would be a software
triple buffering mechanism from the DM layer to program only one cursor
update per frame just before VUPDATE.

The simple workaround until we have that infrastructure in place is
this change - bring back the delay until VUPDATE before locking, but
with some corrections to the calculations.

This didn't work for all timings before because the calculation for
VUPDATE was wrong - it was using the offset from VSTARTUP instead and
didn't correctly handle the case where VUPDATE could be in the back
porch.

Add a new hardware sequencer function to use the existing helper to
calculate the real VUPDATE start and VUPDATE end - VUPDATE can last
multiple lines after all.

Change the udelay to incorporate the width of VUPDATE as well.
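
For illustration, a worked example of the delay math with assumed
timing numbers (roughly a 1080p60 CEA mode); only the expressions
mirror the patch below:

#include <stdio.h>

int main(void)
{
	/* Assumed timing, roughly 1080p60: 2200 total pixels per line,
	 * 148.5 MHz pixel clock stored in units of 100 Hz. */
	unsigned int h_total = 2200;
	unsigned int pix_clk_100hz = 1485000;

	/* Assumed positions: 3 lines away from VUPDATE, which spans 2 lines. */
	unsigned int lines_to_vupdate = 3;
	unsigned int vupdate_lines = 2;	/* vupdate_end - vupdate_start */

	/* Same expression as the patch: microseconds per scanline. */
	unsigned int us_per_line = h_total * 10000u / pix_clk_100hz;	/* 14 */
	unsigned int us_to_vupdate = lines_to_vupdate * us_per_line;	/* 42 */

	if (us_to_vupdate <= 70) {
		/* Too close to VUPDATE: stall until it has passed. */
		unsigned int us_vupdate = (vupdate_lines + 1) * us_per_line;

		printf("udelay(%u us)\n", us_to_vupdate + us_vupdate);	/* 84 */
	} else {
		printf("far enough from VUPDATE, no stall needed\n");
	}
	return 0;
}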

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 69 ++++++++++++++++++-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.h |  5 ++
 .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c |  1 +
 .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c |  1 +
 .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c |  1 +
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |  5 ++
 6 files changed, 81 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 27cae98936ea..0512a60c43b2 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1682,12 +1682,79 @@ void dcn10_pipe_control_lock(
 		hws->funcs.verify_allow_pstate_change_high(dc);
 }
 
+/**
+ * delay_cursor_until_vupdate() - Delay cursor update if too close to VUPDATE.
+ *
+ * Software keepout workaround to prevent cursor update locking from stalling
+ * out cursor updates indefinitely or from old values from being retained in
+ * the case where the viewport changes in the same frame as the cursor.
+ *
+ * The idea is to calculate the remaining time from VPOS to VUPDATE. If it's
+ * too close to VUPDATE, then stall out until VUPDATE finishes.
+ *
+ * TODO: Optimize cursor programming to be once per frame before VUPDATE
+ *       to avoid the need for this workaround.
+ */
+static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
+{
+	struct dc_stream_state *stream = pipe_ctx->stream;
+	struct crtc_position position;
+	uint32_t vupdate_start, vupdate_end;
+	unsigned int lines_to_vupdate, us_to_vupdate, vpos;
+	unsigned int us_per_line, us_vupdate;
+
+	if (!dc->hwss.calc_vupdate_position || !dc->hwss.get_position)
+		return;
+
+	if (!pipe_ctx->stream_res.stream_enc || !pipe_ctx->stream_res.tg)
+		return;
+
+	dc->hwss.calc_vupdate_position(dc, pipe_ctx, &vupdate_start,
+				       &vupdate_end);
+
+	dc->hwss.get_position(&pipe_ctx, 1, &position);
+	vpos = position.vertical_count;
+
+	/* Avoid wraparound calculation issues */
+	vupdate_start += stream->timing.v_total;
+	vupdate_end += stream->timing.v_total;
+	vpos += stream->timing.v_total;
+
+	if (vpos <= vupdate_start) {
+		/* VPOS is in VACTIVE or back porch. */
+		lines_to_vupdate = vupdate_start - vpos;
+	} else if (vpos > vupdate_end) {
+		/* VPOS is in the front porch. */
+		return;
+	} else {
+		/* VPOS is in VUPDATE. */
+		lines_to_vupdate = 0;
+	}
+
+	/* Calculate time until VUPDATE in microseconds. */
+	us_per_line =
+		stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz;
+	us_to_vupdate = lines_to_vupdate * us_per_line;
+
+	/* 70 us is a conservative estimate of cursor update time*/
+	if (us_to_vupdate > 70)
+		return;
+
+	/* Stall out until the cursor update completes. */
+	us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line;
+	udelay(us_to_vupdate + us_vupdate);
+}
+
 void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock)
 {
 	/* cursor lock is per MPCC tree, so only need to lock one pipe per stream */
 	if (!pipe || pipe->top_pipe)
 		return;
 
+	/* Prevent cursor lock from stalling out cursor updates. */
+	if (lock)
+		delay_cursor_until_vupdate(dc, pipe);
+
 	dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc,
 			pipe->stream_res.opp->inst, lock);
 }
@@ -3300,7 +3367,7 @@ int dcn10_get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx)
 	return vertical_line_start;
 }
 
-static void dcn10_calc_vupdate_position(
+void dcn10_calc_vupdate_position(
 		struct dc *dc,
 		struct pipe_ctx *pipe_ctx,
 		uint32_t *start_line,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
index af51424315d5..42b6e016d71e 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
@@ -34,6 +34,11 @@ struct dc;
 void dcn10_hw_sequencer_construct(struct dc *dc);
 
 int dcn10_get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx);
+void dcn10_calc_vupdate_position(
+		struct dc *dc,
+		struct pipe_ctx *pipe_ctx,
+		uint32_t *start_line,
+		uint32_t *end_line);
 void dcn10_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx);
 enum dc_status dcn10_enable_stream_timing(
 		struct pipe_ctx *pipe_ctx,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
index 9f8c89b6a763..f6a790c49321 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
@@ -72,6 +72,7 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
 	.set_clock = dcn10_set_clock,
 	.get_clock = dcn10_get_clock,
 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+	.calc_vupdate_position = dcn10_calc_vupdate_position,
 	.set_backlight_level = dce110_set_backlight_level,
 	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
 	.set_pipe = dce110_set_pipe,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
index e20760fa11ff..bb9e9bec2f28 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
@@ -83,6 +83,7 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
 	.init_vm_ctx = dcn20_init_vm_ctx,
 	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+	.calc_vupdate_position = dcn10_calc_vupdate_position,
 	.set_backlight_level = dce110_set_backlight_level,
 	.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
 	.set_pipe = dce110_set_pipe,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
index 9a2d1f755839..8575de1a8ad2 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
@@ -86,6 +86,7 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
 	.optimize_pwr_state = dcn21_optimize_pwr_state,
 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+	.calc_vupdate_position = dcn10_calc_vupdate_position,
 	.power_down = dce110_power_down,
 	.set_backlight_level = dcn21_set_backlight_level,
 	.set_abm_immediate_disable = dcn21_set_abm_immediate_disable,
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
index 2e8f3fecc6a3..4f9216c96e59 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
@@ -96,6 +96,11 @@ struct hw_sequencer_funcs {
 	void (*get_position)(struct pipe_ctx **pipe_ctx, int num_pipes,
 			struct crtc_position *position);
 	int (*get_vupdate_offset_from_vsync)(struct pipe_ctx *pipe_ctx);
+	void (*calc_vupdate_position)(
+			struct dc *dc,
+			struct pipe_ctx *pipe_ctx,
+			uint32_t *start_line,
+			uint32_t *end_line);
 	void (*enable_per_frame_crtc_position_reset)(struct dc *dc,
 			int group_size, struct pipe_ctx *grouped_pipes[]);
 	void (*enable_timing_synchronization)(struct dc *dc,
-- 
2.26.2


* [PATCH 18/27] drm/amd/display: Avoid pipe split when plane is too small
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (16 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 17/27] drm/amd/display: Defer cursor lock until after VUPDATE Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 19/27] drm/amd/display: correct rn NUM_VMID Rodrigo Siqueira
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
The minimum plane size we can support in DML is 16x16. If we try to pass
a 16x16 plane with dynamic pipe split then validation will fail since it
tries to split it into two pipes, each 8x8.

Some userspace doesn't check whether the commit fails, and because the
commit fails the old state is retained, resulting in corruption.

[How]
Add a workaround to avoid pipe split if any plane is 16x16 or smaller.
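
A minimal sketch of the size check being added; the rect struct is a
simplified stand-in (assumption) for the plane state's src/dst rects:

#include <stdbool.h>
#include <stdio.h>

/* Simplified plane rectangle, standing in for the dc_plane_state rects. */
struct rect {
	int width;
	int height;
};

/* Mirrors the condition added by the patch: any dimension at or below
 * 16 pixels makes the plane too small to split safely. */
static bool plane_too_small_to_split(const struct rect *src, const struct rect *dst)
{
	return dst->width <= 16 || dst->height <= 16 ||
	       src->width <= 16 || src->height <= 16;
}

int main(void)
{
	struct rect cursor_like = { 16, 16 };
	struct rect full_screen = { 1920, 1080 };

	printf("16x16 plane: avoid split = %d\n",
	       plane_too_small_to_split(&cursor_like, &cursor_like));	/* 1 */
	printf("1080p plane: avoid split = %d\n",
	       plane_too_small_to_split(&full_screen, &full_screen));	/* 0 */
	return 0;
}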

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../gpu/drm/amd/display/dc/calcs/dcn_calcs.c  | 21 ++++++++++++++++++-
 .../drm/amd/display/dc/dcn20/dcn20_resource.c | 14 ++++++++++++-
 2 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
index 3960a8db94cb..1e5a92b192a1 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
@@ -690,6 +690,26 @@ static void hack_bounding_box(struct dcn_bw_internal_vars *v,
 		struct dc_debug_options *dbg,
 		struct dc_state *context)
 {
+	int i;
+
+	for (i = 0; i < MAX_PIPES; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		/**
+		 * Workaround for avoiding pipe-split in cases where we'd split
+		 * planes that are too small, resulting in splits that aren't
+		 * valid for the scaler.
+		 */
+		if (pipe->plane_state &&
+		    (pipe->plane_state->dst_rect.width <= 16 ||
+		     pipe->plane_state->dst_rect.height <= 16 ||
+		     pipe->plane_state->src_rect.width <= 16 ||
+		     pipe->plane_state->src_rect.height <= 16)) {
+			hack_disable_optional_pipe_split(v);
+			return;
+		}
+	}
+
 	if (dbg->pipe_split_policy == MPC_SPLIT_AVOID)
 		hack_disable_optional_pipe_split(v);
 
@@ -702,7 +722,6 @@ static void hack_bounding_box(struct dcn_bw_internal_vars *v,
 		hack_force_pipe_split(v, context->streams[0]->timing.pix_clk_100hz);
 }
 
-
 unsigned int get_highest_allowed_voltage_level(uint32_t hw_internal_rev, uint32_t pci_revision_id)
 {
 	/* for low power RV2 variants, the highest voltage level we want is 0 */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index d00de61ac720..99925079a55d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -2606,10 +2606,22 @@ int dcn20_validate_apply_pipe_split_flags(
 	} else if (dc->debug.force_single_disp_pipe_split)
 			force_split = true;
 
-	/* TODO: fix dc bugs and remove this split threshold thing */
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
 
+		/**
+		 * Workaround for avoiding pipe-split in cases where we'd split
+		 * planes that are too small, resulting in splits that aren't
+		 * valid for the scaler.
+		 */
+		if (pipe->plane_state &&
+		    (pipe->plane_state->dst_rect.width <= 16 ||
+		     pipe->plane_state->dst_rect.height <= 16 ||
+		     pipe->plane_state->src_rect.width <= 16 ||
+		     pipe->plane_state->src_rect.height <= 16))
+			avoid_split = true;
+
+		/* TODO: fix dc bugs and remove this split threshold thing */
 		if (pipe->stream && !pipe->prev_odm_pipe &&
 				(!pipe->top_pipe || pipe->top_pipe->plane_state != pipe->plane_state))
 			++plane_count;
-- 
2.26.2


* [PATCH 19/27] drm/amd/display: correct rn NUM_VMID
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (17 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 18/27] drm/amd/display: Avoid pipe split when plane is too small Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 20/27] drm/amd/display: Fix incorrectly pruned modes with deep color Rodrigo Siqueira
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Dmytro Laktyushkin,
	Eric Bernstein, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

Save the correct num_vmid during resource creation and fix the RN gpuvm
level from 1 to 16 VMID entries.

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Eric Bernstein <Eric.Bernstein@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.h   | 1 +
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c   | 7 +------
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 3 ++-
 drivers/gpu/drm/amd/display/modules/vmid/vmid.c       | 7 +++++--
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.h
index 501532dd523a..c478213ba7ad 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubbub.h
@@ -80,6 +80,7 @@ struct dcn20_hubbub {
 	const struct dcn_hubbub_mask *masks;
 	unsigned int debug_test_index_pstate;
 	struct dcn_watermark_set watermarks;
+	int num_vmid;
 	struct dcn20_vmid vmid[16];
 	unsigned int detile_buf_size;
 };
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
index 5e2d14b897af..129f0b62f751 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubbub.c
@@ -49,11 +49,6 @@
 #define FN(reg_name, field_name) \
 	hubbub1->shifts->field_name, hubbub1->masks->field_name
 
-#ifdef NUM_VMID
-#undef NUM_VMID
-#endif
-#define NUM_VMID 16
-
 static uint32_t convert_and_clamp(
 	uint32_t wm_ns,
 	uint32_t refclk_mhz,
@@ -138,7 +133,7 @@ int hubbub21_init_dchub(struct hubbub *hubbub,
 
 	dcn21_dchvm_init(hubbub);
 
-	return NUM_VMID;
+	return hubbub1->num_vmid;
 }
 
 bool hubbub21_program_urgent_watermarks(
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index 419cdde624f5..f00a56835084 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -805,7 +805,7 @@ static const struct resource_caps res_cap_rn = {
 		.num_pll = 5,  // maybe 3 because the last two used for USB-c
 		.num_dwb = 1,
 		.num_ddc = 5,
-		.num_vmid = 1,
+		.num_vmid = 16,
 		.num_dsc = 3,
 };
 
@@ -1295,6 +1295,7 @@ static struct hubbub *dcn21_hubbub_create(struct dc_context *ctx)
 		vmid->shifts = &vmid_shifts;
 		vmid->masks = &vmid_masks;
 	}
+	hubbub->num_vmid = res_cap_rn.num_vmid;
 
 	return &hubbub->base;
 }
diff --git a/drivers/gpu/drm/amd/display/modules/vmid/vmid.c b/drivers/gpu/drm/amd/display/modules/vmid/vmid.c
index 00f132f8ad55..61ee4be35d27 100644
--- a/drivers/gpu/drm/amd/display/modules/vmid/vmid.c
+++ b/drivers/gpu/drm/amd/display/modules/vmid/vmid.c
@@ -112,9 +112,12 @@ uint8_t mod_vmid_get_for_ptb(struct mod_vmid *mod_vmid, uint64_t ptb)
 			evict_vmids(core_vmid);
 
 		vmid = get_next_available_vmid(core_vmid);
-		add_ptb_to_table(core_vmid, vmid, ptb);
+		if (vmid != -1) {
+			add_ptb_to_table(core_vmid, vmid, ptb);
 
-		dc_setup_vm_context(core_vmid->dc, &va_config, vmid);
+			dc_setup_vm_context(core_vmid->dc, &va_config, vmid);
+		} else
+			ASSERT(0);
 	}
 
 	return vmid;
-- 
2.26.2

* [PATCH 20/27] drm/amd/display: Fix incorrectly pruned modes with deep color
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (18 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 19/27] drm/amd/display: correct rn NUM_VMID Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 21/27] drm/amd/display: Add DMUB firmware version helpers in DMUB service Rodrigo Siqueira
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Stylon Wang, Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira,
	Aurabindo.Pillai, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Stylon Wang <stylon.wang@amd.com>

[Why]
When "max bpc" is set to enable deep color, some modes are pruned from the
list because they fail validation at the maximum bpc. These modes should be
kept if they validate fine at a lower bpc.

[How]
- Retry mode validation with progressively lower bpc values.
- Do the same in atomic commit so the working bpc is applied, which is not
  necessarily the maximum bpc.
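
In other words, validation now walks down from the requested bpc in steps of
two until a stream validates or the floor of 6 bpc is reached. A minimal
sketch of that retry loop, with a hypothetical validate(bpc) predicate
standing in for the real create_stream_for_sink()/dc_validate_stream() path
shown in the hunks below:

/* Sketch only: step the requested depth down until validation passes. */
static int pick_working_bpc(int requested_bpc, int (*validate)(int))
{
	int bpc;

	for (bpc = requested_bpc; bpc >= 6; bpc -= 2)
		if (validate(bpc))
			return bpc;    /* first depth that passes validation */

	return -1;                     /* nothing validated; the mode is pruned */
}

For example, pick_working_bpc(16, validate) tries 16, 14, 12, ... and returns
the first depth that validates, or -1 so the mode gets pruned as before.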

Signed-off-by: Stylon Wang <stylon.wang@amd.com>
Reviewed-by: Nicholas Kazlauskas <Nicholas.Kazlauskas@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 102 +++++++++++-------
 1 file changed, 64 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index d2bb0d9839c9..2c3a443771ea 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3895,8 +3895,7 @@ static void update_stream_scaling_settings(const struct drm_display_mode *mode,
 
 static enum dc_color_depth
 convert_color_depth_from_display_info(const struct drm_connector *connector,
-				      const struct drm_connector_state *state,
-				      bool is_y420)
+				      bool is_y420, int requested_bpc)
 {
 	uint8_t bpc;
 
@@ -3916,10 +3915,7 @@ convert_color_depth_from_display_info(const struct drm_connector *connector,
 		bpc = bpc ? bpc : 8;
 	}
 
-	if (!state)
-		state = connector->state;
-
-	if (state) {
+	if (requested_bpc > 0) {
 		/*
 		 * Cap display bpc based on the user requested value.
 		 *
@@ -3928,7 +3924,7 @@ convert_color_depth_from_display_info(const struct drm_connector *connector,
 		 * or if this was called outside of atomic check, so it
 		 * can't be used directly.
 		 */
-		bpc = min(bpc, state->max_requested_bpc);
+		bpc = min_t(u8, bpc, requested_bpc);
 
 		/* Round down to the nearest even number. */
 		bpc = bpc - (bpc & 1);
@@ -4050,7 +4046,8 @@ static void fill_stream_properties_from_drm_display_mode(
 	const struct drm_display_mode *mode_in,
 	const struct drm_connector *connector,
 	const struct drm_connector_state *connector_state,
-	const struct dc_stream_state *old_stream)
+	const struct dc_stream_state *old_stream,
+	int requested_bpc)
 {
 	struct dc_crtc_timing *timing_out = &stream->timing;
 	const struct drm_display_info *info = &connector->display_info;
@@ -4080,8 +4077,9 @@ static void fill_stream_properties_from_drm_display_mode(
 
 	timing_out->timing_3d_format = TIMING_3D_FORMAT_NONE;
 	timing_out->display_color_depth = convert_color_depth_from_display_info(
-		connector, connector_state,
-		(timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420));
+		connector,
+		(timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420),
+		requested_bpc);
 	timing_out->scan_type = SCANNING_TYPE_NODATA;
 	timing_out->hdmi_vic = 0;
 
@@ -4287,7 +4285,8 @@ static struct dc_stream_state *
 create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
 		       const struct drm_display_mode *drm_mode,
 		       const struct dm_connector_state *dm_state,
-		       const struct dc_stream_state *old_stream)
+		       const struct dc_stream_state *old_stream,
+		       int requested_bpc)
 {
 	struct drm_display_mode *preferred_mode = NULL;
 	struct drm_connector *drm_connector;
@@ -4372,10 +4371,10 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
 	*/
 	if (!scale || mode_refresh != preferred_refresh)
 		fill_stream_properties_from_drm_display_mode(stream,
-			&mode, &aconnector->base, con_state, NULL);
+			&mode, &aconnector->base, con_state, NULL, requested_bpc);
 	else
 		fill_stream_properties_from_drm_display_mode(stream,
-			&mode, &aconnector->base, con_state, old_stream);
+			&mode, &aconnector->base, con_state, old_stream, requested_bpc);
 
 	stream->timing.flags.DSC = 0;
 
@@ -4907,16 +4906,54 @@ static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector)
 	create_eml_sink(aconnector);
 }
 
+static struct dc_stream_state *
+create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+				const struct drm_display_mode *drm_mode,
+				const struct dm_connector_state *dm_state,
+				const struct dc_stream_state *old_stream)
+{
+	struct drm_connector *connector = &aconnector->base;
+	struct amdgpu_device *adev = connector->dev->dev_private;
+	struct dc_stream_state *stream;
+	int requested_bpc = connector->state ? connector->state->max_requested_bpc : 8;
+	enum dc_status dc_result = DC_OK;
+
+	do {
+		stream = create_stream_for_sink(aconnector, drm_mode,
+						dm_state, old_stream,
+						requested_bpc);
+		if (stream == NULL) {
+			DRM_ERROR("Failed to create stream for sink!\n");
+			break;
+		}
+
+		dc_result = dc_validate_stream(adev->dm.dc, stream);
+
+		if (dc_result != DC_OK) {
+			DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d\n",
+				      drm_mode->hdisplay,
+				      drm_mode->vdisplay,
+				      drm_mode->clock,
+				      dc_result);
+
+			dc_stream_release(stream);
+			stream = NULL;
+			requested_bpc -= 2; /* lower bpc to retry validation */
+		}
+
+	} while (stream == NULL && requested_bpc >= 6);
+
+	return stream;
+}
+
 enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connector,
 				   struct drm_display_mode *mode)
 {
 	int result = MODE_ERROR;
 	struct dc_sink *dc_sink;
-	struct amdgpu_device *adev = connector->dev->dev_private;
 	/* TODO: Unhardcode stream count */
 	struct dc_stream_state *stream;
 	struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
-	enum dc_status dc_result = DC_OK;
 
 	if ((mode->flags & DRM_MODE_FLAG_INTERLACE) ||
 			(mode->flags & DRM_MODE_FLAG_DBLSCAN))
@@ -4937,24 +4974,11 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec
 		goto fail;
 	}
 
-	stream = create_stream_for_sink(aconnector, mode, NULL, NULL);
-	if (stream == NULL) {
-		DRM_ERROR("Failed to create stream for sink!\n");
-		goto fail;
-	}
-
-	dc_result = dc_validate_stream(adev->dm.dc, stream);
-
-	if (dc_result == DC_OK)
+	stream = create_validate_stream_for_sink(aconnector, mode, NULL, NULL);
+	if (stream) {
+		dc_stream_release(stream);
 		result = MODE_OK;
-	else
-		DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d\n",
-			      mode->hdisplay,
-			      mode->vdisplay,
-			      mode->clock,
-			      dc_result);
-
-	dc_stream_release(stream);
+	}
 
 fail:
 	/* TODO: error handling*/
@@ -5277,10 +5301,12 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
 		return 0;
 
 	if (!state->duplicated) {
+		int max_bpc = conn_state->max_requested_bpc;
 		is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
 				aconnector->force_yuv420_output;
-		color_depth = convert_color_depth_from_display_info(connector, conn_state,
-								    is_y420);
+		color_depth = convert_color_depth_from_display_info(connector,
+								    is_y420,
+								    max_bpc);
 		bpp = convert_dc_color_depth_into_bpc(color_depth) * 3;
 		clock = adjusted_mode->clock;
 		dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
@@ -7711,10 +7737,10 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
 		if (!drm_atomic_crtc_needs_modeset(new_crtc_state))
 			goto skip_modeset;
 
-		new_stream = create_stream_for_sink(aconnector,
-						     &new_crtc_state->mode,
-						    dm_new_conn_state,
-						    dm_old_crtc_state->stream);
+		new_stream = create_validate_stream_for_sink(aconnector,
+							     &new_crtc_state->mode,
+							     dm_new_conn_state,
+							     dm_old_crtc_state->stream);
 
 		/*
 		 * we can have no stream on ACTION_SET if a display
-- 
2.26.2

* [PATCH 21/27] drm/amd/display: Add DMUB firmware version helpers in DMUB service
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (19 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 20/27] drm/amd/display: Fix incorrectly pruned modes with deep color Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 22/27] drm/amd/display: Support CW4 for DMUB ringbuffer inbox Rodrigo Siqueira
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Tony Cheng, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
In order to switch the inbox over from region 4 to CW4 we need to know
whether the firmware is capable of properly invalidating the cache before
reading the commands.

The easiest way is to just check the firmware version, but we have neither
the helper macros nor a way for dmub_srv to know which version it is running.

[How]
Add a new fw_version field to the creation parameters that the driver can
optionally pass in. A version of 0x00000000 is treated as invalid.
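
A rough usage sketch of the new helper; the comparison wrapper and the
required version used here are illustrative, not part of the patch:

#include <stdint.h>

/* Sketch only: the macro packs major/minor/revision into non-overlapping
 * 8/8/16-bit fields, so a plain unsigned comparison orders versions. */
#define DMUB_FW_VERSION(major, minor, revision) \
	((((major) & 0xFF) << 24) | (((minor) & 0xFF) << 16) | ((revision) & 0xFFFF))

static int fw_version_at_least(uint32_t fw_version, uint32_t required)
{
	/* 0x00000000 means the driver did not report a version. */
	return fw_version != 0 && fw_version >= required;
}

/* e.g. fw_version_at_least(dmub->fw_version, DMUB_FW_VERSION(1, 0, 11)) */

The ordering stays correct as long as each component fits within its field
(8 bits for major and minor, 16 bits for revision).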

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h     | 11 +++++++++++
 drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c |  1 +
 2 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
index 26d94eb5ab58..73b5d500ccf6 100644
--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
@@ -280,6 +280,7 @@ struct dmub_srv_hw_funcs {
  * @hw_funcs: optional overrides for hw funcs
  * @user_ctx: context data for callback funcs
  * @asic: driver supplied asic
+ * @fw_version: the current firmware version, if any
  * @is_virtual: false for hw support only
  */
 struct dmub_srv_create_params {
@@ -287,6 +288,7 @@ struct dmub_srv_create_params {
 	struct dmub_srv_hw_funcs *hw_funcs;
 	void *user_ctx;
 	enum dmub_asic asic;
+	uint32_t fw_version;
 	bool is_virtual;
 };
 
@@ -310,12 +312,14 @@ struct dmub_srv_hw_params {
  * struct dmub_srv - software state for dmcub
  * @asic: dmub asic identifier
  * @user_ctx: user provided context for the dmub_srv
+ * @fw_version: the current firmware version, if any
  * @is_virtual: false if hardware support only
  * @fw_state: dmub firmware state pointer
  */
 struct dmub_srv {
 	enum dmub_asic asic;
 	void *user_ctx;
+	uint32_t fw_version;
 	bool is_virtual;
 	struct dmub_fb scratch_mem_fb;
 	volatile const struct dmub_fw_state *fw_state;
@@ -335,6 +339,13 @@ struct dmub_srv {
 	uint32_t psp_version;
 };
 
+/**
+ * DMUB firmware version helper macro - useful for checking the firmware
+ * version to know whether a feature or functionality is supported or present.
+ */
+#define DMUB_FW_VERSION(major, minor, revision) \
+	((((major) & 0xFF) << 24) | (((minor) & 0xFF) << 16) | ((revision) & 0xFFFF))
+
 /**
  * dmub_srv_create() - creates the DMUB service.
  * @dmub: the dmub service
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 3cfbc27f3eab..3559093027ee 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -172,6 +172,7 @@ enum dmub_status dmub_srv_create(struct dmub_srv *dmub,
 	dmub->funcs = params->funcs;
 	dmub->user_ctx = params->user_ctx;
 	dmub->asic = params->asic;
+	dmub->fw_version = params->fw_version;
 	dmub->is_virtual = params->is_virtual;
 
 	/* Setup asic dependent hardware funcs. */
-- 
2.26.2

* [PATCH 22/27] drm/amd/display: Support CW4 for DMUB ringbuffer inbox
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (20 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 21/27] drm/amd/display: Add DMUB firmware version helpers in DMUB service Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 23/27] drm/amd/display: fix dml log2 function Rodrigo Siqueira
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Tony Cheng, Bhawanpreet.Lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
Region 4 is non-cacheable and therefore slower than using cache window 4 (CW4).

[How]
Check the firmware version to determine how we should program the
base address and memory windows.
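
For reference, the version gate used in the hunks below packs to:

	DMUB_FW_VERSION(1, 0, 10) = (1 << 24) | (0 << 16) | 10 = 0x0100000A

so `dmub->fw_version > DMUB_FW_VERSION(1, 0, 10)` selects the CW4 path for
firmware 1.0.11 (0x0100000B) and newer, while older firmware, or an
unreported version of 0, keeps the legacy region 4 programming.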

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../gpu/drm/amd/display/dmub/src/dmub_dcn20.c | 28 ++++++++++++++-----
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |  3 +-
 2 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
index edc73d6d7ba2..1e03f6fdabd6 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
@@ -215,11 +215,22 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
 	/* TODO: Move this to CW4. */
 	dmub_dcn20_translate_addr(&cw4->offset, fb_base, fb_offset, &offset);
 
-	REG_WRITE(DMCUB_REGION4_OFFSET, offset.u.low_part);
-	REG_WRITE(DMCUB_REGION4_OFFSET_HIGH, offset.u.high_part);
-	REG_SET_2(DMCUB_REGION4_TOP_ADDRESS, 0, DMCUB_REGION4_TOP_ADDRESS,
-		  cw4->region.top - cw4->region.base - 1, DMCUB_REGION4_ENABLE,
-		  1);
+	/* New firmware can support CW4. */
+	if (dmub->fw_version > DMUB_FW_VERSION(1, 0, 10)) {
+		REG_WRITE(DMCUB_REGION3_CW4_OFFSET, offset.u.low_part);
+		REG_WRITE(DMCUB_REGION3_CW4_OFFSET_HIGH, offset.u.high_part);
+		REG_WRITE(DMCUB_REGION3_CW4_BASE_ADDRESS, cw4->region.base);
+		REG_SET_2(DMCUB_REGION3_CW4_TOP_ADDRESS, 0,
+			  DMCUB_REGION3_CW4_TOP_ADDRESS, cw4->region.top,
+			  DMCUB_REGION3_CW4_ENABLE, 1);
+	} else {
+		REG_WRITE(DMCUB_REGION4_OFFSET, offset.u.low_part);
+		REG_WRITE(DMCUB_REGION4_OFFSET_HIGH, offset.u.high_part);
+		REG_SET_2(DMCUB_REGION4_TOP_ADDRESS, 0,
+			  DMCUB_REGION4_TOP_ADDRESS,
+			  cw4->region.top - cw4->region.base - 1,
+			  DMCUB_REGION4_ENABLE, 1);
+	}
 
 	dmub_dcn20_translate_addr(&cw5->offset, fb_base, fb_offset, &offset);
 
@@ -243,9 +254,12 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
 void dmub_dcn20_setup_mailbox(struct dmub_srv *dmub,
 			      const struct dmub_region *inbox1)
 {
-	/* TODO: Use CW4 instead of region 4. */
+	/* New firmware can support CW4 for the inbox. */
+	if (dmub->fw_version > DMUB_FW_VERSION(1, 0, 10))
+		REG_WRITE(DMCUB_INBOX1_BASE_ADDRESS, inbox1->base);
+	else
+		REG_WRITE(DMCUB_INBOX1_BASE_ADDRESS, 0x80000000);
 
-	REG_WRITE(DMCUB_INBOX1_BASE_ADDRESS, 0x80000000);
 	REG_WRITE(DMCUB_INBOX1_SIZE, inbox1->top - inbox1->base);
 }
 
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 3559093027ee..d128b0639572 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -62,6 +62,7 @@
 #define DMUB_CW0_BASE (0x60000000)
 #define DMUB_CW1_BASE (0x61000000)
 #define DMUB_CW3_BASE (0x63000000)
+#define DMUB_CW4_BASE (0x64000000)
 #define DMUB_CW5_BASE (0x65000000)
 #define DMUB_CW6_BASE (0x66000000)
 
@@ -403,7 +404,7 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 		cw3.region.top = cw3.region.base + bios_fb->size;
 
 		cw4.offset.quad_part = mail_fb->gpu_addr;
-		cw4.region.base = cw3.region.top + 1;
+		cw4.region.base = DMUB_CW4_BASE;
 		cw4.region.top = cw4.region.base + mail_fb->size;
 
 		inbox1.base = cw4.region.base;
-- 
2.26.2

* [PATCH 23/27] drm/amd/display: fix dml log2 function
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (21 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 22/27] drm/amd/display: Support CW4 for DMUB ringbuffer inbox Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 24/27] drm/amd/display: fix dml immediate flip input Rodrigo Siqueira
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Dmytro Laktyushkin,
	Eric Bernstein, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

This change removes the internal rounding in the dml_log2 function.

dml_log2 is expected to return a floating-point output. In cases where an
integer is needed, DML will floor the output on its own.
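
As a worked example of the behavioural change (assuming dml_round() rounds to
the nearest integer, as the helper above suggests):

	old: dml_log2(1000.0) -> dml_round(9.9658...) == 10   (returned as int)
	new: dml_log2(1000.0) -> 9.9658...                    (returned as double)

A caller that needs an integer now floors the value itself and gets 9, rather
than receiving a result that was silently rounded up to 10.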

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Eric Bernstein <Eric.Bernstein@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
index 02e06c9b3230..ab0870e2a103 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
@@ -86,9 +86,9 @@ static inline double dml_round(double a)
 		return floor;
 }
 
-static inline int dml_log2(double x)
+static inline double dml_log2(double x)
 {
-	return dml_round((double)dcn_bw_log(x, 2));
+	return (double) dcn_bw_log(x, 2);
 }
 
 static inline double dml_pow(double a, int exp)
-- 
2.26.2

* [PATCH 24/27] drm/amd/display: fix dml immediate flip input
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (22 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 23/27] drm/amd/display: fix dml log2 function Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 25/27] drm/amd/display: Remove nv12 work around Rodrigo Siqueira
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Samson Tam,
	Dmytro Laktyushkin, Aurabindo.Pillai, Bhawanpreet.Lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

Set the correct value in the immediate flip requirement field: initialize
ImmediateFlipRequirement to dm_immediate_flip_not_required when fetching pipe
parameters and raise it to dm_immediate_flip_required only for pipes that
request an immediate flip. The enum is also reordered so that "not required"
is the zero value.
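
A minimal illustration (not from the patch) of why the enum order matters:
with the "not required" enumerator first, a zero-initialized or reset state
already carries the safe default.

/* Sketch only, with hypothetical hyp_* names. */
enum hyp_flip_requirement {
	hyp_flip_not_required,  /* = 0, what every reset/zero-init starts from */
	hyp_flip_required,      /* = 1, raised only when a pipe requests an immediate flip */
};

struct hyp_vba_state {
	enum hyp_flip_requirement immediate_flip;
};

/* struct hyp_vba_state s = { 0 };  =>  s.immediate_flip == hyp_flip_not_required */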

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Samson Tam <Samson.Tam@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h | 2 +-
 drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c   | 6 +++++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
index 5baaefd29ba6..64f9c735f74d 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
@@ -177,8 +177,8 @@ enum odm_combine_policy {
 };
 
 enum immediate_flip_requirement {
-	dm_immediate_flip_required,
 	dm_immediate_flip_not_required,
+	dm_immediate_flip_required,
 };
 
 enum unbounded_requesting_policy {
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
index 2d549736f9b8..7fc06ea1f647 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
@@ -385,6 +385,8 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 		visited[k] = false;
 
 	mode_lib->vba.NumberOfActivePlanes = 0;
+	mode_lib->vba.ImmediateFlipSupport = false;
+	mode_lib->vba.ImmediateFlipRequirement = dm_immediate_flip_not_required;
 	for (j = 0; j < mode_lib->vba.cache_num_pipes; ++j) {
 		display_pipe_source_params_st *src = &pipes[j].pipe.src;
 		display_pipe_dest_params_st *dst = &pipes[j].pipe.dest;
@@ -635,8 +637,10 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 			}
 		}
 
-		if (pipes[k].pipe.src.immediate_flip)
+		if (pipes[k].pipe.src.immediate_flip) {
 			mode_lib->vba.ImmediateFlipSupport = true;
+			mode_lib->vba.ImmediateFlipRequirement = dm_immediate_flip_required;
+		}
 
 		mode_lib->vba.NumberOfActivePlanes++;
 	}
-- 
2.26.2

* [PATCH 25/27] drm/amd/display: Remove nv12 work around
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (23 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 24/27] drm/amd/display: fix dml immediate flip input Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 26/27] drm/amd/display: Set/Reset avmute when disable/enable stream Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 27/27] drm/amd/display: FW Release 1.0.11 Rodrigo Siqueira
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Tony Cheng, Yongqiang Sun, Bhawanpreet.Lakha

From: Yongqiang Sun <yongqiang.sun@amd.com>

[Why]
The DAL-side NV12 workaround has a lot of side effects. The KMD-side
workaround is used instead, so the DAL-side one should be removed.

[How]
Remove the workaround from the DAL side.

Signed-off-by: Yongqiang Sun <yongqiang.sun@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../drm/amd/display/dc/core/dc_vm_helper.c    |   3 -
 drivers/gpu/drm/amd/display/dc/dc.h           |   1 -
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    |   3 -
 .../gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c | 121 +-----------------
 .../drm/amd/display/dc/dcn21/dcn21_resource.c |   1 -
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  |   3 -
 6 files changed, 2 insertions(+), 130 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_vm_helper.c b/drivers/gpu/drm/amd/display/dc/core/dc_vm_helper.c
index 64cf24a9ab08..f2b39ec35c89 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_vm_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_vm_helper.c
@@ -47,9 +47,6 @@ int dc_setup_system_context(struct dc *dc, struct dc_phy_addr_space_config *pa_c
 		 */
 		memcpy(&dc->vm_pa_config, pa_config, sizeof(struct dc_phy_addr_space_config));
 		dc->vm_pa_config.valid = true;
-
-		if (pa_config->is_hvm_enabled == 0)
-			dc->debug.nv12_iflip_vm_wa = false;
 	}
 
 	return num_vmids;
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 391691c70805..11ac4b7ab174 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -471,7 +471,6 @@ struct dc_debug_options {
 	bool cm_in_bypass;
 	int force_clock_mode;/*every mode change.*/
 
-	bool nv12_iflip_vm_wa;
 	bool disable_dram_clock_change_vactive_support;
 	bool validate_dml_output;
 	bool enable_dmcub_surface_flip;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 258dcd33787e..26cac587c56b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1435,9 +1435,6 @@ static void dcn20_update_dchubp_dpp(
 		hubp->power_gated = false;
 	}
 
-	if (hubp->funcs->apply_PLAT_54186_wa && viewport_changed)
-		hubp->funcs->apply_PLAT_54186_wa(hubp, &plane_state->address);
-
 	if (pipe_ctx->update_flags.bits.enable || plane_state->update_flags.bits.addr_update)
 		hws->funcs.update_plane_addr(dc, pipe_ctx);
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
index 960a0716dde5..f9045852728f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
@@ -225,116 +225,6 @@ void hubp21_set_viewport(
 		  SEC_VIEWPORT_Y_START_C, viewport_c->y);
 }
 
-static void hubp21_apply_PLAT_54186_wa(
-		struct hubp *hubp,
-		const struct dc_plane_address *address)
-{
-	struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
-	struct dc_debug_options *debug = &hubp->ctx->dc->debug;
-	unsigned int chroma_bpe = 2;
-	unsigned int luma_addr_high_part = 0;
-	unsigned int row_height = 0;
-	unsigned int chroma_pitch = 0;
-	unsigned int viewport_c_height = 0;
-	unsigned int viewport_c_width = 0;
-	unsigned int patched_viewport_height = 0;
-	unsigned int patched_viewport_width = 0;
-	unsigned int rotation_angle = 0;
-	unsigned int pix_format = 0;
-	unsigned int h_mirror_en = 0;
-	unsigned int tile_blk_size = 64 * 1024; /* 64KB for 64KB SW, 4KB for 4KB SW */
-
-
-	if (!debug->nv12_iflip_vm_wa)
-		return;
-
-	REG_GET(DCHUBP_REQ_SIZE_CONFIG_C,
-		PTE_ROW_HEIGHT_LINEAR_C, &row_height);
-
-	REG_GET_2(DCSURF_PRI_VIEWPORT_DIMENSION_C,
-			PRI_VIEWPORT_WIDTH_C, &viewport_c_width,
-			PRI_VIEWPORT_HEIGHT_C, &viewport_c_height);
-
-	REG_GET(DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH_C,
-			PRIMARY_SURFACE_ADDRESS_HIGH_C, &luma_addr_high_part);
-
-	REG_GET(DCSURF_SURFACE_PITCH_C,
-			PITCH_C, &chroma_pitch);
-
-	chroma_pitch += 1;
-
-	REG_GET_3(DCSURF_SURFACE_CONFIG,
-			SURFACE_PIXEL_FORMAT, &pix_format,
-			ROTATION_ANGLE, &rotation_angle,
-			H_MIRROR_EN, &h_mirror_en);
-
-	/* reset persistent cached data */
-	hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
-	/* apply wa only for NV12 surface with scatter gather enabled with viewport > 512 along
-	 * the vertical direction*/
-	if (address->type != PLN_ADDR_TYPE_VIDEO_PROGRESSIVE ||
-			address->video_progressive.luma_addr.high_part == 0xf4)
-		return;
-
-	if ((rotation_angle == ROTATION_ANGLE_0 || rotation_angle == ROTATION_ANGLE_180)
-			&& viewport_c_height <= 512)
-		return;
-
-	if ((rotation_angle == ROTATION_ANGLE_90 || rotation_angle == ROTATION_ANGLE_270)
-				&& viewport_c_width <= 512)
-		return;
-
-	switch (rotation_angle) {
-	case ROTATION_ANGLE_0: /* 0 degree rotation */
-		row_height = 128;
-		patched_viewport_height = (viewport_c_height / row_height + 1) * row_height + 1;
-		patched_viewport_width = viewport_c_width;
-		hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
-		break;
-	case ROTATION_ANGLE_180: /* 180 degree rotation */
-		row_height = 128;
-		patched_viewport_height = viewport_c_height + row_height;
-		patched_viewport_width = viewport_c_width;
-		hubp21->PLAT_54186_wa_chroma_addr_offset = 0 - chroma_pitch * row_height * chroma_bpe;
-		break;
-	case ROTATION_ANGLE_90: /* 90 degree rotation */
-		row_height = 256;
-		if (h_mirror_en) {
-			patched_viewport_height = viewport_c_height;
-			patched_viewport_width = viewport_c_width + row_height;
-			hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
-		} else {
-			patched_viewport_height = viewport_c_height;
-			patched_viewport_width = viewport_c_width + row_height;
-			hubp21->PLAT_54186_wa_chroma_addr_offset = 0 - tile_blk_size;
-		}
-		break;
-	case ROTATION_ANGLE_270: /* 270 degree rotation */
-		row_height = 256;
-		if (h_mirror_en) {
-			patched_viewport_height = viewport_c_height;
-			patched_viewport_width = viewport_c_width + row_height;
-			hubp21->PLAT_54186_wa_chroma_addr_offset = 0 - tile_blk_size;
-		} else {
-			patched_viewport_height = viewport_c_height;
-			patched_viewport_width = viewport_c_width + row_height;
-			hubp21->PLAT_54186_wa_chroma_addr_offset = 0;
-		}
-		break;
-	default:
-		ASSERT(0);
-		break;
-	}
-
-	/* catch cases where viewport keep growing */
-	ASSERT(patched_viewport_height && patched_viewport_height < 5000);
-	ASSERT(patched_viewport_width && patched_viewport_width < 5000);
-
-	REG_UPDATE_2(DCSURF_PRI_VIEWPORT_DIMENSION_C,
-			PRI_VIEWPORT_WIDTH_C, patched_viewport_width,
-			PRI_VIEWPORT_HEIGHT_C, patched_viewport_height);
-}
-
 void hubp21_set_vm_system_aperture_settings(struct hubp *hubp,
 		struct vm_system_aperture_param *apt)
 {
@@ -812,8 +702,6 @@ bool hubp21_program_surface_flip_and_addr(
 		const struct dc_plane_address *address,
 		bool flip_immediate)
 {
-	struct dc_debug_options *debug = &hubp->ctx->dc->debug;
-	struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
 	struct surface_flip_registers flip_regs = { 0 };
 
 	flip_regs.vmid = address->vmid;
@@ -859,12 +747,8 @@ bool hubp21_program_surface_flip_and_addr(
 		flip_regs.DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH =
 				address->video_progressive.luma_addr.high_part;
 
-		if (debug->nv12_iflip_vm_wa) {
-			flip_regs.DCSURF_PRIMARY_SURFACE_ADDRESS_C =
-					address->video_progressive.chroma_addr.low_part + hubp21->PLAT_54186_wa_chroma_addr_offset;
-		} else
-			flip_regs.DCSURF_PRIMARY_SURFACE_ADDRESS_C =
-					address->video_progressive.chroma_addr.low_part;
+		flip_regs.DCSURF_PRIMARY_SURFACE_ADDRESS_C =
+				address->video_progressive.chroma_addr.low_part;
 
 		flip_regs.DCSURF_PRIMARY_SURFACE_ADDRESS_HIGH_C =
 				address->video_progressive.chroma_addr.high_part;
@@ -942,7 +826,6 @@ static struct hubp_funcs dcn21_hubp_funcs = {
 	.set_blank = hubp1_set_blank,
 	.dcc_control = hubp1_dcc_control,
 	.mem_program_viewport = hubp21_set_viewport,
-	.apply_PLAT_54186_wa = hubp21_apply_PLAT_54186_wa,
 	.set_cursor_attributes	= hubp2_cursor_set_attributes,
 	.set_cursor_position	= hubp1_cursor_set_position,
 	.hubp_clk_cntl = hubp1_clk_cntl,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index f00a56835084..00436654c584 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -878,7 +878,6 @@ static const struct dc_debug_options debug_defaults_drv = {
 		.scl_reset_length10 = true,
 		.sanity_checks = true,
 		.disable_48mhz_pwrdwn = false,
-		.nv12_iflip_vm_wa = true,
 		.usbc_combo_phy_reset_wa = true
 };
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
index 2cb8466e657b..efce08e4c0ca 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
@@ -104,9 +104,6 @@ struct hubp_funcs {
 			const struct rect *viewport,
 			const struct rect *viewport_c);
 
-	void (*apply_PLAT_54186_wa)(struct hubp *hubp,
-			const struct dc_plane_address *address);
-
 	bool (*hubp_program_surface_flip_and_addr)(
 		struct hubp *hubp,
 		const struct dc_plane_address *address,
-- 
2.26.2

* [PATCH 26/27] drm/amd/display: Set/Reset avmute when disable/enable stream
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (24 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 25/27] drm/amd/display: Remove nv12 work around Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  2020-05-15 18:13 ` [PATCH 27/27] drm/amd/display: FW Release 1.0.11 Rodrigo Siqueira
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Jinze Xu, Rodrigo.Siqueira,
	Aurabindo.Pillai, Tony Cheng, Bhawanpreet.Lakha, Anthony Koo

From: Jinze Xu <jinze.xu@amd.com>

[Why]
When the front end is disconnected from the back end, conditions such as an
unstable clock may cause garbage to appear on screen.

[How]
Set AVMUTE at the beginning of stream disable and clear AVMUTE at the end of
stream enable.
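
The ordering matters: AVMUTE has to cover the whole window in which the front
end and back end can be out of sync. A rough sketch of the sequencing the
hunks below implement for HDMI, using hypothetical hyp_* types and helpers:

/* Sketch only: mute before teardown, unmute only after bring-up completes. */
struct hyp_stream { int is_hdmi; };

static void hyp_set_avmute(struct hyp_stream *s, int mute)
{
	/* signal AVMUTE on (mute=1) or off (mute=0) to the HDMI sink */
	(void)s;
	(void)mute;
}

static void hyp_disable_stream(struct hyp_stream *s)
{
	if (s->is_hdmi)
		hyp_set_avmute(s, 1);   /* mute before the front end is torn down */

	/* ... existing disable sequence ... */
}

static void hyp_enable_stream(struct hyp_stream *s)
{
	/* ... existing enable sequence ... */

	if (s->is_hdmi)
		hyp_set_avmute(s, 0);   /* unmute only once the stream is stable again */
}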

Signed-off-by: Jinze Xu <jinze.xu@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Acked-by: Tony Cheng <Tony.Cheng@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index e920d046f026..d80b2de3ee82 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -3244,6 +3244,10 @@ void core_link_enable_stream(
 			dp_set_dsc_enable(pipe_ctx, true);
 
 	}
+
+	if (pipe_ctx->stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
+		core_link_set_avmute(pipe_ctx, false);
+	}
 }
 
 void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
@@ -3256,6 +3260,10 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
 			dc_is_virtual_signal(pipe_ctx->stream->signal))
 		return;
 
+	if (pipe_ctx->stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
+		core_link_set_avmute(pipe_ctx, true);
+	}
+
 #if defined(CONFIG_DRM_AMD_DC_HDCP)
 	update_psp_stream_config(pipe_ctx, true);
 #endif
-- 
2.26.2

* [PATCH 27/27] drm/amd/display: FW Release 1.0.11
  2020-05-15 18:12 [PATCH 00/27] DC Patches May 15, 2020 Rodrigo Siqueira
                   ` (25 preceding siblings ...)
  2020-05-15 18:13 ` [PATCH 26/27] drm/amd/display: Set/Reset avmute when disable/enable stream Rodrigo Siqueira
@ 2020-05-15 18:13 ` Rodrigo Siqueira
  26 siblings, 0 replies; 28+ messages in thread
From: Rodrigo Siqueira @ 2020-05-15 18:13 UTC (permalink / raw)
  To: amd-gfx
  Cc: Sunpeng.Li, Harry.Wentland, Rodrigo.Siqueira, Aurabindo.Pillai,
	Bhawanpreet.Lakha, Anthony Koo

From: Anthony Koo <Anthony.Koo@amd.com>

Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h | 2 ++
 drivers/gpu/drm/amd/display/dmub/inc/dmub_rb.h      | 6 ++++--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h
index 242ec257998c..b657c51c9ac9 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_meta.h
@@ -45,11 +45,13 @@
  * @magic_value: magic value identifying DMUB firmware meta info
  * @fw_region_size: size of the firmware state region
  * @trace_buffer_size: size of the tracebuffer region
+ * @fw_version: the firmware version information
  */
 struct dmub_fw_meta_info {
 	uint32_t magic_value;
 	uint32_t fw_region_size;
 	uint32_t trace_buffer_size;
+	uint32_t fw_version;
 };
 
 /* Ensure that the structure remains 64 bytes. */
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_rb.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_rb.h
index 2ae48c18bb5b..31f471f549a6 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_rb.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_rb.h
@@ -37,6 +37,8 @@ struct dmub_rb_init_params {
 	void *ctx;
 	void *base_address;
 	uint32_t capacity;
+	uint32_t read_ptr;
+	uint32_t write_ptr;
 };
 
 struct dmub_rb {
@@ -141,8 +143,8 @@ static inline void dmub_rb_init(struct dmub_rb *rb,
 {
 	rb->base_address = init_params->base_address;
 	rb->capacity = init_params->capacity;
-	rb->rptr = 0;
-	rb->wrpt = 0;
+	rb->rptr = init_params->read_ptr;
+	rb->wrpt = init_params->write_ptr;
 }
 
 #if defined(__cplusplus)
-- 
2.26.2
