* [PATCH 00/51] DC Patches - 2 Dec 2019
@ 2019-12-02 17:33 sunpeng.li
  2019-12-02 17:33 ` [PATCH 01/51] drm/amd/display: update sr and pstate latencies for Renoir sunpeng.li
                   ` (50 more replies)
  0 siblings, 51 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx; +Cc: bhawanpreet.lakha, rodrigo.siqueira, Leo Li, harry.wentland

From: Leo Li <sunpeng.li@amd.com>

Summary of changes:

* More DMCUB updates for Renoir
* Cleanup and refactor of DC hardware sequencer interface

Amanda Liu (1):
  drm/amd/display: Fix screen tearing on vrr tests

Anthony Koo (4):
  drm/amd/display: rename core_dc to dc
  drm/amd/display: add separate of private hwss functions
  drm/amd/display: add DP protocol version
  drm/amd/display: Limit NV12 chroma workaround

Aric Cyr (3):
  drm/amd/display: 3.2.61
  drm/amd/display: fix cursor positioning for multiplane cases
  drm/amd/display: 3.2.62

Brandon Syu (1):
  drm/amd/display: fixed that I2C over AUX didn't read data issue

David Galiffi (1):
  drm/amd/display: Fixed kernel panic when booting with DP-to-HDMI
    dongle

Dmytro Laktyushkin (2):
  drm/amd/display: fix dml20 min_dst_y_next_start calculation
  drm/amd/display: update dml related structs

Eric Yang (3):
  drm/amd/display: update sr and pstate latencies for Renoir
  drm/amd/display: fix dprefclk and ss percentage reading on RN
  drm/amd/display: update dispclk and dppclk vco frequency

George Shen (1):
  drm/amd/display: Increase the number of retries after AUX DEFER

Hugo Hu (1):
  drm/amd/display: Save/restore link setting for disable phy when link
    retraining

Jaehyun Chung (1):
  drm/amd/display: Wrong ifdef guards were used around DML validation

Joseph Gravenor (5):
  drm/amd/display: fix DalDramClockChangeLatencyNs override
  drm/amd/display: populate bios integrated info for renoir
  drm/amd/display: have two different sr and pstate latency tables for
    renoir
  drm/amd/display: update p-state latency for renoir when using lpddr4
  drm/amd/display: update sr latency for renoir when using lpddr4

Krunoslav Kovac (1):
  drm/amd/display: Change HDR_MULT check

Leo (Hanghong) Ma (1):
  drm/amd/display: Change the delay time before enabling FEC

Lucy Li (1):
  drm/amd/display: Disable link before reenable

Michael Strauss (2):
  drm/amd/display: Fix Dali clk mgr construct
  drm/amd/display: Disable chroma viewport w/a when rotated 180 degrees

Mikita Lipski (1):
  drm/amd/display: Return a correct error value

Nicholas Kazlauskas (6):
  drm/amd/display: Only wait for DMUB phy init on dcn21
  drm/amd/display: Return DMUB_STATUS_OK when autoload unsupported
  drm/amd/display: Program CW5 for tracebuffer for dcn20
  drm/amd/display: Split DMUB cmd type into type/subtype
  drm/amd/display: Add shared DMCUB/driver firmware state cache window
  drm/amd/display: Extend DMCUB offload testing into dcn20/21

Nikola Cornij (2):
  drm/amd/display: Map DSC resources 1-to-1 if numbers of OPPs and DSCs
    are equal
  drm/amd/display: Reset steer fifo before unblanking the stream

Noah Abradjian (3):
  drm/amd/display: Remove flag check in mpcc update
  drm/amd/display: Modify logic for when to wait for mpcc idle
  drm/amd/display: Remove redundant call

Paul Hsieh (1):
  drm/amd/display: Reset PHY in link re-training

Reza Amini (2):
  drm/amd/display: Implement DePQ for DCN1
  drm/amd/display: Implement DePQ for DCN2

Wenjing Liu (3):
  drm/amd/display: add dc dsc functions to return bpp range for pixel
    encoding
  drm/amd/display: remove spam DSC log
  drm/amd/display: add dsc policy getter

Yongqiang Sun (2):
  drm/amd/display: Add DMCUB__PG_DONE trace code enum
  drm/amd/display: Compare clock state member to determine optimization.

abdoulaye berthe (3):
  drm/amd/display: add log for lttpr
  drm/amd/display: check for repeater when setting aux_rd_interval.
  drm/amd/display: correct log message for lttpr

 .../drm/amd/display/dc/bios/bios_parser2.c    |   2 +
 .../drm/amd/display/dc/bios/command_table2.c  |  13 +-
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |   7 +
 .../dc/clk_mgr/dce112/dce112_clk_mgr.c        |  12 +-
 .../dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c  |   6 +-
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 146 +++++--
 .../dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c   |   6 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c      |  12 +-
 .../gpu/drm/amd/display/dc/core/dc_debug.c    |   8 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 198 +++++-----
 .../gpu/drm/amd/display/dc/core/dc_link_ddc.c |   2 +-
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 209 +++++++---
 .../drm/amd/display/dc/core/dc_link_hwss.c    |  40 +-
 .../gpu/drm/amd/display/dc/core/dc_resource.c |   9 +-
 .../gpu/drm/amd/display/dc/core/dc_stream.c   |  43 +-
 .../gpu/drm/amd/display/dc/core/dc_surface.c  |  22 +-
 drivers/gpu/drm/amd/display/dc/dc.h           |   7 +-
 drivers/gpu/drm/amd/display/dc/dc_dsc.h       |  16 +-
 drivers/gpu/drm/amd/display/dc/dc_helper.c    |   3 +
 drivers/gpu/drm/amd/display/dc/dce/dce_aux.c  |  32 +-
 .../gpu/drm/amd/display/dc/dce/dce_hwseq.c    |   2 +-
 .../gpu/drm/amd/display/dc/dce/dce_hwseq.h    |   6 +-
 .../display/dc/dce100/dce100_hw_sequencer.c   |   3 +-
 .../display/dc/dce100/dce100_hw_sequencer.h   |   1 +
 .../display/dc/dce110/dce110_hw_sequencer.c   |  85 ++--
 .../display/dc/dce110/dce110_hw_sequencer.h   |   1 +
 .../amd/display/dc/dce110/dce110_resource.c   |   3 +-
 .../display/dc/dce112/dce112_hw_sequencer.c   |   2 +-
 .../display/dc/dce112/dce112_hw_sequencer.h   |   1 +
 .../display/dc/dce120/dce120_hw_sequencer.c   |   2 +-
 .../display/dc/dce120/dce120_hw_sequencer.h   |   1 +
 .../amd/display/dc/dce80/dce80_hw_sequencer.c |   2 +-
 .../amd/display/dc/dce80/dce80_hw_sequencer.h |   1 +
 .../drm/amd/display/dc/dcn10/dcn10_dpp_cm.c   |   3 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c |   3 +-
 .../gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h |   4 +-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 179 +++++----
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.h |   1 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c |  38 +-
 .../drm/amd/display/dc/dcn20/dcn20_dpp_cm.c   |   3 +
 .../gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c |   1 +
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    | 103 ++---
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.h    |   3 +
 .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c |  54 +--
 .../gpu/drm/amd/display/dc/dcn20/dcn20_optc.c |   5 +
 .../drm/amd/display/dc/dcn20/dcn20_resource.c |  16 +-
 .../display/dc/dcn20/dcn20_stream_encoder.c   |  12 +-
 .../gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c |   8 +-
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.c    |   1 +
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.h    |   2 +
 .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c |  63 +--
 .../drm/amd/display/dc/dcn21/dcn21_resource.c |  22 +-
 .../dc/dml/dcn20/display_rq_dlg_calc_20.c     |   3 +-
 .../amd/display/dc/dml/display_mode_structs.h |   3 +
 .../drm/amd/display/dc/dml/display_mode_vba.c |   2 +-
 drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c   |  97 +++--
 .../gpu/drm/amd/display/dc/inc/dc_link_dp.h   |   5 +-
 .../gpu/drm/amd/display/dc/inc/hw/clk_mgr.h   |   3 +
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  |   4 +-
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h | 370 +++++-------------
 .../amd/display/dc/inc/hw_sequencer_private.h | 156 ++++++++
 .../dc/irq/dce110/irq_service_dce110.c        |   4 +-
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  48 +--
 .../drm/amd/display/dmub/inc/dmub_cmd_dal.h   |  41 ++
 .../drm/amd/display/dmub/inc/dmub_cmd_vbios.h |  41 ++
 .../drm/amd/display/dmub/inc/dmub_fw_state.h  |  73 ++++
 .../gpu/drm/amd/display/dmub/inc/dmub_srv.h   |   8 +-
 .../amd/display/dmub/inc/dmub_trace_buffer.h  |   1 +
 .../gpu/drm/amd/display/dmub/src/dmub_dcn20.c |  22 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn20.h |   5 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn21.c |  17 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn21.h |   5 +-
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |  39 +-
 .../gpu/drm/amd/display/include/dal_asic_id.h |  12 +-
 .../amd/display/include/i2caux_interface.h    |   2 +-
 .../amd/display/modules/color/color_gamma.c   |  39 +-
 .../amd/display/modules/freesync/freesync.c   |  32 +-
 .../amd/display/modules/inc/mod_freesync.h    |   1 -
 78 files changed, 1516 insertions(+), 941 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
 create mode 100644 drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
 create mode 100644 drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_vbios.h
 create mode 100644 drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_state.h

--
2.24.0


* [PATCH 01/51] drm/amd/display: update sr and pstate latencies for Renoir
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 02/51] drm/amd/display: rename core_dc to dc sunpeng.li
                   ` (49 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Eric Yang, Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha

From: Eric Yang <Eric.Yang2@amd.com>

[Why]
The DF team has produced more optimized latency numbers.

[How]
Add sr latencies to the wm table, and use different latencies
for different wm sets.
Also fix the bb override from the registry key for these latencies.
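
A condensed sketch of the change (fields and values as in the diff
below): each wm set entry now carries its own sr latencies next to the
p-state latency, and the registry-key override walks every wm set
instead of patching a single bounding-box field:

    struct wm_range_table_entry {
            unsigned int wm_inst;
            unsigned int wm_type;
            double pstate_latency_us;
            double sr_exit_time_us;
            double sr_enter_plus_exit_time_us;
            bool valid;
    };

    if (dc->bb_overrides.sr_exit_time_ns) {
            for (i = 0; i < WM_SET_COUNT; i++)
                    dc->clk_mgr->bw_params->wm_table.entries[i].sr_exit_time_us =
                            dc->bb_overrides.sr_exit_time_ns / 1000.0;
    }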

Signed-off-by: Eric Yang <Eric.Yang2@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c    | 16 ++++++++++++----
 .../drm/amd/display/dc/dcn21/dcn21_resource.c    | 15 ++++++++++++---
 drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h  |  2 ++
 3 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 790a2d211bd6..841095d09d3c 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -523,25 +523,33 @@ struct clk_bw_params rn_bw_params = {
 			{
 				.wm_inst = WM_A,
 				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 23.84,
+				.pstate_latency_us = 11.72,
+				.sr_exit_time_us = 6.09,
+				.sr_enter_plus_exit_time_us = 7.14,
 				.valid = true,
 			},
 			{
 				.wm_inst = WM_B,
 				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 23.84,
+				.pstate_latency_us = 11.72,
+				.sr_exit_time_us = 10.12,
+				.sr_enter_plus_exit_time_us = 11.48,
 				.valid = true,
 			},
 			{
 				.wm_inst = WM_C,
 				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 23.84,
+				.pstate_latency_us = 11.72,
+				.sr_exit_time_us = 10.12,
+				.sr_enter_plus_exit_time_us = 11.48,
 				.valid = true,
 			},
 			{
 				.wm_inst = WM_D,
 				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 23.84,
+				.pstate_latency_us = 11.72,
+				.sr_exit_time_us = 10.12,
+				.sr_enter_plus_exit_time_us = 11.48,
 				.valid = true,
 			},
 		},
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index dd3bc37d4eb9..818c7a629484 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -972,6 +972,8 @@ static void calculate_wm_set_for_vlevel(
 	pipes[0].clks_cfg.socclk_mhz = dml->soc.clock_limits[vlevel].socclk_mhz;
 
 	dml->soc.dram_clock_change_latency_us = table_entry->pstate_latency_us;
+	dml->soc.sr_exit_time_us = table_entry->sr_exit_time_us;
+	dml->soc.sr_enter_plus_exit_time_us = table_entry->sr_enter_plus_exit_time_us;
 
 	wm_set->urgent_ns = get_wm_urgent(dml, pipes, pipe_cnt) * 1000;
 	wm_set->cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(dml, pipes, pipe_cnt) * 1000;
@@ -987,14 +989,21 @@ static void calculate_wm_set_for_vlevel(
 
 static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_st *bb)
 {
+	int i;
+
 	kernel_fpu_begin();
 	if (dc->bb_overrides.sr_exit_time_ns) {
-		bb->sr_exit_time_us = dc->bb_overrides.sr_exit_time_ns / 1000.0;
+		for (i = 0; i < WM_SET_COUNT; i++) {
+			  dc->clk_mgr->bw_params->wm_table.entries[i].sr_exit_time_us =
+					  dc->bb_overrides.sr_exit_time_ns / 1000.0;
+		}
 	}
 
 	if (dc->bb_overrides.sr_enter_plus_exit_time_ns) {
-		bb->sr_enter_plus_exit_time_us =
-				dc->bb_overrides.sr_enter_plus_exit_time_ns / 1000.0;
+		for (i = 0; i < WM_SET_COUNT; i++) {
+			  dc->clk_mgr->bw_params->wm_table.entries[i].sr_enter_plus_exit_time_us =
+					  dc->bb_overrides.sr_enter_plus_exit_time_ns / 1000.0;
+		}
 	}
 
 	if (dc->bb_overrides.urgent_latency_ns) {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
index f55203e427de..4aa09fe954c5 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
@@ -66,6 +66,8 @@ struct wm_range_table_entry {
 	unsigned int wm_inst;
 	unsigned int wm_type;
 	double pstate_latency_us;
+	double sr_exit_time_us;
+	double sr_enter_plus_exit_time_us;
 	bool valid;
 };
 
-- 
2.24.0


* [PATCH 02/51] drm/amd/display: rename core_dc to dc
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
  2019-12-02 17:33 ` [PATCH 01/51] drm/amd/display: update sr and pstate latencies for Renoir sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 03/51] drm/amd/display: add separate of private hwss functions sunpeng.li
                   ` (48 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Leo Li, harry.wentland, rodrigo.siqueira,
	bhawanpreet.lakha, Anthony Koo

From: Anthony Koo <Anthony.Koo@amd.com>

[Why]
First, to make the code more consistent.
Second, to get rid of those scenarios where we create a second
local pointer to dc when it's already passed in.

[How]
Rename core_dc to dc.
Remove duplicate local pointers to dc.
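
A minimal sketch of the pattern being removed, with names taken from
the dc_debug.c hunk below:

    /* Before: a redundant local alias is created even though dc is
     * already in scope.
     */
    struct dc *core_dc = dc;
    unsigned int underlay_idx = core_dc->res_pool->underlay_pipe_index;

    /* After: the existing pointer is used directly. */
    unsigned int underlay_idx = dc->res_pool->underlay_pipe_index;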

Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../dc/clk_mgr/dce112/dce112_clk_mgr.c        | 12 ++--
 .../dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c  |  6 +-
 .../dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c   |  6 +-
 .../gpu/drm/amd/display/dc/core/dc_debug.c    |  7 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 65 +++++++++----------
 .../drm/amd/display/dc/core/dc_link_hwss.c    | 26 ++++----
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  3 +-
 .../gpu/drm/amd/display/dc/core/dc_stream.c   | 40 ++++++------
 .../gpu/drm/amd/display/dc/core/dc_surface.c  | 22 +++----
 .../display/dc/dce110/dce110_hw_sequencer.c   |  8 +--
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 10 +--
 .../dc/irq/dce110/irq_service_dce110.c        |  4 +-
 12 files changed, 102 insertions(+), 107 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c
index a6c46e903ff9..d031bd3d3072 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce112/dce112_clk_mgr.c
@@ -72,8 +72,8 @@ int dce112_set_clock(struct clk_mgr *clk_mgr_base, int requested_clk_khz)
 	struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
 	struct bp_set_dce_clock_parameters dce_clk_params;
 	struct dc_bios *bp = clk_mgr_base->ctx->dc_bios;
-	struct dc *core_dc = clk_mgr_base->ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc *dc = clk_mgr_base->ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 	int actual_clock = requested_clk_khz;
 	/* Prepare to program display clock*/
 	memset(&dce_clk_params, 0, sizeof(dce_clk_params));
@@ -110,7 +110,7 @@ int dce112_set_clock(struct clk_mgr *clk_mgr_base, int requested_clk_khz)
 
 	bp->funcs->set_dce_clock(bp, &dce_clk_params);
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		if (dmcu && dmcu->funcs->is_dmcu_initialized(dmcu)) {
 			if (clk_mgr_dce->dfs_bypass_disp_clk != actual_clock)
 				dmcu->funcs->set_psr_wait_loop(dmcu,
@@ -126,8 +126,8 @@ int dce112_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_clk_khz)
 {
 	struct bp_set_dce_clock_parameters dce_clk_params;
 	struct dc_bios *bp = clk_mgr->base.ctx->dc_bios;
-	struct dc *core_dc = clk_mgr->base.ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc *dc = clk_mgr->base.ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 	int actual_clock = requested_clk_khz;
 	/* Prepare to program display clock*/
 	memset(&dce_clk_params, 0, sizeof(dce_clk_params));
@@ -152,7 +152,7 @@ int dce112_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_clk_khz)
 		clk_mgr->cur_min_clks_state = DM_PP_CLOCKS_STATE_NOMINAL;
 
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		if (dmcu && dmcu->funcs->is_dmcu_initialized(dmcu)) {
 			if (clk_mgr->dfs_bypass_disp_clk != actual_clock)
 				dmcu->funcs->set_psr_wait_loop(dmcu,
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c
index 1897e91c8ccb..97b7f32294fd 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn10/rv1_clk_mgr_vbios_smu.c
@@ -88,8 +88,8 @@ int rv1_vbios_smu_send_msg_with_param(struct clk_mgr_internal *clk_mgr, unsigned
 int rv1_vbios_smu_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_dispclk_khz)
 {
 	int actual_dispclk_set_mhz = -1;
-	struct dc *core_dc = clk_mgr->base.ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc *dc = clk_mgr->base.ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 
 	/*  Unit of SMU msg parameter is Mhz */
 	actual_dispclk_set_mhz = rv1_vbios_smu_send_msg_with_param(
@@ -100,7 +100,7 @@ int rv1_vbios_smu_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_di
 	/* Actual dispclk set is returned in the parameter register */
 	actual_dispclk_set_mhz = REG_READ(MP1_SMN_C2PMSG_83) * 1000;
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		if (dmcu && dmcu->funcs->is_dmcu_initialized(dmcu)) {
 			if (clk_mgr->dfs_bypass_disp_clk != actual_dispclk_set_mhz)
 				dmcu->funcs->set_psr_wait_loop(dmcu,
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
index cb7c0e8b7e1b..6878aedf1d3e 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr_vbios_smu.c
@@ -82,8 +82,8 @@ int rn_vbios_smu_get_smu_version(struct clk_mgr_internal *clk_mgr)
 int rn_vbios_smu_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_dispclk_khz)
 {
 	int actual_dispclk_set_mhz = -1;
-	struct dc *core_dc = clk_mgr->base.ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc *dc = clk_mgr->base.ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 
 	/*  Unit of SMU msg parameter is Mhz */
 	actual_dispclk_set_mhz = rn_vbios_smu_send_msg_with_param(
@@ -91,7 +91,7 @@ int rn_vbios_smu_set_dispclk(struct clk_mgr_internal *clk_mgr, int requested_dis
 			VBIOSSMC_MSG_SetDispclkFreq,
 			requested_dispclk_khz / 1000);
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		if (dmcu && dmcu->funcs->is_dmcu_initialized(dmcu)) {
 			if (clk_mgr->dfs_bypass_disp_clk != actual_dispclk_set_mhz)
 				dmcu->funcs->set_psr_wait_loop(dmcu,
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
index 85a52a16295a..bf13cffed703 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
@@ -310,14 +310,13 @@ void context_timing_trace(
 		struct resource_context *res_ctx)
 {
 	int i;
-	struct dc  *core_dc = dc;
 	int h_pos[MAX_PIPES] = {0}, v_pos[MAX_PIPES] = {0};
 	struct crtc_position position;
-	unsigned int underlay_idx = core_dc->res_pool->underlay_pipe_index;
+	unsigned int underlay_idx = dc->res_pool->underlay_pipe_index;
 	DC_LOGGER_INIT(dc->ctx->logger);
 
 
-	for (i = 0; i < core_dc->res_pool->pipe_count; i++) {
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[i];
 		/* get_position() returns CRTC vertical/horizontal counter
 		 * hence not applicable for underlay pipe
@@ -329,7 +328,7 @@ void context_timing_trace(
 		h_pos[i] = position.horizontal_count;
 		v_pos[i] = position.vertical_count;
 	}
-	for (i = 0; i < core_dc->res_pool->pipe_count; i++) {
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[i];
 
 		if (pipe_ctx->stream == NULL || pipe_ctx->pipe_idx == underlay_idx)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 123b79dcd8e4..093f6c808876 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -2355,9 +2355,9 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
 		uint32_t backlight_pwm_u16_16,
 		uint32_t frame_ramp)
 {
-	struct dc  *core_dc = link->ctx->dc;
-	struct abm *abm = core_dc->res_pool->abm;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc  *dc = link->ctx->dc;
+	struct abm *abm = dc->res_pool->abm;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 	unsigned int controller_id = 0;
 	bool use_smooth_brightness = true;
 	int i;
@@ -2375,22 +2375,22 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
 
 	if (dc_is_embedded_signal(link->connector_signal)) {
 		for (i = 0; i < MAX_PIPES; i++) {
-			if (core_dc->current_state->res_ctx.pipe_ctx[i].stream) {
-				if (core_dc->current_state->res_ctx.
+			if (dc->current_state->res_ctx.pipe_ctx[i].stream) {
+				if (dc->current_state->res_ctx.
 						pipe_ctx[i].stream->link
 						== link) {
 					/* DMCU -1 for all controller id values,
 					 * therefore +1 here
 					 */
 					controller_id =
-						core_dc->current_state->
+						dc->current_state->
 						res_ctx.pipe_ctx[i].stream_res.tg->inst +
 						1;
 
 					/* Disable brightness ramping when the display is blanked
 					 * as it can hang the DMCU
 					 */
-					if (core_dc->current_state->res_ctx.pipe_ctx[i].plane_state == NULL)
+					if (dc->current_state->res_ctx.pipe_ctx[i].plane_state == NULL)
 						frame_ramp = 0;
 				}
 			}
@@ -2408,8 +2408,8 @@ bool dc_link_set_backlight_level(const struct dc_link *link,
 
 bool dc_link_set_abm_disable(const struct dc_link *link)
 {
-	struct dc  *core_dc = link->ctx->dc;
-	struct abm *abm = core_dc->res_pool->abm;
+	struct dc  *dc = link->ctx->dc;
+	struct abm *abm = dc->res_pool->abm;
 
 	if ((abm == NULL) || (abm->funcs->set_backlight_level_pwm == NULL))
 		return false;
@@ -2421,8 +2421,8 @@ bool dc_link_set_abm_disable(const struct dc_link *link)
 
 bool dc_link_set_psr_allow_active(struct dc_link *link, bool allow_active, bool wait)
 {
-	struct dc  *core_dc = link->ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc  *dc = link->ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 
 
 
@@ -2436,8 +2436,8 @@ bool dc_link_set_psr_allow_active(struct dc_link *link, bool allow_active, bool
 
 bool dc_link_get_psr_state(const struct dc_link *link, uint32_t *psr_state)
 {
-	struct dc  *core_dc = link->ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc  *dc = link->ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 
 	if (dmcu != NULL && link->psr_feature_enabled)
 		dmcu->funcs->get_psr_state(dmcu, psr_state);
@@ -2484,7 +2484,7 @@ bool dc_link_setup_psr(struct dc_link *link,
 		const struct dc_stream_state *stream, struct psr_config *psr_config,
 		struct psr_context *psr_context)
 {
-	struct dc *core_dc;
+	struct dc *dc;
 	struct dmcu *dmcu;
 	int i;
 	/* updateSinkPsrDpcdConfig*/
@@ -2495,8 +2495,8 @@ bool dc_link_setup_psr(struct dc_link *link,
 	if (!link)
 		return false;
 
-	core_dc = link->ctx->dc;
-	dmcu = core_dc->res_pool->dmcu;
+	dc = link->ctx->dc;
+	dmcu = dc->res_pool->dmcu;
 
 	if (!dmcu)
 		return false;
@@ -2535,13 +2535,13 @@ bool dc_link_setup_psr(struct dc_link *link,
 	psr_context->engineId = link->link_enc->preferred_engine;
 
 	for (i = 0; i < MAX_PIPES; i++) {
-		if (core_dc->current_state->res_ctx.pipe_ctx[i].stream
+		if (dc->current_state->res_ctx.pipe_ctx[i].stream
 				== stream) {
 			/* dmcu -1 for all controller id values,
 			 * therefore +1 here
 			 */
 			psr_context->controllerId =
-				core_dc->current_state->res_ctx.
+				dc->current_state->res_ctx.
 				pipe_ctx[i].stream_res.tg->inst + 1;
 			break;
 		}
@@ -2905,12 +2905,12 @@ void core_link_enable_stream(
 		struct dc_state *state,
 		struct pipe_ctx *pipe_ctx)
 {
-	struct dc *core_dc = pipe_ctx->stream->ctx->dc;
+	struct dc *dc = pipe_ctx->stream->ctx->dc;
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	enum dc_status status;
 	DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment) &&
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment) &&
 			dc_is_virtual_signal(pipe_ctx->stream->signal))
 		return;
 
@@ -2953,14 +2953,14 @@ void core_link_enable_stream(
 			pipe_ctx->stream_res.stream_enc,
 			&stream->timing);
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		bool apply_edp_fast_boot_optimization =
 			pipe_ctx->stream->apply_edp_fast_boot_optimization;
 
 		pipe_ctx->stream->apply_edp_fast_boot_optimization = false;
 
 		resource_build_info_frame(pipe_ctx);
-		core_dc->hwss.update_info_frame(pipe_ctx);
+		dc->hwss.update_info_frame(pipe_ctx);
 
 		/* Do not touch link on seamless boot optimization. */
 		if (pipe_ctx->stream->apply_seamless_boot_optimization) {
@@ -3003,7 +3003,7 @@ void core_link_enable_stream(
 			}
 		}
 
-		core_dc->hwss.enable_audio_stream(pipe_ctx);
+		dc->hwss.enable_audio_stream(pipe_ctx);
 
 		/* turn off otg test pattern if enable */
 		if (pipe_ctx->stream_res.tg->funcs->set_test_pattern)
@@ -3016,7 +3016,7 @@ void core_link_enable_stream(
 					dc_is_virtual_signal(pipe_ctx->stream->signal))
 				dp_set_dsc_enable(pipe_ctx, true);
 		}
-		core_dc->hwss.enable_stream(pipe_ctx);
+		dc->hwss.enable_stream(pipe_ctx);
 
 		/* Set DPS PPS SDP (AKA "info frames") */
 		if (pipe_ctx->stream->timing.flags.DSC) {
@@ -3028,7 +3028,7 @@ void core_link_enable_stream(
 		if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
 			dc_link_allocate_mst_payload(pipe_ctx);
 
-		core_dc->hwss.unblank_stream(pipe_ctx,
+		dc->hwss.unblank_stream(pipe_ctx,
 			&pipe_ctx->stream->link->cur_link_settings);
 
 		if (dc_is_dp_signal(pipe_ctx->stream->signal))
@@ -3036,8 +3036,7 @@ void core_link_enable_stream(
 #if defined(CONFIG_DRM_AMD_DC_HDCP)
 		update_psp_stream_config(pipe_ctx, false);
 #endif
-	}
-	else { // if (IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment))
+	} else { // if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
 		if (dc_is_dp_signal(pipe_ctx->stream->signal) ||
 				dc_is_virtual_signal(pipe_ctx->stream->signal))
 			dp_set_dsc_enable(pipe_ctx, true);
@@ -3047,11 +3046,11 @@ void core_link_enable_stream(
 
 void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
 {
-	struct dc  *core_dc = pipe_ctx->stream->ctx->dc;
+	struct dc  *dc = pipe_ctx->stream->ctx->dc;
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->sink->link;
 
-	if (!IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment) &&
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment) &&
 			dc_is_virtual_signal(pipe_ctx->stream->signal))
 		return;
 
@@ -3059,7 +3058,7 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
 	update_psp_stream_config(pipe_ctx, true);
 #endif
 
-	core_dc->hwss.blank_stream(pipe_ctx);
+	dc->hwss.blank_stream(pipe_ctx);
 
 	if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
 		deallocate_mst_payload(pipe_ctx);
@@ -3088,7 +3087,7 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
 			write_i2c_redriver_setting(pipe_ctx, false);
 		}
 	}
-	core_dc->hwss.disable_stream(pipe_ctx);
+	dc->hwss.disable_stream(pipe_ctx);
 
 	disable_link(pipe_ctx->stream->link, pipe_ctx->stream->signal);
 	if (pipe_ctx->stream->timing.flags.DSC) {
@@ -3099,12 +3098,12 @@ void core_link_disable_stream(struct pipe_ctx *pipe_ctx)
 
 void core_link_set_avmute(struct pipe_ctx *pipe_ctx, bool enable)
 {
-	struct dc  *core_dc = pipe_ctx->stream->ctx->dc;
+	struct dc  *dc = pipe_ctx->stream->ctx->dc;
 
 	if (!dc_is_hdmi_signal(pipe_ctx->stream->signal))
 		return;
 
-	core_dc->hwss.set_avmute(pipe_ctx, enable);
+	dc->hwss.set_avmute(pipe_ctx, enable);
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
index bb1e8e5b5252..67ce12df23f1 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
@@ -95,8 +95,8 @@ void dp_enable_link_phy(
 	const struct dc_link_settings *link_settings)
 {
 	struct link_encoder *link_enc = link->link_enc;
-	struct dc  *core_dc = link->ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc  *dc = link->ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 
 	struct pipe_ctx *pipes =
 			link->dc->current_state->res_ctx.pipe_ctx;
@@ -200,8 +200,8 @@ bool edp_receiver_ready_T7(struct dc_link *link)
 
 void dp_disable_link_phy(struct dc_link *link, enum signal_type signal)
 {
-	struct dc  *core_dc = link->ctx->dc;
-	struct dmcu *dmcu = core_dc->res_pool->dmcu;
+	struct dc  *dc = link->ctx->dc;
+	struct dmcu *dmcu = dc->res_pool->dmcu;
 
 	if (!link->wa_flags.dp_keep_receiver_powered)
 		dp_receiver_power_ctrl(link, false);
@@ -395,14 +395,14 @@ static void dsc_optc_config_log(struct display_stream_compressor *dsc,
 
 static bool dp_set_dsc_on_rx(struct pipe_ctx *pipe_ctx, bool enable)
 {
-	struct dc *core_dc = pipe_ctx->stream->ctx->dc;
+	struct dc *dc = pipe_ctx->stream->ctx->dc;
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	bool result = false;
 
-	if (IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment))
+	if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
 		result = true;
 	else
-		result = dm_helpers_dp_write_dsc_enable(core_dc->ctx, stream, enable);
+		result = dm_helpers_dp_write_dsc_enable(dc->ctx, stream, enable);
 	return result;
 }
 
@@ -412,7 +412,7 @@ static bool dp_set_dsc_on_rx(struct pipe_ctx *pipe_ctx, bool enable)
 void dp_set_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
 {
 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
-	struct dc *core_dc = pipe_ctx->stream->ctx->dc;
+	struct dc *dc = pipe_ctx->stream->ctx->dc;
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct pipe_ctx *odm_pipe;
 	int opp_cnt = 1;
@@ -448,7 +448,7 @@ void dp_set_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
 		optc_dsc_mode = dsc_optc_cfg.is_pixel_format_444 ? OPTC_DSC_ENABLED_444 : OPTC_DSC_ENABLED_NATIVE_SUBSAMPLED;
 
 		/* Enable DSC in encoder */
-		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 			DC_LOG_DSC("Setting stream encoder DSC config for engine %d:", (int)pipe_ctx->stream_res.stream_enc->id);
 			dsc_optc_config_log(dsc, &dsc_optc_cfg);
 			pipe_ctx->stream_res.stream_enc->funcs->dp_set_dsc_config(pipe_ctx->stream_res.stream_enc,
@@ -473,7 +473,7 @@ void dp_set_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
 				OPTC_DSC_DISABLED, 0, 0);
 
 		/* disable DSC in stream encoder */
-		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 			pipe_ctx->stream_res.stream_enc->funcs->dp_set_dsc_config(
 					pipe_ctx->stream_res.stream_enc,
 					OPTC_DSC_DISABLED, 0, 0);
@@ -516,7 +516,7 @@ bool dp_set_dsc_enable(struct pipe_ctx *pipe_ctx, bool enable)
 bool dp_set_dsc_pps_sdp(struct pipe_ctx *pipe_ctx, bool enable)
 {
 	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
-	struct dc *core_dc = pipe_ctx->stream->ctx->dc;
+	struct dc *dc = pipe_ctx->stream->ctx->dc;
 	struct dc_stream_state *stream = pipe_ctx->stream;
 
 	if (!pipe_ctx->stream->timing.flags.DSC || !dsc)
@@ -535,7 +535,7 @@ bool dp_set_dsc_pps_sdp(struct pipe_ctx *pipe_ctx, bool enable)
 
 		DC_LOG_DSC(" ");
 		dsc->funcs->dsc_get_packed_pps(dsc, &dsc_cfg, &dsc_packed_pps[0]);
-		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 			DC_LOG_DSC("Setting stream encoder DSC PPS SDP for engine %d\n", (int)pipe_ctx->stream_res.stream_enc->id);
 			pipe_ctx->stream_res.stream_enc->funcs->dp_set_dsc_pps_info_packet(
 									pipe_ctx->stream_res.stream_enc,
@@ -544,7 +544,7 @@ bool dp_set_dsc_pps_sdp(struct pipe_ctx *pipe_ctx, bool enable)
 		}
 	} else {
 		/* disable DSC PPS in stream encoder */
-		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(core_dc->ctx->dce_environment)) {
+		if (dc_is_dp_signal(stream->signal) && !IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 			pipe_ctx->stream_res.stream_enc->funcs->dp_set_dsc_pps_info_packet(
 						pipe_ctx->stream_res.stream_enc, false, NULL);
 		}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 89b5f86cd40b..a9412720c860 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -2747,9 +2747,8 @@ void resource_build_bit_depth_reduction_params(struct dc_stream_state *stream,
 
 enum dc_status dc_validate_stream(struct dc *dc, struct dc_stream_state *stream)
 {
-	struct dc  *core_dc = dc;
 	struct dc_link *link = stream->link;
-	struct timing_generator *tg = core_dc->res_pool->timing_generators[0];
+	struct timing_generator *tg = dc->res_pool->timing_generators[0];
 	enum dc_status res = DC_OK;
 
 	calculate_phy_pix_clks(stream);
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index d9afd834c146..70b7c1eb8a8f 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -271,7 +271,7 @@ bool dc_stream_set_cursor_attributes(
 	const struct dc_cursor_attributes *attributes)
 {
 	int i;
-	struct dc  *core_dc;
+	struct dc  *dc;
 	struct resource_context *res_ctx;
 	struct pipe_ctx *pipe_to_program = NULL;
 
@@ -289,8 +289,8 @@ bool dc_stream_set_cursor_attributes(
 		return false;
 	}
 
-	core_dc = stream->ctx->dc;
-	res_ctx = &core_dc->current_state->res_ctx;
+	dc = stream->ctx->dc;
+	res_ctx = &dc->current_state->res_ctx;
 	stream->cursor_attributes = *attributes;
 
 	for (i = 0; i < MAX_PIPES; i++) {
@@ -302,17 +302,17 @@ bool dc_stream_set_cursor_attributes(
 		if (!pipe_to_program) {
 			pipe_to_program = pipe_ctx;
 
-			delay_cursor_until_vupdate(pipe_ctx, core_dc);
-			core_dc->hwss.pipe_control_lock(core_dc, pipe_to_program, true);
+			delay_cursor_until_vupdate(pipe_ctx, dc);
+			dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
 		}
 
-		core_dc->hwss.set_cursor_attribute(pipe_ctx);
-		if (core_dc->hwss.set_cursor_sdr_white_level)
-			core_dc->hwss.set_cursor_sdr_white_level(pipe_ctx);
+		dc->hwss.set_cursor_attribute(pipe_ctx);
+		if (dc->hwss.set_cursor_sdr_white_level)
+			dc->hwss.set_cursor_sdr_white_level(pipe_ctx);
 	}
 
 	if (pipe_to_program)
-		core_dc->hwss.pipe_control_lock(core_dc, pipe_to_program, false);
+		dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
 
 	return true;
 }
@@ -322,7 +322,7 @@ bool dc_stream_set_cursor_position(
 	const struct dc_cursor_position *position)
 {
 	int i;
-	struct dc  *core_dc;
+	struct dc  *dc;
 	struct resource_context *res_ctx;
 	struct pipe_ctx *pipe_to_program = NULL;
 
@@ -336,8 +336,8 @@ bool dc_stream_set_cursor_position(
 		return false;
 	}
 
-	core_dc = stream->ctx->dc;
-	res_ctx = &core_dc->current_state->res_ctx;
+	dc = stream->ctx->dc;
+	res_ctx = &dc->current_state->res_ctx;
 	stream->cursor_position = *position;
 
 	for (i = 0; i < MAX_PIPES; i++) {
@@ -353,15 +353,15 @@ bool dc_stream_set_cursor_position(
 		if (!pipe_to_program) {
 			pipe_to_program = pipe_ctx;
 
-			delay_cursor_until_vupdate(pipe_ctx, core_dc);
-			core_dc->hwss.pipe_control_lock(core_dc, pipe_to_program, true);
+			delay_cursor_until_vupdate(pipe_ctx, dc);
+			dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
 		}
 
-		core_dc->hwss.set_cursor_position(pipe_ctx);
+		dc->hwss.set_cursor_position(pipe_ctx);
 	}
 
 	if (pipe_to_program)
-		core_dc->hwss.pipe_control_lock(core_dc, pipe_to_program, false);
+		dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
 
 	return true;
 }
@@ -482,9 +482,9 @@ bool dc_stream_remove_writeback(struct dc *dc,
 uint32_t dc_stream_get_vblank_counter(const struct dc_stream_state *stream)
 {
 	uint8_t i;
-	struct dc  *core_dc = stream->ctx->dc;
+	struct dc  *dc = stream->ctx->dc;
 	struct resource_context *res_ctx =
-		&core_dc->current_state->res_ctx;
+		&dc->current_state->res_ctx;
 
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct timing_generator *tg = res_ctx->pipe_ctx[i].stream_res.tg;
@@ -541,9 +541,9 @@ bool dc_stream_get_scanoutpos(const struct dc_stream_state *stream,
 {
 	uint8_t i;
 	bool ret = false;
-	struct dc  *core_dc = stream->ctx->dc;
+	struct dc  *dc = stream->ctx->dc;
 	struct resource_context *res_ctx =
-		&core_dc->current_state->res_ctx;
+		&dc->current_state->res_ctx;
 
 	for (i = 0; i < MAX_PIPES; i++) {
 		struct timing_generator *tg = res_ctx->pipe_ctx[i].stream_res.tg;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
index e60aff46d510..ea1229a3e2b2 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
@@ -108,16 +108,14 @@ void enable_surface_flip_reporting(struct dc_plane_state *plane_state,
 
 struct dc_plane_state *dc_create_plane_state(struct dc *dc)
 {
-	struct dc *core_dc = dc;
-
 	struct dc_plane_state *plane_state = kvzalloc(sizeof(*plane_state),
-						      GFP_KERNEL);
+							GFP_KERNEL);
 
 	if (NULL == plane_state)
 		return NULL;
 
 	kref_init(&plane_state->refcount);
-	dc_plane_construct(core_dc->ctx, plane_state);
+	dc_plane_construct(dc->ctx, plane_state);
 
 	return plane_state;
 }
@@ -137,7 +135,7 @@ const struct dc_plane_status *dc_plane_get_status(
 		const struct dc_plane_state *plane_state)
 {
 	const struct dc_plane_status *plane_status;
-	struct dc  *core_dc;
+	struct dc  *dc;
 	int i;
 
 	if (!plane_state ||
@@ -148,15 +146,15 @@ const struct dc_plane_status *dc_plane_get_status(
 	}
 
 	plane_status = &plane_state->status;
-	core_dc = plane_state->ctx->dc;
+	dc = plane_state->ctx->dc;
 
-	if (core_dc->current_state == NULL)
+	if (dc->current_state == NULL)
 		return NULL;
 
 	/* Find the current plane state and set its pending bit to false */
-	for (i = 0; i < core_dc->res_pool->pipe_count; i++) {
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe_ctx =
-				&core_dc->current_state->res_ctx.pipe_ctx[i];
+				&dc->current_state->res_ctx.pipe_ctx[i];
 
 		if (pipe_ctx->plane_state != plane_state)
 			continue;
@@ -166,14 +164,14 @@ const struct dc_plane_status *dc_plane_get_status(
 		break;
 	}
 
-	for (i = 0; i < core_dc->res_pool->pipe_count; i++) {
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		struct pipe_ctx *pipe_ctx =
-				&core_dc->current_state->res_ctx.pipe_ctx[i];
+				&dc->current_state->res_ctx.pipe_ctx[i];
 
 		if (pipe_ctx->plane_state != plane_state)
 			continue;
 
-		core_dc->hwss.update_pending_status(pipe_ctx);
+		dc->hwss.update_pending_status(pipe_ctx);
 	}
 
 	return plane_status;
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index 1dc065f1125c..2b2ee6893e25 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -945,15 +945,15 @@ void dce110_edp_backlight_control(
 void dce110_enable_audio_stream(struct pipe_ctx *pipe_ctx)
 {
 	/* notify audio driver for audio modes of monitor */
-	struct dc *core_dc;
+	struct dc *dc;
 	struct clk_mgr *clk_mgr;
 	unsigned int i, num_audio = 1;
 
 	if (!pipe_ctx->stream)
 		return;
 
-	core_dc = pipe_ctx->stream->ctx->dc;
-	clk_mgr = core_dc->clk_mgr;
+	dc = pipe_ctx->stream->ctx->dc;
+	clk_mgr = dc->clk_mgr;
 
 	if (pipe_ctx->stream_res.audio && pipe_ctx->stream_res.audio->enabled == true)
 		return;
@@ -961,7 +961,7 @@ void dce110_enable_audio_stream(struct pipe_ctx *pipe_ctx)
 	if (pipe_ctx->stream_res.audio) {
 		for (i = 0; i < MAX_PIPES; i++) {
 			/*current_state not updated yet*/
-			if (core_dc->current_state->res_ctx.pipe_ctx[i].stream_res.audio != NULL)
+			if (dc->current_state->res_ctx.pipe_ctx[i].stream_res.audio != NULL)
 				num_audio++;
 		}
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 251bb59c271a..bd6cdb6b38f6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1655,10 +1655,10 @@ void dcn10_enable_per_frame_crtc_position_reset(
 }
 
 /*static void print_rq_dlg_ttu(
-		struct dc *core_dc,
+		struct dc *dc,
 		struct pipe_ctx *pipe_ctx)
 {
-	DC_LOG_BANDWIDTH_CALCS(core_dc->ctx->logger,
+	DC_LOG_BANDWIDTH_CALCS(dc->ctx->logger,
 			"\n============== DML TTU Output parameters [%d] ==============\n"
 			"qos_level_low_wm: %d, \n"
 			"qos_level_high_wm: %d, \n"
@@ -1688,7 +1688,7 @@ void dcn10_enable_per_frame_crtc_position_reset(
 			pipe_ctx->ttu_regs.refcyc_per_req_delivery_pre_c
 			);
 
-	DC_LOG_BANDWIDTH_CALCS(core_dc->ctx->logger,
+	DC_LOG_BANDWIDTH_CALCS(dc->ctx->logger,
 			"\n============== DML DLG Output parameters [%d] ==============\n"
 			"refcyc_h_blank_end: %d, \n"
 			"dlg_vblank_end: %d, \n"
@@ -1723,7 +1723,7 @@ void dcn10_enable_per_frame_crtc_position_reset(
 			pipe_ctx->dlg_regs.refcyc_per_pte_group_nom_l
 			);
 
-	DC_LOG_BANDWIDTH_CALCS(core_dc->ctx->logger,
+	DC_LOG_BANDWIDTH_CALCS(dc->ctx->logger,
 			"\ndst_y_per_meta_row_nom_l: %d, \n"
 			"refcyc_per_meta_chunk_nom_l: %d, \n"
 			"refcyc_per_line_delivery_pre_l: %d, \n"
@@ -1753,7 +1753,7 @@ void dcn10_enable_per_frame_crtc_position_reset(
 			pipe_ctx->dlg_regs.refcyc_per_line_delivery_c
 			);
 
-	DC_LOG_BANDWIDTH_CALCS(core_dc->ctx->logger,
+	DC_LOG_BANDWIDTH_CALCS(dc->ctx->logger,
 			"\n============== DML RQ Output parameters [%d] ==============\n"
 			"chunk_size: %d \n"
 			"min_chunk_size: %d \n"
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c b/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
index 3d4461a70f7d..378cc11aa047 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
@@ -204,7 +204,7 @@ bool dce110_vblank_set(struct irq_service *irq_service,
 		       bool enable)
 {
 	struct dc_context *dc_ctx = irq_service->ctx;
-	struct dc *core_dc = irq_service->ctx->dc;
+	struct dc *dc = irq_service->ctx->dc;
 	enum dc_irq_source dal_irq_src =
 			dc_interrupt_to_irq_source(irq_service->ctx->dc,
 						   info->src_id,
@@ -212,7 +212,7 @@ bool dce110_vblank_set(struct irq_service *irq_service,
 	uint8_t pipe_offset = dal_irq_src - IRQ_TYPE_VBLANK;
 
 	struct timing_generator *tg =
-			core_dc->current_state->res_ctx.pipe_ctx[pipe_offset].stream_res.tg;
+			dc->current_state->res_ctx.pipe_ctx[pipe_offset].stream_res.tg;
 
 	if (enable) {
 		if (!tg || !tg->funcs->arm_vert_intr(tg, 2)) {
-- 
2.24.0


* [PATCH 03/51] drm/amd/display: add separate of private hwss functions
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
  2019-12-02 17:33 ` [PATCH 01/51] drm/amd/display: update sr and pstate latencies for Renoir sunpeng.li
  2019-12-02 17:33 ` [PATCH 02/51] drm/amd/display: rename core_dc to dc sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 04/51] drm/amd/display: Fix Dali clk mgr construct sunpeng.li
                   ` (47 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Anthony Koo

From: Anthony Koo <Anthony.Koo@amd.com>

[Why]
Some function pointers in the hwss function pointer table are
meant to be hw sequencer entry points to be called from dc.

However, some of those function pointers are not meant to
be entry points; they are instead used as a code reuse/inheritance
tool, called directly by other hwss functions rather than by dc.

Therefore, we want a clearer separation between the functions we
consider to be interface functions and the functions we use
within hwss.

[How]
DC interface functions will be stored in:
    struct hw_sequencer_funcs
Functions used within HWSS will be stored in:
    struct hwseq_private_funcs
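
A rough sketch of how call sites look under this split, using function
names taken from the dce110 hunks below (illustrative only):

    /* DC-facing entry points stay on the public hwss table. */
    dc->hwss.update_info_frame(pipe_ctx);

    /* Helpers shared between hwss implementations move to the private
     * table, reached through the hw sequencer object itself.
     */
    struct dce_hwseq *hws = dc->hwseq;

    if (hws->funcs.setup_vupdate_interrupt)
            hws->funcs.setup_vupdate_interrupt(dc, pipe_ctx);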

Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc.c      |   6 +
 .../gpu/drm/amd/display/dc/core/dc_debug.c    |   1 -
 .../gpu/drm/amd/display/dc/core/dc_stream.c   |   3 -
 .../gpu/drm/amd/display/dc/dce/dce_hwseq.c    |   2 +-
 .../gpu/drm/amd/display/dc/dce/dce_hwseq.h    |   6 +-
 .../display/dc/dce100/dce100_hw_sequencer.c   |   3 +-
 .../display/dc/dce100/dce100_hw_sequencer.h   |   1 +
 .../display/dc/dce110/dce110_hw_sequencer.c   |  77 ++--
 .../display/dc/dce110/dce110_hw_sequencer.h   |   1 +
 .../amd/display/dc/dce110/dce110_resource.c   |   3 +-
 .../display/dc/dce112/dce112_hw_sequencer.c   |   2 +-
 .../display/dc/dce112/dce112_hw_sequencer.h   |   1 +
 .../display/dc/dce120/dce120_hw_sequencer.c   |   2 +-
 .../display/dc/dce120/dce120_hw_sequencer.h   |   1 +
 .../amd/display/dc/dce80/dce80_hw_sequencer.c |   2 +-
 .../amd/display/dc/dce80/dce80_hw_sequencer.h |   1 +
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 124 +++---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.h |   1 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_init.c |  38 +-
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    |  71 ++--
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.h    |   3 +
 .../gpu/drm/amd/display/dc/dcn20/dcn20_init.c |  54 +--
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.c    |   1 +
 .../drm/amd/display/dc/dcn21/dcn21_hwseq.h    |   2 +
 .../gpu/drm/amd/display/dc/dcn21/dcn21_init.c |  63 +--
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h | 370 +++++-------------
 .../amd/display/dc/inc/hw_sequencer_private.h | 156 ++++++++
 27 files changed, 525 insertions(+), 470 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 2645d20e8c4c..e384c143bb58 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -2004,6 +2004,12 @@ static void commit_planes_do_stream_update(struct dc *dc,
 				dc->hwss.update_info_frame(pipe_ctx);
 			}
 
+			if (stream_update->hdr_static_metadata &&
+					stream->use_dynamic_meta &&
+					dc->hwss.set_dmdata_attributes &&
+					pipe_ctx->stream->dmdata_address.quad_part != 0)
+				dc->hwss.set_dmdata_attributes(pipe_ctx);
+
 			if (stream_update->gamut_remap)
 				dc_stream_set_gamut_remap(dc, stream);
 
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
index bf13cffed703..502ed3c7959d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
@@ -33,7 +33,6 @@
 
 #include "core_status.h"
 #include "core_types.h"
-#include "hw_sequencer.h"
 
 #include "resource.h"
 
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index 70b7c1eb8a8f..b43a4b115fd8 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -33,9 +33,6 @@
 #include "resource.h"
 #include "ipp.h"
 #include "timing_generator.h"
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-#include "dcn10/dcn10_hw_sequencer.h"
-#endif
 
 #define DC_LOGGER dc->ctx->logger
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.c b/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.c
index 0275d6d60da4..e1c5839a80dc 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.c
@@ -25,7 +25,7 @@
 
 #include "dce_hwseq.h"
 #include "reg_helper.h"
-#include "hw_sequencer.h"
+#include "hw_sequencer_private.h"
 #include "core_types.h"
 
 #define CTX \
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h b/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
index bff03a68aa01..c5aa1f48593a 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
@@ -25,7 +25,7 @@
 #ifndef __DCE_HWSEQ_H__
 #define __DCE_HWSEQ_H__
 
-#include "hw_sequencer.h"
+#include "dc_types.h"
 
 #define BL_REG_LIST()\
 	SR(LVTMA_PWRSEQ_CNTL), \
@@ -811,6 +811,10 @@ enum blnd_mode {
 	BLND_MODE_BLENDING,/* Alpha blending - blend 'current' and 'other' */
 };
 
+struct dce_hwseq;
+struct pipe_ctx;
+struct clock_source;
+
 void dce_enable_fe_clock(struct dce_hwseq *hwss,
 		unsigned int inst, bool enable);
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.c
index 799d36299c9b..753cb8edd996 100644
--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.c
@@ -26,7 +26,6 @@
 #include "dc.h"
 #include "core_types.h"
 #include "clk_mgr.h"
-#include "hw_sequencer.h"
 #include "dce100_hw_sequencer.h"
 #include "resource.h"
 
@@ -136,7 +135,7 @@ void dce100_hw_sequencer_construct(struct dc *dc)
 {
 	dce110_hw_sequencer_construct(dc);
 
-	dc->hwss.enable_display_power_gating = dce100_enable_display_power_gating;
+	dc->hwseq->funcs.enable_display_power_gating = dce100_enable_display_power_gating;
 	dc->hwss.prepare_bandwidth = dce100_prepare_bandwidth;
 	dc->hwss.optimize_bandwidth = dce100_optimize_bandwidth;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.h
index a6b80fdaa666..34518da20009 100644
--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.h
@@ -27,6 +27,7 @@
 #define __DC_HWSS_DCE100_H__
 
 #include "core_types.h"
+#include "hw_sequencer_private.h"
 
 struct dc;
 struct dc_state;
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index 2b2ee6893e25..4939cf3b316f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -653,10 +653,9 @@ void dce110_enable_stream(struct pipe_ctx *pipe_ctx)
 {
 	enum dc_lane_count lane_count =
 		pipe_ctx->stream->link->cur_link_settings.lane_count;
-
 	struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
 	struct dc_link *link = pipe_ctx->stream->link;
-
+	const struct dc *dc = link->dc;
 
 	uint32_t active_total_with_borders;
 	uint32_t early_control = 0;
@@ -669,7 +668,7 @@ void dce110_enable_stream(struct pipe_ctx *pipe_ctx)
 	link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
 						    pipe_ctx->stream_res.stream_enc->id, true);
 
-	link->dc->hwss.update_info_frame(pipe_ctx);
+	dc->hwss.update_info_frame(pipe_ctx);
 
 	/* enable early control to avoid corruption on DP monitor*/
 	active_total_with_borders =
@@ -1049,6 +1048,7 @@ void dce110_unblank_stream(struct pipe_ctx *pipe_ctx,
 	struct encoder_unblank_param params = { { 0 } };
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->link;
+	struct dce_hwseq *hws = link->dc->hwseq;
 
 	/* only 3 items below are used by unblank */
 	params.timing = pipe_ctx->stream->timing;
@@ -1058,7 +1058,7 @@ void dce110_unblank_stream(struct pipe_ctx *pipe_ctx,
 		pipe_ctx->stream_res.stream_enc->funcs->dp_unblank(pipe_ctx->stream_res.stream_enc, &params);
 
 	if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
-		link->dc->hwss.edp_backlight_control(link, true);
+		hws->funcs.edp_backlight_control(link, true);
 	}
 }
 
@@ -1066,9 +1066,10 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
 {
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->link;
+	struct dce_hwseq *hws = link->dc->hwseq;
 
 	if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
-		link->dc->hwss.edp_backlight_control(link, false);
+		hws->funcs.edp_backlight_control(link, false);
 		dc_link_set_abm_disable(link);
 	}
 
@@ -1325,9 +1326,10 @@ static enum dc_status apply_single_controller_ctx_to_hw(
 	struct drr_params params = {0};
 	unsigned int event_triggers = 0;
 	struct pipe_ctx *odm_pipe = pipe_ctx->next_odm_pipe;
+	struct dce_hwseq *hws = dc->hwseq;
 
-	if (dc->hwss.disable_stream_gating) {
-		dc->hwss.disable_stream_gating(dc, pipe_ctx);
+	if (hws->funcs.disable_stream_gating) {
+		hws->funcs.disable_stream_gating(dc, pipe_ctx);
 	}
 
 	if (pipe_ctx->stream_res.audio != NULL) {
@@ -1357,10 +1359,10 @@ static enum dc_status apply_single_controller_ctx_to_hw(
 	/*  */
 	/* Do not touch stream timing on seamless boot optimization. */
 	if (!pipe_ctx->stream->apply_seamless_boot_optimization)
-		dc->hwss.enable_stream_timing(pipe_ctx, context, dc);
+		hws->funcs.enable_stream_timing(pipe_ctx, context, dc);
 
-	if (dc->hwss.setup_vupdate_interrupt)
-		dc->hwss.setup_vupdate_interrupt(dc, pipe_ctx);
+	if (hws->funcs.setup_vupdate_interrupt)
+		hws->funcs.setup_vupdate_interrupt(dc, pipe_ctx);
 
 	params.vertical_total_min = stream->adjust.v_total_min;
 	params.vertical_total_max = stream->adjust.v_total_max;
@@ -1553,9 +1555,10 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
 	bool can_apply_edp_fast_boot = false;
 	bool can_apply_seamless_boot = false;
 	bool keep_edp_vdd_on = false;
+	struct dce_hwseq *hws = dc->hwseq;
 
-	if (dc->hwss.init_pipes)
-		dc->hwss.init_pipes(dc, context);
+	if (hws->funcs.init_pipes)
+		hws->funcs.init_pipes(dc, context);
 
 	edp_stream = get_edp_stream(context);
 
@@ -1592,7 +1595,7 @@ void dce110_enable_accelerated_mode(struct dc *dc, struct dc_state *context)
 	if (!can_apply_edp_fast_boot && !can_apply_seamless_boot) {
 		if (edp_link_with_sink && !keep_edp_vdd_on) {
 			/*turn off backlight before DP_blank and encoder powered down*/
-			dc->hwss.edp_backlight_control(edp_link_with_sink, false);
+			hws->funcs.edp_backlight_control(edp_link_with_sink, false);
 		}
 		/*resume from S3, no vbios posting, no need to power down again*/
 		power_down_all_hw_blocks(dc);
@@ -2007,13 +2010,14 @@ enum dc_status dce110_apply_ctx_to_hw(
 		struct dc *dc,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct dc_bios *dcb = dc->ctx->dc_bios;
 	enum dc_status status;
 	int i;
 
 	/* Reset old context */
 	/* look up the targets that have been removed since last commit */
-	dc->hwss.reset_hw_ctx_wrap(dc, context);
+	hws->funcs.reset_hw_ctx_wrap(dc, context);
 
 	/* Skip applying if no targets */
 	if (context->stream_count <= 0)
@@ -2038,7 +2042,7 @@ enum dc_status dce110_apply_ctx_to_hw(
 			continue;
 		}
 
-		dc->hwss.enable_display_power_gating(
+		hws->funcs.enable_display_power_gating(
 				dc, i, dc->ctx->dc_bios,
 				PIPE_GATING_CONTROL_DISABLE);
 	}
@@ -2347,19 +2351,20 @@ static void init_hw(struct dc *dc)
 	struct transform *xfm;
 	struct abm *abm;
 	struct dmcu *dmcu;
+	struct dce_hwseq *hws = dc->hwseq;
 
 	bp = dc->ctx->dc_bios;
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		xfm = dc->res_pool->transforms[i];
 		xfm->funcs->transform_reset(xfm);
 
-		dc->hwss.enable_display_power_gating(
+		hws->funcs.enable_display_power_gating(
 				dc, i, bp,
 				PIPE_GATING_CONTROL_INIT);
-		dc->hwss.enable_display_power_gating(
+		hws->funcs.enable_display_power_gating(
 				dc, i, bp,
 				PIPE_GATING_CONTROL_DISABLE);
-		dc->hwss.enable_display_pipe_clock_gating(
+		hws->funcs.enable_display_pipe_clock_gating(
 			dc->ctx,
 			true);
 	}
@@ -2445,6 +2450,8 @@ static void dce110_program_front_end_for_pipe(
 	struct xfm_grph_csc_adjustment adjust;
 	struct out_csc_color_matrix tbl_entry;
 	unsigned int i;
+	struct dce_hwseq *hws = dc->hwseq;
+
 	DC_LOGGER_INIT();
 	memset(&tbl_entry, 0, sizeof(tbl_entry));
 
@@ -2503,10 +2510,10 @@ static void dce110_program_front_end_for_pipe(
 	if (pipe_ctx->plane_state->update_flags.bits.full_update ||
 			pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change ||
 			pipe_ctx->plane_state->update_flags.bits.gamma_change)
-		dc->hwss.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
+		hws->funcs.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
 
 	if (pipe_ctx->plane_state->update_flags.bits.full_update)
-		dc->hwss.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
 
 	DC_LOG_SURFACE(
 			"Pipe:%d %p: addr hi:0x%x, "
@@ -2609,6 +2616,7 @@ static void dce110_apply_ctx_for_surface(
 
 static void dce110_power_down_fe(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	int fe_idx = pipe_ctx->plane_res.mi ?
 		pipe_ctx->plane_res.mi->inst : pipe_ctx->pipe_idx;
 
@@ -2616,7 +2624,7 @@ static void dce110_power_down_fe(struct dc *dc, struct pipe_ctx *pipe_ctx)
 	if (dc->current_state->res_ctx.pipe_ctx[fe_idx].stream)
 		return;
 
-	dc->hwss.enable_display_power_gating(
+	hws->funcs.enable_display_power_gating(
 		dc, fe_idx, dc->ctx->dc_bios, PIPE_GATING_CONTROL_ENABLE);
 
 	dc->res_pool->transforms[fe_idx]->funcs->transform_reset(
@@ -2705,14 +2713,10 @@ static const struct hw_sequencer_funcs dce110_funcs = {
 	.program_gamut_remap = program_gamut_remap,
 	.program_output_csc = program_output_csc,
 	.init_hw = init_hw,
-	.init_pipes = init_pipes,
 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
 	.apply_ctx_for_surface = dce110_apply_ctx_for_surface,
 	.update_plane_addr = update_plane_addr,
 	.update_pending_status = dce110_update_pending_status,
-	.set_input_transfer_func = dce110_set_input_transfer_func,
-	.set_output_transfer_func = dce110_set_output_transfer_func,
-	.power_down = dce110_power_down,
 	.enable_accelerated_mode = dce110_enable_accelerated_mode,
 	.enable_timing_synchronization = dce110_enable_timing_synchronization,
 	.enable_per_frame_crtc_position_reset = dce110_enable_per_frame_crtc_position_reset,
@@ -2723,8 +2727,6 @@ static const struct hw_sequencer_funcs dce110_funcs = {
 	.blank_stream = dce110_blank_stream,
 	.enable_audio_stream = dce110_enable_audio_stream,
 	.disable_audio_stream = dce110_disable_audio_stream,
-	.enable_display_pipe_clock_gating = enable_display_pipe_clock_gating,
-	.enable_display_power_gating = dce110_enable_display_power_gating,
 	.disable_plane = dce110_power_down_fe,
 	.pipe_control_lock = dce_pipe_control_lock,
 	.prepare_bandwidth = dce110_prepare_bandwidth,
@@ -2732,22 +2734,33 @@ static const struct hw_sequencer_funcs dce110_funcs = {
 	.set_drr = set_drr,
 	.get_position = get_position,
 	.set_static_screen_control = set_static_screen_control,
-	.reset_hw_ctx_wrap = dce110_reset_hw_ctx_wrap,
-	.enable_stream_timing = dce110_enable_stream_timing,
-	.disable_stream_gating = NULL,
-	.enable_stream_gating = NULL,
 	.setup_stereo = NULL,
 	.set_avmute = dce110_set_avmute,
 	.wait_for_mpcc_disconnect = dce110_wait_for_mpcc_disconnect,
-	.edp_backlight_control = dce110_edp_backlight_control,
 	.edp_power_control = dce110_edp_power_control,
 	.edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready,
 	.set_cursor_position = dce110_set_cursor_position,
 	.set_cursor_attribute = dce110_set_cursor_attribute
 };
 
+static const struct hwseq_private_funcs dce110_private_funcs = {
+	.init_pipes = init_pipes,
+	.update_plane_addr = update_plane_addr,
+	.set_input_transfer_func = dce110_set_input_transfer_func,
+	.set_output_transfer_func = dce110_set_output_transfer_func,
+	.power_down = dce110_power_down,
+	.enable_display_pipe_clock_gating = enable_display_pipe_clock_gating,
+	.enable_display_power_gating = dce110_enable_display_power_gating,
+	.reset_hw_ctx_wrap = dce110_reset_hw_ctx_wrap,
+	.enable_stream_timing = dce110_enable_stream_timing,
+	.disable_stream_gating = NULL,
+	.enable_stream_gating = NULL,
+	.edp_backlight_control = dce110_edp_backlight_control,
+};
+
 void dce110_hw_sequencer_construct(struct dc *dc)
 {
 	dc->hwss = dce110_funcs;
+	dc->hwseq->funcs = dce110_private_funcs;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
index c639e1680b7b..26a9c14a58b1 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
@@ -27,6 +27,7 @@
 #define __DC_HWSS_DCE110_H__
 
 #include "core_types.h"
+#include "hw_sequencer_private.h"
 
 struct dc;
 struct dc_state;
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
index a535e2cda694..bf14e9ab040c 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
@@ -1097,6 +1097,7 @@ static struct pipe_ctx *dce110_acquire_underlay(
 		struct dc_stream_state *stream)
 {
 	struct dc *dc = stream->ctx->dc;
+	struct dce_hwseq *hws = dc->hwseq;
 	struct resource_context *res_ctx = &context->res_ctx;
 	unsigned int underlay_idx = pool->underlay_pipe_index;
 	struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[underlay_idx];
@@ -1117,7 +1118,7 @@ static struct pipe_ctx *dce110_acquire_underlay(
 		struct tg_color black_color = {0};
 		struct dc_bios *dcb = dc->ctx->dc_bios;
 
-		dc->hwss.enable_display_power_gating(
+		hws->funcs.enable_display_power_gating(
 				dc,
 				pipe_ctx->stream_res.tg->inst,
 				dcb, PIPE_GATING_CONTROL_DISABLE);
diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c
index 1e4a7c13f0ed..19873ee1f78d 100644
--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.c
@@ -158,6 +158,6 @@ void dce112_hw_sequencer_construct(struct dc *dc)
 	 * structure
 	 */
 	dce110_hw_sequencer_construct(dc);
-	dc->hwss.enable_display_power_gating = dce112_enable_display_power_gating;
+	dc->hwseq->funcs.enable_display_power_gating = dce112_enable_display_power_gating;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h
index e646f4a37fa2..943f1b2c5b2f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_hw_sequencer.h
@@ -27,6 +27,7 @@
 #define __DC_HWSS_DCE112_H__
 
 #include "core_types.h"
+#include "hw_sequencer_private.h"
 
 struct dc;
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
index 1ca30928025e..66a13aa39c95 100644
--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
@@ -265,7 +265,7 @@ void dce120_hw_sequencer_construct(struct dc *dc)
 	 * structure
 	 */
 	dce110_hw_sequencer_construct(dc);
-	dc->hwss.enable_display_power_gating = dce120_enable_display_power_gating;
+	dc->hwseq->funcs.enable_display_power_gating = dce120_enable_display_power_gating;
 	dc->hwss.update_dchub = dce120_update_dchub;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
index c51afbd0b012..bc024534732f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
@@ -27,6 +27,7 @@
 #define __DC_HWSS_DCE120_H__
 
 #include "core_types.h"
+#include "hw_sequencer_private.h"
 
 struct dc;
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.c
index c4543178ba20..893261c81854 100644
--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.c
@@ -74,7 +74,7 @@ void dce80_hw_sequencer_construct(struct dc *dc)
 {
 	dce110_hw_sequencer_construct(dc);
 
-	dc->hwss.enable_display_power_gating = dce100_enable_display_power_gating;
+	dc->hwseq->funcs.enable_display_power_gating = dce100_enable_display_power_gating;
 	dc->hwss.pipe_control_lock = dce_pipe_control_lock;
 	dc->hwss.prepare_bandwidth = dce100_prepare_bandwidth;
 	dc->hwss.optimize_bandwidth = dce100_optimize_bandwidth;
diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.h
index 7a1b31def66f..e43af832d00c 100644
--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.h
@@ -27,6 +27,7 @@
 #define __DC_HWSS_DCE80_H__
 
 #include "core_types.h"
+#include "hw_sequencer_private.h"
 
 struct dc;
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index bd6cdb6b38f6..2b3081ee0e07 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -642,8 +642,8 @@ static void power_on_plane(
 	if (REG(DC_IP_REQUEST_CNTL)) {
 		REG_SET(DC_IP_REQUEST_CNTL, 0,
 				IP_REQUEST_EN, 1);
-		hws->ctx->dc->hwss.dpp_pg_control(hws, plane_id, true);
-		hws->ctx->dc->hwss.hubp_pg_control(hws, plane_id, true);
+		hws->funcs.dpp_pg_control(hws, plane_id, true);
+		hws->funcs.hubp_pg_control(hws, plane_id, true);
 		REG_SET(DC_IP_REQUEST_CNTL, 0,
 				IP_REQUEST_EN, 0);
 		DC_LOG_DEBUG(
@@ -664,7 +664,7 @@ static void undo_DEGVIDCN10_253_wa(struct dc *dc)
 	REG_SET(DC_IP_REQUEST_CNTL, 0,
 			IP_REQUEST_EN, 1);
 
-	dc->hwss.hubp_pg_control(hws, 0, false);
+	hws->funcs.hubp_pg_control(hws, 0, false);
 	REG_SET(DC_IP_REQUEST_CNTL, 0,
 			IP_REQUEST_EN, 0);
 
@@ -693,7 +693,7 @@ static void apply_DEGVIDCN10_253_wa(struct dc *dc)
 	REG_SET(DC_IP_REQUEST_CNTL, 0,
 			IP_REQUEST_EN, 1);
 
-	dc->hwss.hubp_pg_control(hws, 0, true);
+	hws->funcs.hubp_pg_control(hws, 0, true);
 	REG_SET(DC_IP_REQUEST_CNTL, 0,
 			IP_REQUEST_EN, 0);
 
@@ -703,12 +703,14 @@ static void apply_DEGVIDCN10_253_wa(struct dc *dc)
 
 void dcn10_bios_golden_init(struct dc *dc)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct dc_bios *bp = dc->ctx->dc_bios;
 	int i;
 	bool allow_self_fresh_force_enable = true;
 
-	if (dc->hwss.s0i3_golden_init_wa && dc->hwss.s0i3_golden_init_wa(dc))
+	if (hws->funcs.s0i3_golden_init_wa && hws->funcs.s0i3_golden_init_wa(dc))
 		return;
+
 	if (dc->res_pool->hubbub->funcs->is_allow_self_refresh_enabled)
 		allow_self_fresh_force_enable =
 				dc->res_pool->hubbub->funcs->is_allow_self_refresh_enabled(dc->res_pool->hubbub);
@@ -1015,6 +1017,7 @@ void dcn10_verify_allow_pstate_change_high(struct dc *dc)
 /* trigger HW to start disconnect plane from stream on the next vsync */
 void dcn10_plane_atomic_disconnect(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	int dpp_id = pipe_ctx->plane_res.dpp->inst;
 	struct mpc *mpc = dc->res_pool->mpc;
@@ -1039,7 +1042,7 @@ void dcn10_plane_atomic_disconnect(struct dc *dc, struct pipe_ctx *pipe_ctx)
 		hubp->funcs->hubp_disconnect(hubp);
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 }
 
 void dcn10_plane_atomic_power_down(struct dc *dc,
@@ -1052,8 +1055,8 @@ void dcn10_plane_atomic_power_down(struct dc *dc,
 	if (REG(DC_IP_REQUEST_CNTL)) {
 		REG_SET(DC_IP_REQUEST_CNTL, 0,
 				IP_REQUEST_EN, 1);
-		dc->hwss.dpp_pg_control(hws, dpp->inst, false);
-		dc->hwss.hubp_pg_control(hws, hubp->inst, false);
+		hws->funcs.dpp_pg_control(hws, dpp->inst, false);
+		hws->funcs.hubp_pg_control(hws, hubp->inst, false);
 		dpp->funcs->dpp_reset(dpp);
 		REG_SET(DC_IP_REQUEST_CNTL, 0,
 				IP_REQUEST_EN, 0);
@@ -1067,6 +1070,7 @@ void dcn10_plane_atomic_power_down(struct dc *dc,
  */
 void dcn10_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
 	int opp_id = hubp->opp_id;
@@ -1085,7 +1089,7 @@ void dcn10_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
 	hubp->power_gated = true;
 	dc->optimized_required = false; /* We're powering off, no need to optimize */
 
-	dc->hwss.plane_atomic_power_down(dc,
+	hws->funcs.plane_atomic_power_down(dc,
 			pipe_ctx->plane_res.dpp,
 			pipe_ctx->plane_res.hubp);
 
@@ -1099,12 +1103,13 @@ void dcn10_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
 
 void dcn10_disable_plane(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	DC_LOGGER_INIT(dc->ctx->logger);
 
 	if (!pipe_ctx->plane_res.hubp || pipe_ctx->plane_res.hubp->power_gated)
 		return;
 
-	dc->hwss.plane_atomic_disable(dc, pipe_ctx);
+	hws->funcs.plane_atomic_disable(dc, pipe_ctx);
 
 	apply_DEGVIDCN10_253_wa(dc);
 
@@ -1115,6 +1120,7 @@ void dcn10_disable_plane(struct dc *dc, struct pipe_ctx *pipe_ctx)
 void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
 {
 	int i;
+	struct dce_hwseq *hws = dc->hwseq;
 	bool can_apply_seamless_boot = false;
 
 	for (i = 0; i < context->stream_count; i++) {
@@ -1139,8 +1145,8 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
 		 * command table.
 		 */
 		if (tg->funcs->is_tg_enabled(tg)) {
-			if (dc->hwss.init_blank != NULL) {
-				dc->hwss.init_blank(dc, tg);
+			if (hws->funcs.init_blank != NULL) {
+				hws->funcs.init_blank(dc, tg);
 				tg->funcs->lock(tg);
 			} else {
 				tg->funcs->lock(tg);
@@ -1197,7 +1203,7 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
 		dc->res_pool->opps[i]->mpcc_disconnect_pending[pipe_ctx->plane_res.mpcc_inst] = true;
 		pipe_ctx->stream_res.opp = dc->res_pool->opps[i];
 
-		dc->hwss.plane_atomic_disconnect(dc, pipe_ctx);
+		hws->funcs.plane_atomic_disconnect(dc, pipe_ctx);
 
 		if (tg->funcs->is_tg_enabled(tg))
 			tg->funcs->unlock(tg);
@@ -1243,15 +1249,15 @@ void dcn10_init_hw(struct dc *dc)
 		}
 
 		//Enable ability to power gate / don't force power on permanently
-		dc->hwss.enable_power_gating_plane(hws, true);
+		hws->funcs.enable_power_gating_plane(hws, true);
 
 		return;
 	}
 
 	if (!dcb->funcs->is_accelerated_mode(dcb))
-		dc->hwss.disable_vga(dc->hwseq);
+		hws->funcs.disable_vga(dc->hwseq);
 
-	dc->hwss.bios_golden_init(dc);
+	hws->funcs.bios_golden_init(dc);
 	if (dc->ctx->dc_bios->fw_info_valid) {
 		res_pool->ref_clocks.xtalin_clock_inKhz =
 				dc->ctx->dc_bios->fw_info.pll_info.crystal_frequency;
@@ -1294,8 +1300,8 @@ void dcn10_init_hw(struct dc *dc)
 
 	/* Power gate DSCs */
 	for (i = 0; i < res_pool->res_cap->num_dsc; i++)
-		if (dc->hwss.dsc_pg_control != NULL)
-			dc->hwss.dsc_pg_control(hws, res_pool->dscs[i]->inst, false);
+		if (hws->funcs.dsc_pg_control != NULL)
+			hws->funcs.dsc_pg_control(hws, res_pool->dscs[i]->inst, false);
 
 	/* If taking control over from VBIOS, we may want to optimize our first
 	 * mode set, so we need to skip powering down pipes until we know which
@@ -1304,7 +1310,7 @@ void dcn10_init_hw(struct dc *dc)
 	 * everything down.
 	 */
 	if (dcb->funcs->is_accelerated_mode(dcb) || dc->config.power_down_display_on_boot) {
-		dc->hwss.init_pipes(dc, dc->current_state);
+		hws->funcs.init_pipes(dc, dc->current_state);
 	}
 
 	for (i = 0; i < res_pool->audio_count; i++) {
@@ -1336,7 +1342,7 @@ void dcn10_init_hw(struct dc *dc)
 		REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
 	}
 
-	dc->hwss.enable_power_gating_plane(dc->hwseq, true);
+	hws->funcs.enable_power_gating_plane(dc->hwseq, true);
 
 	if (dc->clk_mgr->funcs->notify_wm_ranges)
 		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
@@ -1348,6 +1354,7 @@ void dcn10_reset_hw_ctx_wrap(
 		struct dc_state *context)
 {
 	int i;
+	struct dce_hwseq *hws = dc->hwseq;
 
 	/* Reset Back End*/
 	for (i = dc->res_pool->pipe_count - 1; i >= 0 ; i--) {
@@ -1366,8 +1373,8 @@ void dcn10_reset_hw_ctx_wrap(
 			struct clock_source *old_clk = pipe_ctx_old->clock_source;
 
 			dcn10_reset_back_end_for_pipe(dc, pipe_ctx_old, dc->current_state);
-			if (dc->hwss.enable_stream_gating)
-				dc->hwss.enable_stream_gating(dc, pipe_ctx);
+			if (hws->funcs.enable_stream_gating)
+				hws->funcs.enable_stream_gating(dc, pipe_ctx);
 			if (old_clk)
 				old_clk->funcs->cs_power_down(old_clk);
 		}
@@ -1545,6 +1552,8 @@ void dcn10_pipe_control_lock(
 	struct pipe_ctx *pipe,
 	bool lock)
 {
+	struct dce_hwseq *hws = dc->hwseq;
+
 	/* use TG master update lock to lock everything on the TG
 	 * therefore only top pipe need to lock
 	 */
@@ -1552,7 +1561,7 @@ void dcn10_pipe_control_lock(
 		return;
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 
 	if (lock)
 		pipe->stream_res.tg->funcs->lock(pipe->stream_res.tg);
@@ -1560,7 +1569,7 @@ void dcn10_pipe_control_lock(
 		pipe->stream_res.tg->funcs->unlock(pipe->stream_res.tg);
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 }
 
 static bool wait_for_reset_trigger_to_occur(
@@ -1868,7 +1877,7 @@ static void dcn10_enable_plane(
 	struct dce_hwseq *hws = dc->hwseq;
 
 	if (dc->debug.sanity_checks) {
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 	}
 
 	undo_DEGVIDCN10_253_wa(dc);
@@ -1925,7 +1934,7 @@ static void dcn10_enable_plane(
 		dcn10_program_pte_vm(hws, pipe_ctx->plane_res.hubp);
 
 	if (dc->debug.sanity_checks) {
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 	}
 }
 
@@ -2102,6 +2111,7 @@ static void dcn10_update_dpp(struct dpp *dpp, struct dc_plane_state *plane_state
 
 void dcn10_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	struct mpcc_blnd_cfg blnd_cfg = {{0}};
 	bool per_pixel_alpha = pipe_ctx->plane_state->per_pixel_alpha && pipe_ctx->bottom_pipe;
@@ -2111,10 +2121,10 @@ void dcn10_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
 	struct mpc_tree *mpc_tree_params = &(pipe_ctx->stream_res.opp->mpc_tree_params);
 
 	if (dc->debug.visual_confirm == VISUAL_CONFIRM_HDR) {
-		dc->hwss.get_hdr_visual_confirm_color(
+		hws->funcs.get_hdr_visual_confirm_color(
 				pipe_ctx, &blnd_cfg.black_color);
 	} else if (dc->debug.visual_confirm == VISUAL_CONFIRM_SURFACE) {
-		dc->hwss.get_surface_visual_confirm_color(
+		hws->funcs.get_surface_visual_confirm_color(
 				pipe_ctx, &blnd_cfg.black_color);
 	} else {
 		color_space_to_black_color(
@@ -2201,6 +2211,7 @@ static void dcn10_update_dchubp_dpp(
 	struct pipe_ctx *pipe_ctx,
 	struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
 	struct dc_plane_state *plane_state = pipe_ctx->plane_state;
@@ -2259,7 +2270,7 @@ static void dcn10_update_dchubp_dpp(
 	if (plane_state->update_flags.bits.full_update ||
 		plane_state->update_flags.bits.per_pixel_alpha_change ||
 		plane_state->update_flags.bits.global_alpha_change)
-		dc->hwss.update_mpcc(dc, pipe_ctx);
+		hws->funcs.update_mpcc(dc, pipe_ctx);
 
 	if (plane_state->update_flags.bits.full_update ||
 		plane_state->update_flags.bits.per_pixel_alpha_change ||
@@ -2319,7 +2330,7 @@ static void dcn10_update_dchubp_dpp(
 
 	hubp->power_gated = false;
 
-	dc->hwss.update_plane_addr(dc, pipe_ctx);
+	hws->funcs.update_plane_addr(dc, pipe_ctx);
 
 	if (is_pipe_tree_visible(pipe_ctx))
 		hubp->funcs->set_blank(hubp, false);
@@ -2395,17 +2406,19 @@ void dcn10_program_pipe(
 		struct pipe_ctx *pipe_ctx,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
+
 	if (pipe_ctx->plane_state->update_flags.bits.full_update)
 		dcn10_enable_plane(dc, pipe_ctx, context);
 
 	dcn10_update_dchubp_dpp(dc, pipe_ctx, context);
 
-	dc->hwss.set_hdr_multiplier(pipe_ctx);
+	hws->funcs.set_hdr_multiplier(pipe_ctx);
 
 	if (pipe_ctx->plane_state->update_flags.bits.full_update ||
 			pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change ||
 			pipe_ctx->plane_state->update_flags.bits.gamma_change)
-		dc->hwss.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
+		hws->funcs.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
 
 	/* dcn10_translate_regamma_to_hw_format takes 750us to finish
 	 * only do gamma programming for full update.
@@ -2414,7 +2427,7 @@ void dcn10_program_pipe(
 	 * doing heavy calculation and programming
 	 */
 	if (pipe_ctx->plane_state->update_flags.bits.full_update)
-		dc->hwss.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
 }
 
 static void dcn10_program_all_pipe_in_tree(
@@ -2422,6 +2435,8 @@ static void dcn10_program_all_pipe_in_tree(
 		struct pipe_ctx *pipe_ctx,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
+
 	if (pipe_ctx->top_pipe == NULL) {
 		bool blank = !is_pipe_tree_visible(pipe_ctx);
 
@@ -2435,14 +2450,14 @@ static void dcn10_program_all_pipe_in_tree(
 		pipe_ctx->stream_res.tg->funcs->set_vtg_params(
 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
 
-		if (dc->hwss.setup_vupdate_interrupt)
-			dc->hwss.setup_vupdate_interrupt(dc, pipe_ctx);
+		if (hws->funcs.setup_vupdate_interrupt)
+			hws->funcs.setup_vupdate_interrupt(dc, pipe_ctx);
 
-		dc->hwss.blank_pixel_data(dc, pipe_ctx, blank);
+		hws->funcs.blank_pixel_data(dc, pipe_ctx, blank);
 	}
 
 	if (pipe_ctx->plane_state != NULL)
-		dc->hwss.program_pipe(dc, pipe_ctx, context);
+		hws->funcs.program_pipe(dc, pipe_ctx, context);
 
 	if (pipe_ctx->bottom_pipe != NULL && pipe_ctx->bottom_pipe != pipe_ctx)
 		dcn10_program_all_pipe_in_tree(dc, pipe_ctx->bottom_pipe, context);
@@ -2478,6 +2493,7 @@ void dcn10_apply_ctx_for_surface(
 		int num_planes,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	int i;
 	struct timing_generator *tg;
 	uint32_t underflow_check_delay_us;
@@ -2497,8 +2513,8 @@ void dcn10_apply_ctx_for_surface(
 
 	underflow_check_delay_us = dc->debug.underflow_assert_delay_us;
 
-	if (underflow_check_delay_us != 0xFFFFFFFF && dc->hwss.did_underflow_occur)
-		ASSERT(dc->hwss.did_underflow_occur(dc, top_pipe_to_program));
+	if (underflow_check_delay_us != 0xFFFFFFFF && hws->funcs.did_underflow_occur)
+		ASSERT(hws->funcs.did_underflow_occur(dc, top_pipe_to_program));
 
 	if (interdependent_update)
 		dcn10_lock_all_pipes(dc, context, true);
@@ -2508,12 +2524,12 @@ void dcn10_apply_ctx_for_surface(
 	if (underflow_check_delay_us != 0xFFFFFFFF)
 		udelay(underflow_check_delay_us);
 
-	if (underflow_check_delay_us != 0xFFFFFFFF && dc->hwss.did_underflow_occur)
-		ASSERT(dc->hwss.did_underflow_occur(dc, top_pipe_to_program));
+	if (underflow_check_delay_us != 0xFFFFFFFF && hws->funcs.did_underflow_occur)
+		ASSERT(hws->funcs.did_underflow_occur(dc, top_pipe_to_program));
 
 	if (num_planes == 0) {
 		/* OTG blank before remove all front end */
-		dc->hwss.blank_pixel_data(dc, top_pipe_to_program, true);
+		hws->funcs.blank_pixel_data(dc, top_pipe_to_program, true);
 	}
 
 	/* Disconnect unused mpcc */
@@ -2539,7 +2555,7 @@ void dcn10_apply_ctx_for_surface(
 		    old_pipe_ctx->plane_state &&
 		    old_pipe_ctx->stream_res.tg == tg) {
 
-			dc->hwss.plane_atomic_disconnect(dc, old_pipe_ctx);
+			hws->funcs.plane_atomic_disconnect(dc, old_pipe_ctx);
 			removed_pipe[i] = true;
 
 			DC_LOG_DC("Reset mpcc for pipe %d\n",
@@ -2551,8 +2567,8 @@ void dcn10_apply_ctx_for_surface(
 		dcn10_program_all_pipe_in_tree(dc, top_pipe_to_program, context);
 
 	/* Program secondary blending tree and writeback pipes */
-	if ((stream->num_wb_info > 0) && (dc->hwss.program_all_writeback_pipes_in_tree))
-		dc->hwss.program_all_writeback_pipes_in_tree(dc, stream, context);
+	if ((stream->num_wb_info > 0) && (hws->funcs.program_all_writeback_pipes_in_tree))
+		hws->funcs.program_all_writeback_pipes_in_tree(dc, stream, context);
 	if (interdependent_update)
 		for (i = 0; i < dc->res_pool->pipe_count; i++) {
 			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
@@ -2609,10 +2625,11 @@ void dcn10_prepare_bandwidth(
 		struct dc *dc,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubbub *hubbub = dc->res_pool->hubbub;
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 
 	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		if (context->stream_count == 0)
@@ -2634,17 +2651,18 @@ void dcn10_prepare_bandwidth(
 		dcn_bw_notify_pplib_of_wm_ranges(dc);
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 }
 
 void dcn10_optimize_bandwidth(
 		struct dc *dc,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubbub *hubbub = dc->res_pool->hubbub;
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 
 	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		if (context->stream_count == 0)
@@ -2666,7 +2684,7 @@ void dcn10_optimize_bandwidth(
 		dcn_bw_notify_pplib_of_wm_ranges(dc);
 
 	if (dc->debug.sanity_checks)
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 }
 
 void dcn10_set_drr(struct pipe_ctx **pipe_ctx,
@@ -2808,10 +2826,11 @@ void dcn10_wait_for_mpcc_disconnect(
 		struct resource_pool *res_pool,
 		struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	int mpcc_inst;
 
 	if (dc->debug.sanity_checks) {
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 	}
 
 	if (!pipe_ctx->stream_res.opp)
@@ -2828,7 +2847,7 @@ void dcn10_wait_for_mpcc_disconnect(
 	}
 
 	if (dc->debug.sanity_checks) {
-		dc->hwss.verify_allow_pstate_change_high(dc);
+		hws->funcs.verify_allow_pstate_change_high(dc);
 	}
 
 }
@@ -3127,6 +3146,7 @@ void dcn10_unblank_stream(struct pipe_ctx *pipe_ctx,
 	struct encoder_unblank_param params = { { 0 } };
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->link;
+	struct dce_hwseq *hws = link->dc->hwseq;
 
 	/* only 3 items below are used by unblank */
 	params.timing = pipe_ctx->stream->timing;
@@ -3140,7 +3160,7 @@ void dcn10_unblank_stream(struct pipe_ctx *pipe_ctx,
 	}
 
 	if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
-		link->dc->hwss.edp_backlight_control(link, true);
+		hws->funcs.edp_backlight_control(link, true);
 	}
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
index 5aad3922be6c..55b8f3b2fc4e 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
@@ -27,6 +27,7 @@
 #define __DC_HWSS_DCN10_H__
 
 #include "core_types.h"
+#include "hw_sequencer_private.h"
 
 struct dc;
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
index 38923f3120ee..e7e5352ec424 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
@@ -23,25 +23,19 @@
  *
  */
 
+#include "hw_sequencer_private.h"
 #include "dce110/dce110_hw_sequencer.h"
 #include "dcn10_hw_sequencer.h"
 
 static const struct hw_sequencer_funcs dcn10_funcs = {
 	.program_gamut_remap = dcn10_program_gamut_remap,
 	.init_hw = dcn10_init_hw,
-	.init_pipes = dcn10_init_pipes,
 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
 	.apply_ctx_for_surface = dcn10_apply_ctx_for_surface,
 	.update_plane_addr = dcn10_update_plane_addr,
-	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
-	.program_pipe = dcn10_program_pipe,
 	.update_dchub = dcn10_update_dchub,
-	.update_mpcc = dcn10_update_mpcc,
 	.update_pending_status = dcn10_update_pending_status,
-	.set_input_transfer_func = dcn10_set_input_transfer_func,
-	.set_output_transfer_func = dcn10_set_output_transfer_func,
 	.program_output_csc = dcn10_program_output_csc,
-	.power_down = dce110_power_down,
 	.enable_accelerated_mode = dce110_enable_accelerated_mode,
 	.enable_timing_synchronization = dcn10_enable_timing_synchronization,
 	.enable_per_frame_crtc_position_reset = dcn10_enable_per_frame_crtc_position_reset,
@@ -53,14 +47,10 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
 	.blank_stream = dce110_blank_stream,
 	.enable_audio_stream = dce110_enable_audio_stream,
 	.disable_audio_stream = dce110_disable_audio_stream,
-	.enable_display_power_gating = dcn10_dummy_display_power_gating,
 	.disable_plane = dcn10_disable_plane,
-	.blank_pixel_data = dcn10_blank_pixel_data,
 	.pipe_control_lock = dcn10_pipe_control_lock,
 	.prepare_bandwidth = dcn10_prepare_bandwidth,
 	.optimize_bandwidth = dcn10_optimize_bandwidth,
-	.reset_hw_ctx_wrap = dcn10_reset_hw_ctx_wrap,
-	.enable_stream_timing = dcn10_enable_stream_timing,
 	.set_drr = dcn10_set_drr,
 	.get_position = dcn10_get_position,
 	.set_static_screen_control = dcn10_set_static_screen_control,
@@ -70,18 +60,34 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
 	.get_hw_state = dcn10_get_hw_state,
 	.clear_status_bits = dcn10_clear_status_bits,
 	.wait_for_mpcc_disconnect = dcn10_wait_for_mpcc_disconnect,
-	.edp_backlight_control = dce110_edp_backlight_control,
 	.edp_power_control = dce110_edp_power_control,
 	.edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready,
 	.set_cursor_position = dcn10_set_cursor_position,
 	.set_cursor_attribute = dcn10_set_cursor_attribute,
 	.set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level,
-	.disable_stream_gating = NULL,
-	.enable_stream_gating = NULL,
 	.setup_periodic_interrupt = dcn10_setup_periodic_interrupt,
-	.setup_vupdate_interrupt = dcn10_setup_vupdate_interrupt,
 	.set_clock = dcn10_set_clock,
 	.get_clock = dcn10_get_clock,
+	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+};
+
+static const struct hwseq_private_funcs dcn10_private_funcs = {
+	.init_pipes = dcn10_init_pipes,
+	.update_plane_addr = dcn10_update_plane_addr,
+	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
+	.program_pipe = dcn10_program_pipe,
+	.update_mpcc = dcn10_update_mpcc,
+	.set_input_transfer_func = dcn10_set_input_transfer_func,
+	.set_output_transfer_func = dcn10_set_output_transfer_func,
+	.power_down = dce110_power_down,
+	.enable_display_power_gating = dcn10_dummy_display_power_gating,
+	.blank_pixel_data = dcn10_blank_pixel_data,
+	.reset_hw_ctx_wrap = dcn10_reset_hw_ctx_wrap,
+	.enable_stream_timing = dcn10_enable_stream_timing,
+	.edp_backlight_control = dce110_edp_backlight_control,
+	.disable_stream_gating = NULL,
+	.enable_stream_gating = NULL,
+	.setup_vupdate_interrupt = dcn10_setup_vupdate_interrupt,
 	.did_underflow_occur = dcn10_did_underflow_occur,
 	.init_blank = NULL,
 	.disable_vga = dcn10_disable_vga,
@@ -96,10 +102,10 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
 	.get_hdr_visual_confirm_color = dcn10_get_hdr_visual_confirm_color,
 	.set_hdr_multiplier = dcn10_set_hdr_multiplier,
 	.verify_allow_pstate_change_high = dcn10_verify_allow_pstate_change_high,
-	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
 };
 
 void dcn10_hw_sequencer_construct(struct dc *dc)
 {
 	dc->hwss = dcn10_funcs;
+	dc->hwseq->funcs = dcn10_private_funcs;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 03e4aafb237b..8091c7c1e0d0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -272,6 +272,7 @@ void dcn20_init_blank(
 		struct dc *dc,
 		struct timing_generator *tg)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	enum dc_color_space color_space;
 	struct tg_color black_color = {0};
 	struct output_pixel_processor *opp = NULL;
@@ -319,7 +320,7 @@ void dcn20_init_blank(
 				otg_active_height);
 	}
 
-	dc->hwss.wait_for_blank_complete(opp);
+	hws->funcs.wait_for_blank_complete(opp);
 }
 
 void dcn20_dsc_pg_control(
@@ -552,6 +553,7 @@ void dcn20_hubp_pg_control(
  */
 void dcn20_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
 
@@ -572,7 +574,7 @@ void dcn20_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
 	hubp->power_gated = true;
 	dc->optimized_required = false; /* We're powering off, no need to optimize */
 
-	dc->hwss.plane_atomic_power_down(dc,
+	hws->funcs.plane_atomic_power_down(dc,
 			pipe_ctx->plane_res.dpp,
 			pipe_ctx->plane_res.hubp);
 
@@ -603,6 +605,7 @@ enum dc_status dcn20_enable_stream_timing(
 		struct dc_state *context,
 		struct dc *dc)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct drr_params params = {0};
 	unsigned int event_triggers = 0;
@@ -662,7 +665,7 @@ enum dc_status dcn20_enable_stream_timing(
 			pipe_ctx->stream_res.opp,
 			true);
 
-	dc->hwss.blank_pixel_data(dc, pipe_ctx, true);
+	hws->funcs.blank_pixel_data(dc, pipe_ctx, true);
 
 	/* VTG is  within DCHUB command block. DCFCLK is always on */
 	if (false == pipe_ctx->stream_res.tg->funcs->enable_crtc(pipe_ctx->stream_res.tg)) {
@@ -670,7 +673,7 @@ enum dc_status dcn20_enable_stream_timing(
 		return DC_ERROR_UNEXPECTED;
 	}
 
-	dc->hwss.wait_for_blank_complete(pipe_ctx->stream_res.opp);
+	hws->funcs.wait_for_blank_complete(pipe_ctx->stream_res.opp);
 
 	params.vertical_total_min = stream->adjust.v_total_min;
 	params.vertical_total_max = stream->adjust.v_total_max;
@@ -820,6 +823,7 @@ bool dcn20_set_input_transfer_func(struct dc *dc,
 				struct pipe_ctx *pipe_ctx,
 				const struct dc_plane_state *plane_state)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct dpp *dpp_base = pipe_ctx->plane_res.dpp;
 	const struct dc_transfer_func *tf = NULL;
 	bool result = true;
@@ -828,8 +832,8 @@ bool dcn20_set_input_transfer_func(struct dc *dc,
 	if (dpp_base == NULL || plane_state == NULL)
 		return false;
 
-	dc->hwss.set_shaper_3dlut(pipe_ctx, plane_state);
-	dc->hwss.set_blend_lut(pipe_ctx, plane_state);
+	hws->funcs.set_shaper_3dlut(pipe_ctx, plane_state);
+	hws->funcs.set_blend_lut(pipe_ctx, plane_state);
 
 	if (plane_state->in_transfer_func)
 		tf = plane_state->in_transfer_func;
@@ -1273,6 +1277,7 @@ static void dcn20_update_dchubp_dpp(
 	struct pipe_ctx *pipe_ctx,
 	struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	struct dpp *dpp = pipe_ctx->plane_res.dpp;
 	struct dc_plane_state *plane_state = pipe_ctx->plane_state;
@@ -1337,7 +1342,7 @@ static void dcn20_update_dchubp_dpp(
 				old_pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false;
 			}
 		}
-		dc->hwss.update_mpcc(dc, pipe_ctx);
+		hws->funcs.update_mpcc(dc, pipe_ctx);
 	}
 
 	if (pipe_ctx->update_flags.bits.scaler ||
@@ -1412,7 +1417,7 @@ static void dcn20_update_dchubp_dpp(
 	}
 
 	if (pipe_ctx->update_flags.bits.enable || plane_state->update_flags.bits.addr_update)
-		dc->hwss.update_plane_addr(dc, pipe_ctx);
+		hws->funcs.update_plane_addr(dc, pipe_ctx);
 
 	if (pipe_ctx->update_flags.bits.enable)
 		hubp->funcs->set_blank(hubp, false);
@@ -1424,10 +1429,11 @@ static void dcn20_program_pipe(
 		struct pipe_ctx *pipe_ctx,
 		struct dc_state *context)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	/* Only need to unblank on top pipe */
 	if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->stream->update_flags.bits.abm_level)
 			&& !pipe_ctx->top_pipe && !pipe_ctx->prev_odm_pipe)
-		dc->hwss.blank_pixel_data(dc, pipe_ctx, !pipe_ctx->plane_state->visible);
+		hws->funcs.blank_pixel_data(dc, pipe_ctx, !pipe_ctx->plane_state->visible);
 
 	if (pipe_ctx->update_flags.bits.global_sync) {
 		pipe_ctx->stream_res.tg->funcs->program_global_sync(
@@ -1440,12 +1446,12 @@ static void dcn20_program_pipe(
 		pipe_ctx->stream_res.tg->funcs->set_vtg_params(
 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
 
-		if (dc->hwss.setup_vupdate_interrupt)
-			dc->hwss.setup_vupdate_interrupt(dc, pipe_ctx);
+		if (hws->funcs.setup_vupdate_interrupt)
+			hws->funcs.setup_vupdate_interrupt(dc, pipe_ctx);
 	}
 
 	if (pipe_ctx->update_flags.bits.odm)
-		dc->hwss.update_odm(dc, context, pipe_ctx);
+		hws->funcs.update_odm(dc, context, pipe_ctx);
 
 	if (pipe_ctx->update_flags.bits.enable)
 		dcn20_enable_plane(dc, pipe_ctx, context);
@@ -1455,19 +1461,19 @@ static void dcn20_program_pipe(
 
 	if (pipe_ctx->update_flags.bits.enable
 			|| pipe_ctx->plane_state->update_flags.bits.hdr_mult)
-		dc->hwss.set_hdr_multiplier(pipe_ctx);
+		hws->funcs.set_hdr_multiplier(pipe_ctx);
 
 	if (pipe_ctx->update_flags.bits.enable ||
 			pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change ||
 			pipe_ctx->plane_state->update_flags.bits.gamma_change)
-		dc->hwss.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
+		hws->funcs.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
 
 	/* dcn10_translate_regamma_to_hw_format takes 750us to finish
 	 * only do gamma programming for powering on, internal memcmp to avoid
 	 * updating on slave planes
 	 */
 	if (pipe_ctx->update_flags.bits.enable || pipe_ctx->stream->update_flags.bits.out_tf)
-		dc->hwss.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
+		hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
 
 	/* If the pipe has been enabled or has a different opp, we
 	 * should reprogram the fmt. This deals with cases where
@@ -1507,6 +1513,7 @@ void dcn20_program_front_end_for_ctx(
 {
 	const unsigned int TIMEOUT_FOR_PIPE_ENABLE_MS = 100;
 	int i;
+	struct dce_hwseq *hws = dc->hwseq;
 	bool pipe_locked[MAX_PIPES] = {false};
 	DC_LOGGER_INIT(dc->ctx->logger);
 
@@ -1538,13 +1545,13 @@ void dcn20_program_front_end_for_ctx(
 				&& !context->res_ctx.pipe_ctx[i].top_pipe
 				&& !context->res_ctx.pipe_ctx[i].prev_odm_pipe
 				&& context->res_ctx.pipe_ctx[i].stream)
-			dc->hwss.blank_pixel_data(dc, &context->res_ctx.pipe_ctx[i], true);
+			hws->funcs.blank_pixel_data(dc, &context->res_ctx.pipe_ctx[i], true);
 
 	/* Disconnect mpcc */
 	for (i = 0; i < dc->res_pool->pipe_count; i++)
 		if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable
 				|| context->res_ctx.pipe_ctx[i].update_flags.bits.opp_changed) {
-			dc->hwss.plane_atomic_disconnect(dc, &dc->current_state->res_ctx.pipe_ctx[i]);
+			hws->funcs.plane_atomic_disconnect(dc, &dc->current_state->res_ctx.pipe_ctx[i]);
 			DC_LOG_DC("Reset mpcc for pipe %d\n", dc->current_state->res_ctx.pipe_ctx[i].pipe_idx);
 		}
 
@@ -1564,8 +1571,8 @@ void dcn20_program_front_end_for_ctx(
 			pipe = &context->res_ctx.pipe_ctx[i];
 			if (!pipe->prev_odm_pipe && pipe->stream->num_wb_info > 0
 					&& (pipe->update_flags.raw || pipe->plane_state->update_flags.raw || pipe->stream->update_flags.raw)
-					&& dc->hwss.program_all_writeback_pipes_in_tree)
-				dc->hwss.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+					&& hws->funcs.program_all_writeback_pipes_in_tree)
+				hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
 		}
 	}
 
@@ -1650,6 +1657,7 @@ bool dcn20_update_bandwidth(
 		struct dc_state *context)
 {
 	int i;
+	struct dce_hwseq *hws = dc->hwseq;
 
 	/* recalculate DML parameters */
 	if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false))
@@ -1679,10 +1687,10 @@ bool dcn20_update_bandwidth(
 					pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
 
 			if (pipe_ctx->prev_odm_pipe == NULL)
-				dc->hwss.blank_pixel_data(dc, pipe_ctx, blank);
+				hws->funcs.blank_pixel_data(dc, pipe_ctx, blank);
 
-			if (dc->hwss.setup_vupdate_interrupt)
-				dc->hwss.setup_vupdate_interrupt(dc, pipe_ctx);
+			if (hws->funcs.setup_vupdate_interrupt)
+				hws->funcs.setup_vupdate_interrupt(dc, pipe_ctx);
 		}
 
 		pipe_ctx->plane_res.hubp->funcs->hubp_setup(
@@ -1919,6 +1927,7 @@ void dcn20_unblank_stream(struct pipe_ctx *pipe_ctx,
 	struct encoder_unblank_param params = { { 0 } };
 	struct dc_stream_state *stream = pipe_ctx->stream;
 	struct dc_link *link = stream->link;
+	struct dce_hwseq *hws = link->dc->hwseq;
 	struct pipe_ctx *odm_pipe;
 
 	params.opp_cnt = 1;
@@ -1939,7 +1948,7 @@ void dcn20_unblank_stream(struct pipe_ctx *pipe_ctx,
 	}
 
 	if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
-		link->dc->hwss.edp_backlight_control(link, true);
+		hws->funcs.edp_backlight_control(link, true);
 	}
 }
 
@@ -2027,6 +2036,7 @@ void dcn20_reset_hw_ctx_wrap(
 		struct dc_state *context)
 {
 	int i;
+	struct dce_hwseq *hws = dc->hwseq;
 
 	/* Reset Back End*/
 	for (i = dc->res_pool->pipe_count - 1; i >= 0 ; i--) {
@@ -2045,8 +2055,8 @@ void dcn20_reset_hw_ctx_wrap(
 			struct clock_source *old_clk = pipe_ctx_old->clock_source;
 
 			dcn20_reset_back_end_for_pipe(dc, pipe_ctx_old, dc->current_state);
-			if (dc->hwss.enable_stream_gating)
-				dc->hwss.enable_stream_gating(dc, pipe_ctx);
+			if (hws->funcs.enable_stream_gating)
+				hws->funcs.enable_stream_gating(dc, pipe_ctx);
 			if (old_clk)
 				old_clk->funcs->cs_power_down(old_clk);
 		}
@@ -2077,6 +2087,7 @@ void dcn20_get_mpctree_visual_confirm_color(
 
 void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
 {
+	struct dce_hwseq *hws = dc->hwseq;
 	struct hubp *hubp = pipe_ctx->plane_res.hubp;
 	struct mpcc_blnd_cfg blnd_cfg = { {0} };
 	bool per_pixel_alpha = pipe_ctx->plane_state->per_pixel_alpha;
@@ -2087,10 +2098,10 @@ void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
 
 	// input to MPCC is always RGB, by default leave black_color at 0
 	if (dc->debug.visual_confirm == VISUAL_CONFIRM_HDR) {
-		dc->hwss.get_hdr_visual_confirm_color(
+		hws->funcs.get_hdr_visual_confirm_color(
 				pipe_ctx, &blnd_cfg.black_color);
 	} else if (dc->debug.visual_confirm == VISUAL_CONFIRM_SURFACE) {
-		dc->hwss.get_surface_visual_confirm_color(
+		hws->funcs.get_surface_visual_confirm_color(
 				pipe_ctx, &blnd_cfg.black_color);
 	} else if (dc->debug.visual_confirm == VISUAL_CONFIRM_MPCTREE) {
 		dcn20_get_mpctree_visual_confirm_color(
@@ -2246,13 +2257,13 @@ void dcn20_fpga_init_hw(struct dc *dc)
 		res_pool->dccg->funcs->dccg_init(res_pool->dccg);
 
 	//Enable ability to power gate / don't force power on permanently
-	dc->hwss.enable_power_gating_plane(hws, true);
+	hws->funcs.enable_power_gating_plane(hws, true);
 
 	// Specific to FPGA dccg and registers
 	REG_WRITE(RBBMIF_TIMEOUT_DIS, 0xFFFFFFFF);
 	REG_WRITE(RBBMIF_TIMEOUT_DIS_2, 0xFFFFFFFF);
 
-	dc->hwss.dccg_init(hws);
+	hws->funcs.dccg_init(hws);
 
 	REG_UPDATE(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_REFDIV, 2);
 	REG_UPDATE(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_ENABLE, 1);
@@ -2316,7 +2327,7 @@ void dcn20_fpga_init_hw(struct dc *dc)
 		dc->res_pool->opps[i]->mpcc_disconnect_pending[pipe_ctx->plane_res.mpcc_inst] = true;
 		pipe_ctx->stream_res.opp = dc->res_pool->opps[i];
 		/*to do*/
-		dc->hwss.plane_atomic_disconnect(dc, pipe_ctx);
+		hws->funcs.plane_atomic_disconnect(dc, pipe_ctx);
 	}
 
 	/* initialize DWB pointer to MCIF_WB */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
index 28aaceed6d8b..eecd7a26ec4c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
@@ -26,6 +26,8 @@
 #ifndef __DC_HWSS_DCN20_H__
 #define __DC_HWSS_DCN20_H__
 
+#include "hw_sequencer_private.h"
+
 bool dcn20_set_blend_lut(
 	struct pipe_ctx *pipe_ctx, const struct dc_plane_state *plane_state);
 bool dcn20_set_shaper_3dlut(
@@ -111,6 +113,7 @@ void dcn20_disable_writeback(
 void dcn20_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *pipe_ctx);
 bool dcn20_dmdata_status_done(struct pipe_ctx *pipe_ctx);
 void dcn20_program_dmdata_engine(struct pipe_ctx *pipe_ctx);
+void dcn20_set_dmdata_attributes(struct pipe_ctx *pipe_ctx);
 void dcn20_init_vm_ctx(
 		struct dce_hwseq *hws,
 		struct dc *dc,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
index 51b6c25aa3c5..d51e02fdab4d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
@@ -30,19 +30,13 @@
 static const struct hw_sequencer_funcs dcn20_funcs = {
 	.program_gamut_remap = dcn10_program_gamut_remap,
 	.init_hw = dcn10_init_hw,
-	.init_pipes = dcn10_init_pipes,
 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
 	.apply_ctx_for_surface = NULL,
 	.program_front_end_for_ctx = dcn20_program_front_end_for_ctx,
 	.update_plane_addr = dcn20_update_plane_addr,
-	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
 	.update_dchub = dcn10_update_dchub,
-	.update_mpcc = dcn20_update_mpcc,
 	.update_pending_status = dcn10_update_pending_status,
-	.set_input_transfer_func = dcn20_set_input_transfer_func,
-	.set_output_transfer_func = dcn20_set_output_transfer_func,
 	.program_output_csc = dcn20_program_output_csc,
-	.power_down = dce110_power_down,
 	.enable_accelerated_mode = dce110_enable_accelerated_mode,
 	.enable_timing_synchronization = dcn10_enable_timing_synchronization,
 	.enable_per_frame_crtc_position_reset = dcn10_enable_per_frame_crtc_position_reset,
@@ -54,16 +48,12 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
 	.blank_stream = dce110_blank_stream,
 	.enable_audio_stream = dce110_enable_audio_stream,
 	.disable_audio_stream = dce110_disable_audio_stream,
-	.enable_display_power_gating = dcn10_dummy_display_power_gating,
 	.disable_plane = dcn20_disable_plane,
-	.blank_pixel_data = dcn20_blank_pixel_data,
 	.pipe_control_lock = dcn20_pipe_control_lock,
 	.pipe_control_lock_global = dcn20_pipe_control_lock_global,
 	.prepare_bandwidth = dcn20_prepare_bandwidth,
 	.optimize_bandwidth = dcn20_optimize_bandwidth,
 	.update_bandwidth = dcn20_update_bandwidth,
-	.reset_hw_ctx_wrap = dcn20_reset_hw_ctx_wrap,
-	.enable_stream_timing = dcn20_enable_stream_timing,
 	.set_drr = dcn10_set_drr,
 	.get_position = dcn10_get_position,
 	.set_static_screen_control = dcn10_set_static_screen_control,
@@ -73,18 +63,42 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
 	.get_hw_state = dcn10_get_hw_state,
 	.clear_status_bits = dcn10_clear_status_bits,
 	.wait_for_mpcc_disconnect = dcn10_wait_for_mpcc_disconnect,
-	.edp_backlight_control = dce110_edp_backlight_control,
 	.edp_power_control = dce110_edp_power_control,
 	.edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready,
 	.set_cursor_position = dcn10_set_cursor_position,
 	.set_cursor_attribute = dcn10_set_cursor_attribute,
 	.set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level,
-	.disable_stream_gating = dcn20_disable_stream_gating,
-	.enable_stream_gating = dcn20_enable_stream_gating,
 	.setup_periodic_interrupt = dcn10_setup_periodic_interrupt,
-	.setup_vupdate_interrupt = dcn20_setup_vupdate_interrupt,
 	.set_clock = dcn10_set_clock,
 	.get_clock = dcn10_get_clock,
+	.program_triplebuffer = dcn20_program_triple_buffer,
+	.enable_writeback = dcn20_enable_writeback,
+	.disable_writeback = dcn20_disable_writeback,
+	.dmdata_status_done = dcn20_dmdata_status_done,
+	.program_dmdata_engine = dcn20_program_dmdata_engine,
+	.set_dmdata_attributes = dcn20_set_dmdata_attributes,
+	.init_sys_ctx = dcn20_init_sys_ctx,
+	.init_vm_ctx = dcn20_init_vm_ctx,
+	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
+	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+};
+
+static const struct hwseq_private_funcs dcn20_private_funcs = {
+	.init_pipes = dcn10_init_pipes,
+	.update_plane_addr = dcn20_update_plane_addr,
+	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
+	.update_mpcc = dcn20_update_mpcc,
+	.set_input_transfer_func = dcn20_set_input_transfer_func,
+	.set_output_transfer_func = dcn20_set_output_transfer_func,
+	.power_down = dce110_power_down,
+	.enable_display_power_gating = dcn10_dummy_display_power_gating,
+	.blank_pixel_data = dcn20_blank_pixel_data,
+	.reset_hw_ctx_wrap = dcn20_reset_hw_ctx_wrap,
+	.enable_stream_timing = dcn20_enable_stream_timing,
+	.edp_backlight_control = dce110_edp_backlight_control,
+	.disable_stream_gating = dcn20_disable_stream_gating,
+	.enable_stream_gating = dcn20_enable_stream_gating,
+	.setup_vupdate_interrupt = dcn20_setup_vupdate_interrupt,
 	.did_underflow_occur = dcn10_did_underflow_occur,
 	.init_blank = dcn20_init_blank,
 	.disable_vga = dcn20_disable_vga,
@@ -95,15 +109,7 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
 	.dpp_pg_control = dcn20_dpp_pg_control,
 	.hubp_pg_control = dcn20_hubp_pg_control,
 	.dsc_pg_control = NULL,
-	.program_triplebuffer = dcn20_program_triple_buffer,
-	.enable_writeback = dcn20_enable_writeback,
-	.disable_writeback = dcn20_disable_writeback,
 	.update_odm = dcn20_update_odm,
-	.dmdata_status_done = dcn20_dmdata_status_done,
-	.program_dmdata_engine = dcn20_program_dmdata_engine,
-	.init_sys_ctx = dcn20_init_sys_ctx,
-	.init_vm_ctx = dcn20_init_vm_ctx,
-	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
 	.dsc_pg_control = dcn20_dsc_pg_control,
 	.get_surface_visual_confirm_color = dcn10_get_surface_visual_confirm_color,
 	.get_hdr_visual_confirm_color = dcn10_get_hdr_visual_confirm_color,
@@ -113,15 +119,15 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
 	.dccg_init = dcn20_dccg_init,
 	.set_blend_lut = dcn20_set_blend_lut,
 	.set_shaper_3dlut = dcn20_set_shaper_3dlut,
-	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
 };
 
 void dcn20_hw_sequencer_construct(struct dc *dc)
 {
 	dc->hwss = dcn20_funcs;
+	dc->hwseq->funcs = dcn20_private_funcs;
 
 	if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		dc->hwss.init_hw = dcn20_fpga_init_hw;
-		dc->hwss.init_pipes = NULL;
+		dc->hwseq->funcs.init_pipes = NULL;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c
index 005894dcabc9..081ad8e43d58 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.c
@@ -28,6 +28,7 @@
 #include "core_types.h"
 #include "resource.h"
 #include "dce/dce_hwseq.h"
+#include "dcn21_hwseq.h"
 #include "vmid.h"
 #include "reg_helper.h"
 #include "hw/clk_mgr.h"
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h
index 2f7b8a220eb9..182736096123 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hwseq.h
@@ -26,6 +26,8 @@
 #ifndef __DC_HWSS_DCN21_H__
 #define __DC_HWSS_DCN21_H__
 
+#include "hw_sequencer_private.h"
+
 struct dc;
 
 int dcn21_init_sys_ctx(struct dce_hwseq *hws,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
index 1d8b67b4e252..4861aa5c59ae 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
@@ -31,19 +31,13 @@
 static const struct hw_sequencer_funcs dcn21_funcs = {
 	.program_gamut_remap = dcn10_program_gamut_remap,
 	.init_hw = dcn10_init_hw,
-	.init_pipes = dcn10_init_pipes,
 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
 	.apply_ctx_for_surface = NULL,
 	.program_front_end_for_ctx = dcn20_program_front_end_for_ctx,
 	.update_plane_addr = dcn20_update_plane_addr,
-	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
 	.update_dchub = dcn10_update_dchub,
-	.update_mpcc = dcn20_update_mpcc,
 	.update_pending_status = dcn10_update_pending_status,
-	.set_input_transfer_func = dcn20_set_input_transfer_func,
-	.set_output_transfer_func = dcn20_set_output_transfer_func,
 	.program_output_csc = dcn20_program_output_csc,
-	.power_down = dce110_power_down,
 	.enable_accelerated_mode = dce110_enable_accelerated_mode,
 	.enable_timing_synchronization = dcn10_enable_timing_synchronization,
 	.enable_per_frame_crtc_position_reset = dcn10_enable_per_frame_crtc_position_reset,
@@ -55,16 +49,12 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
 	.blank_stream = dce110_blank_stream,
 	.enable_audio_stream = dce110_enable_audio_stream,
 	.disable_audio_stream = dce110_disable_audio_stream,
-	.enable_display_power_gating = dcn10_dummy_display_power_gating,
 	.disable_plane = dcn20_disable_plane,
-	.blank_pixel_data = dcn20_blank_pixel_data,
 	.pipe_control_lock = dcn20_pipe_control_lock,
 	.pipe_control_lock_global = dcn20_pipe_control_lock_global,
 	.prepare_bandwidth = dcn20_prepare_bandwidth,
 	.optimize_bandwidth = dcn20_optimize_bandwidth,
 	.update_bandwidth = dcn20_update_bandwidth,
-	.reset_hw_ctx_wrap = dcn20_reset_hw_ctx_wrap,
-	.enable_stream_timing = dcn20_enable_stream_timing,
 	.set_drr = dcn10_set_drr,
 	.get_position = dcn10_get_position,
 	.set_static_screen_control = dcn10_set_static_screen_control,
@@ -74,18 +64,44 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
 	.get_hw_state = dcn10_get_hw_state,
 	.clear_status_bits = dcn10_clear_status_bits,
 	.wait_for_mpcc_disconnect = dcn10_wait_for_mpcc_disconnect,
-	.edp_backlight_control = dce110_edp_backlight_control,
 	.edp_power_control = dce110_edp_power_control,
 	.edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready,
 	.set_cursor_position = dcn10_set_cursor_position,
 	.set_cursor_attribute = dcn10_set_cursor_attribute,
 	.set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level,
-	.disable_stream_gating = dcn20_disable_stream_gating,
-	.enable_stream_gating = dcn20_enable_stream_gating,
 	.setup_periodic_interrupt = dcn10_setup_periodic_interrupt,
-	.setup_vupdate_interrupt = dcn20_setup_vupdate_interrupt,
 	.set_clock = dcn10_set_clock,
 	.get_clock = dcn10_get_clock,
+	.program_triplebuffer = dcn20_program_triple_buffer,
+	.enable_writeback = dcn20_enable_writeback,
+	.disable_writeback = dcn20_disable_writeback,
+	.dmdata_status_done = dcn20_dmdata_status_done,
+	.program_dmdata_engine = dcn20_program_dmdata_engine,
+	.set_dmdata_attributes = dcn20_set_dmdata_attributes,
+	.init_sys_ctx = dcn21_init_sys_ctx,
+	.init_vm_ctx = dcn20_init_vm_ctx,
+	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
+	.optimize_pwr_state = dcn21_optimize_pwr_state,
+	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+};
+
+static const struct hwseq_private_funcs dcn21_private_funcs = {
+	.init_pipes = dcn10_init_pipes,
+	.update_plane_addr = dcn20_update_plane_addr,
+	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
+	.update_mpcc = dcn20_update_mpcc,
+	.set_input_transfer_func = dcn20_set_input_transfer_func,
+	.set_output_transfer_func = dcn20_set_output_transfer_func,
+	.power_down = dce110_power_down,
+	.enable_display_power_gating = dcn10_dummy_display_power_gating,
+	.blank_pixel_data = dcn20_blank_pixel_data,
+	.reset_hw_ctx_wrap = dcn20_reset_hw_ctx_wrap,
+	.enable_stream_timing = dcn20_enable_stream_timing,
+	.edp_backlight_control = dce110_edp_backlight_control,
+	.disable_stream_gating = dcn20_disable_stream_gating,
+	.enable_stream_gating = dcn20_enable_stream_gating,
+	.setup_vupdate_interrupt = dcn20_setup_vupdate_interrupt,
 	.did_underflow_occur = dcn10_did_underflow_occur,
 	.init_blank = dcn20_init_blank,
 	.disable_vga = dcn20_disable_vga,
@@ -96,36 +117,26 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
 	.dpp_pg_control = dcn20_dpp_pg_control,
 	.hubp_pg_control = dcn20_hubp_pg_control,
 	.dsc_pg_control = NULL,
-	.program_triplebuffer = dcn20_program_triple_buffer,
-	.enable_writeback = dcn20_enable_writeback,
-	.disable_writeback = dcn20_disable_writeback,
 	.update_odm = dcn20_update_odm,
-	.dmdata_status_done = dcn20_dmdata_status_done,
-	.program_dmdata_engine = dcn20_program_dmdata_engine,
-	.init_sys_ctx = dcn21_init_sys_ctx,
-	.init_vm_ctx = dcn20_init_vm_ctx,
-	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
 	.dsc_pg_control = dcn20_dsc_pg_control,
 	.get_surface_visual_confirm_color = dcn10_get_surface_visual_confirm_color,
 	.get_hdr_visual_confirm_color = dcn10_get_hdr_visual_confirm_color,
 	.set_hdr_multiplier = dcn10_set_hdr_multiplier,
 	.verify_allow_pstate_change_high = dcn10_verify_allow_pstate_change_high,
 	.s0i3_golden_init_wa = dcn21_s0i3_golden_init_wa,
-	.optimize_pwr_state = dcn21_optimize_pwr_state,
-	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
 	.wait_for_blank_complete = dcn20_wait_for_blank_complete,
 	.dccg_init = dcn20_dccg_init,
 	.set_blend_lut = dcn20_set_blend_lut,
 	.set_shaper_3dlut = dcn20_set_shaper_3dlut,
-	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
 };
 
 void dcn21_hw_sequencer_construct(struct dc *dc)
 {
 	dc->hwss = dcn21_funcs;
+	dc->hwseq->funcs = dcn21_private_funcs;
 
 	if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
 		dc->hwss.init_hw = dcn20_fpga_init_hw;
-		dc->hwss.init_pipes = NULL;
+		dc->hwseq->funcs.init_pipes = NULL;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
index 5941577d78a5..e9c6021a5372 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
@@ -32,38 +32,11 @@
 #include "inc/hw/link_encoder.h"
 #include "core_status.h"
 
-enum pipe_gating_control {
-	PIPE_GATING_CONTROL_DISABLE = 0,
-	PIPE_GATING_CONTROL_ENABLE,
-	PIPE_GATING_CONTROL_INIT
-};
-
 enum vline_select {
 	VLINE0,
 	VLINE1
 };
 
-struct dce_hwseq_wa {
-	bool blnd_crtc_trigger;
-	bool DEGVIDCN10_253;
-	bool false_optc_underflow;
-	bool DEGVIDCN10_254;
-	bool DEGVIDCN21;
-};
-
-struct hwseq_wa_state {
-	bool DEGVIDCN10_253_applied;
-};
-
-struct dce_hwseq {
-	struct dc_context *ctx;
-	const struct dce_hwseq_registers *regs;
-	const struct dce_hwseq_shift *shifts;
-	const struct dce_hwseq_mask *masks;
-	struct dce_hwseq_wa wa;
-	struct hwseq_wa_state wa_state;
-};
-
 struct pipe_ctx;
 struct dc_state;
 struct dc_stream_status;
@@ -71,255 +44,110 @@ struct dc_writeback_info;
 struct dchub_init_data;
 struct dc_static_screen_events;
 struct resource_pool;
-struct resource_context;
-struct stream_resource;
 struct dc_phy_addr_space_config;
 struct dc_virtual_addr_space_config;
-struct hubp;
 struct dpp;
+struct dce_hwseq;
 
 struct hw_sequencer_funcs {
+	/* Embedded Display Related */
+	void (*edp_power_control)(struct dc_link *link, bool enable);
+	void (*edp_wait_for_hpd_ready)(struct dc_link *link, bool power_up);
 
-	void (*disable_stream_gating)(struct dc *dc, struct pipe_ctx *pipe_ctx);
-
-	void (*enable_stream_gating)(struct dc *dc, struct pipe_ctx *pipe_ctx);
-
+	/* Pipe Programming Related */
 	void (*init_hw)(struct dc *dc);
-
-	void (*init_pipes)(struct dc *dc, struct dc_state *context);
-
-	enum dc_status (*apply_ctx_to_hw)(
-			struct dc *dc, struct dc_state *context);
-
-	void (*reset_hw_ctx_wrap)(
-			struct dc *dc, struct dc_state *context);
-
-	void (*apply_ctx_for_surface)(
-			struct dc *dc,
+	void (*enable_accelerated_mode)(struct dc *dc,
+			struct dc_state *context);
+	enum dc_status (*apply_ctx_to_hw)(struct dc *dc,
+			struct dc_state *context);
+	void (*disable_plane)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+	void (*apply_ctx_for_surface)(struct dc *dc,
 			const struct dc_stream_state *stream,
-			int num_planes,
+			int num_planes, struct dc_state *context);
+	void (*program_front_end_for_ctx)(struct dc *dc,
 			struct dc_state *context);
-
-	void (*program_gamut_remap)(
+	void (*update_plane_addr)(const struct dc *dc,
 			struct pipe_ctx *pipe_ctx);
-
-	void (*program_output_csc)(struct dc *dc,
-			struct pipe_ctx *pipe_ctx,
-			enum dc_color_space colorspace,
-			uint16_t *matrix,
-			int opp_id);
-
-	void (*program_front_end_for_ctx)(
-			struct dc *dc,
-			struct dc_state *context);
-	void (*program_triplebuffer)(
-		const struct dc *dc,
-		struct pipe_ctx *pipe_ctx,
-		bool enableTripleBuffer);
-	void (*set_flip_control_gsl)(
-		struct pipe_ctx *pipe_ctx,
-		bool flip_immediate);
-
-	void (*update_plane_addr)(
-		const struct dc *dc,
-		struct pipe_ctx *pipe_ctx);
-
-	void (*plane_atomic_disconnect)(
-		struct dc *dc,
-		struct pipe_ctx *pipe_ctx);
-
-	void (*update_dchub)(
-		struct dce_hwseq *hws,
-		struct dchub_init_data *dh_data);
-
-	int (*init_sys_ctx)(
-			struct dce_hwseq *hws,
-			struct dc *dc,
-			struct dc_phy_addr_space_config *pa_config);
-	void (*init_vm_ctx)(
-			struct dce_hwseq *hws,
-			struct dc *dc,
-			struct dc_virtual_addr_space_config *va_config,
-			int vmid);
-	void (*update_mpcc)(
-		struct dc *dc,
-		struct pipe_ctx *pipe_ctx);
-
-	void (*update_pending_status)(
+	void (*update_dchub)(struct dce_hwseq *hws,
+			struct dchub_init_data *dh_data);
+	void (*wait_for_mpcc_disconnect)(struct dc *dc,
+			struct resource_pool *res_pool,
 			struct pipe_ctx *pipe_ctx);
-
-	bool (*set_input_transfer_func)(struct dc *dc,
-				struct pipe_ctx *pipe_ctx,
-				const struct dc_plane_state *plane_state);
-
-	bool (*set_output_transfer_func)(struct dc *dc,
-				struct pipe_ctx *pipe_ctx,
-				const struct dc_stream_state *stream);
-
-	void (*power_down)(struct dc *dc);
-
-	void (*enable_accelerated_mode)(struct dc *dc, struct dc_state *context);
-
-	void (*enable_timing_synchronization)(
-			struct dc *dc,
-			int group_index,
-			int group_size,
-			struct pipe_ctx *grouped_pipes[]);
-
-	void (*enable_per_frame_crtc_position_reset)(
-			struct dc *dc,
-			int group_size,
+	void (*program_triplebuffer)(const struct dc *dc,
+		struct pipe_ctx *pipe_ctx, bool enableTripleBuffer);
+	void (*update_pending_status)(struct pipe_ctx *pipe_ctx);
+
+	/* Pipe Lock Related */
+	void (*pipe_control_lock_global)(struct dc *dc,
+			struct pipe_ctx *pipe, bool lock);
+	void (*pipe_control_lock)(struct dc *dc,
+			struct pipe_ctx *pipe, bool lock);
+	void (*set_flip_control_gsl)(struct pipe_ctx *pipe_ctx,
+			bool flip_immediate);
+
+	/* Timing Related */
+	void (*get_position)(struct pipe_ctx **pipe_ctx, int num_pipes,
+			struct crtc_position *position);
+	int (*get_vupdate_offset_from_vsync)(struct pipe_ctx *pipe_ctx);
+	void (*enable_per_frame_crtc_position_reset)(struct dc *dc,
+			int group_size, struct pipe_ctx *grouped_pipes[]);
+	void (*enable_timing_synchronization)(struct dc *dc,
+			int group_index, int group_size,
 			struct pipe_ctx *grouped_pipes[]);
+	void (*setup_periodic_interrupt)(struct dc *dc,
+			struct pipe_ctx *pipe_ctx,
+			enum vline_select vline);
+	void (*set_drr)(struct pipe_ctx **pipe_ctx, int num_pipes,
+			unsigned int vmin, unsigned int vmax,
+			unsigned int vmid, unsigned int vmid_frame_number);
+	void (*set_static_screen_control)(struct pipe_ctx **pipe_ctx,
+			int num_pipes,
+			const struct dc_static_screen_events *events);
 
-	void (*enable_display_pipe_clock_gating)(
-					struct dc_context *ctx,
-					bool clock_gating);
-
-	bool (*enable_display_power_gating)(
-					struct dc *dc,
-					uint8_t controller_id,
-					struct dc_bios *dcb,
-					enum pipe_gating_control power_gating);
-
-	void (*disable_plane)(struct dc *dc, struct pipe_ctx *pipe_ctx);
-
-	void (*update_info_frame)(struct pipe_ctx *pipe_ctx);
-
-	void (*send_immediate_sdp_message)(
-				struct pipe_ctx *pipe_ctx,
-				const uint8_t *custom_sdp_message,
-				unsigned int sdp_message_size);
-
+	/* Stream Related */
 	void (*enable_stream)(struct pipe_ctx *pipe_ctx);
-
 	void (*disable_stream)(struct pipe_ctx *pipe_ctx);
-
+	void (*blank_stream)(struct pipe_ctx *pipe_ctx);
 	void (*unblank_stream)(struct pipe_ctx *pipe_ctx,
 			struct dc_link_settings *link_settings);
 
-	void (*blank_stream)(struct pipe_ctx *pipe_ctx);
-
-	void (*enable_audio_stream)(struct pipe_ctx *pipe_ctx);
-
-	void (*disable_audio_stream)(struct pipe_ctx *pipe_ctx);
-
-	void (*pipe_control_lock)(
-				struct dc *dc,
-				struct pipe_ctx *pipe,
-				bool lock);
+	/* Bandwidth Related */
+	void (*prepare_bandwidth)(struct dc *dc, struct dc_state *context);
+	bool (*update_bandwidth)(struct dc *dc, struct dc_state *context);
+	void (*optimize_bandwidth)(struct dc *dc, struct dc_state *context);
 
-	void (*pipe_control_lock_global)(
-				struct dc *dc,
-				struct pipe_ctx *pipe,
-				bool lock);
-	void (*blank_pixel_data)(
-			struct dc *dc,
+	/* Infopacket Related */
+	void (*set_avmute)(struct pipe_ctx *pipe_ctx, bool enable);
+	void (*send_immediate_sdp_message)(
 			struct pipe_ctx *pipe_ctx,
-			bool blank);
-
-	void (*prepare_bandwidth)(
-			struct dc *dc,
-			struct dc_state *context);
-	void (*optimize_bandwidth)(
-			struct dc *dc,
-			struct dc_state *context);
-
-	void (*exit_optimized_pwr_state)(
-			const struct dc *dc,
-			struct dc_state *context);
-	void (*optimize_pwr_state)(
-			const struct dc *dc,
-			struct dc_state *context);
-
-	bool (*update_bandwidth)(
-			struct dc *dc,
-			struct dc_state *context);
+			const uint8_t *custom_sdp_message,
+			unsigned int sdp_message_size);
+	void (*update_info_frame)(struct pipe_ctx *pipe_ctx);
+	void (*set_dmdata_attributes)(struct pipe_ctx *pipe);
 	void (*program_dmdata_engine)(struct pipe_ctx *pipe_ctx);
 	bool (*dmdata_status_done)(struct pipe_ctx *pipe_ctx);
 
-	void (*set_drr)(struct pipe_ctx **pipe_ctx, int num_pipes,
-			unsigned int vmin, unsigned int vmax,
-			unsigned int vmid, unsigned int vmid_frame_number);
-
-	void (*get_position)(struct pipe_ctx **pipe_ctx, int num_pipes,
-			struct crtc_position *position);
-
-	void (*set_static_screen_control)(struct pipe_ctx **pipe_ctx,
-			int num_pipes, const struct dc_static_screen_events *events);
-
-	enum dc_status (*enable_stream_timing)(
-			struct pipe_ctx *pipe_ctx,
-			struct dc_state *context,
-			struct dc *dc);
-
-	void (*setup_stereo)(
-			struct pipe_ctx *pipe_ctx,
-			struct dc *dc);
-
-	void (*set_avmute)(struct pipe_ctx *pipe_ctx, bool enable);
-
-	void (*log_hw_state)(struct dc *dc,
-		struct dc_log_buffer_ctx *log_ctx);
-	void (*get_hw_state)(struct dc *dc, char *pBuf, unsigned int bufSize, unsigned int mask);
-	void (*clear_status_bits)(struct dc *dc, unsigned int mask);
-
-	void (*wait_for_mpcc_disconnect)(struct dc *dc,
-			struct resource_pool *res_pool,
-			struct pipe_ctx *pipe_ctx);
-
-	void (*edp_power_control)(
-			struct dc_link *link,
-			bool enable);
-	void (*edp_backlight_control)(
-			struct dc_link *link,
-			bool enable);
-	void (*edp_wait_for_hpd_ready)(struct dc_link *link, bool power_up);
-
+	/* Cursor Related */
 	void (*set_cursor_position)(struct pipe_ctx *pipe);
 	void (*set_cursor_attribute)(struct pipe_ctx *pipe);
 	void (*set_cursor_sdr_white_level)(struct pipe_ctx *pipe);
 
-	void (*setup_periodic_interrupt)(struct dc *dc,
-			struct pipe_ctx *pipe_ctx,
-			enum vline_select vline);
-	void (*setup_vupdate_interrupt)(struct dc *dc, struct pipe_ctx *pipe_ctx);
-	bool (*did_underflow_occur)(struct dc *dc, struct pipe_ctx *pipe_ctx);
-
-	void (*init_blank)(struct dc *dc, struct timing_generator *tg);
-	void (*disable_vga)(struct dce_hwseq *hws);
-	void (*bios_golden_init)(struct dc *dc);
-	void (*plane_atomic_power_down)(struct dc *dc,
-			struct dpp *dpp,
-			struct hubp *hubp);
-
-	void (*plane_atomic_disable)(
-			struct dc *dc, struct pipe_ctx *pipe_ctx);
-
-	void (*enable_power_gating_plane)(
-		struct dce_hwseq *hws,
-		bool enable);
-
-	void (*dpp_pg_control)(
-			struct dce_hwseq *hws,
-			unsigned int dpp_inst,
-			bool power_on);
-
-	void (*hubp_pg_control)(
-			struct dce_hwseq *hws,
-			unsigned int hubp_inst,
-			bool power_on);
-
-	void (*dsc_pg_control)(
-			struct dce_hwseq *hws,
-			unsigned int dsc_inst,
-			bool power_on);
-
+	/* Colour Related */
+	void (*program_gamut_remap)(struct pipe_ctx *pipe_ctx);
+	void (*program_output_csc)(struct dc *dc, struct pipe_ctx *pipe_ctx,
+			enum dc_color_space colorspace,
+			uint16_t *matrix, int opp_id);
 
-	void (*update_odm)(struct dc *dc, struct dc_state *context, struct pipe_ctx *pipe_ctx);
-	void (*program_all_writeback_pipes_in_tree)(
+	/* VM Related */
+	int (*init_sys_ctx)(struct dce_hwseq *hws,
 			struct dc *dc,
-			const struct dc_stream_state *stream,
-			struct dc_state *context);
+			struct dc_phy_addr_space_config *pa_config);
+	void (*init_vm_ctx)(struct dce_hwseq *hws,
+			struct dc *dc,
+			struct dc_virtual_addr_space_config *va_config,
+			int vmid);
+
+	/* Writeback Related */
 	void (*update_writeback)(struct dc *dc,
 			const struct dc_stream_status *stream_status,
 			struct dc_writeback_info *wb_info,
@@ -330,46 +158,32 @@ struct hw_sequencer_funcs {
 			struct dc_state *context);
 	void (*disable_writeback)(struct dc *dc,
 			unsigned int dwb_pipe_inst);
-	enum dc_status (*set_clock)(struct dc *dc,
-			enum dc_clock_type clock_type,
-			uint32_t clk_khz,
-			uint32_t stepping);
 
-	void (*get_clock)(struct dc *dc,
+	/* Clock Related */
+	enum dc_status (*set_clock)(struct dc *dc,
 			enum dc_clock_type clock_type,
+			uint32_t clk_khz, uint32_t stepping);
+	void (*get_clock)(struct dc *dc, enum dc_clock_type clock_type,
 			struct dc_clock_config *clock_cfg);
-
-	bool (*s0i3_golden_init_wa)(struct dc *dc);
-
-	void (*get_surface_visual_confirm_color)(
-			const struct pipe_ctx *pipe_ctx,
-			struct tg_color *color);
-
-	void (*get_hdr_visual_confirm_color)(
-			struct pipe_ctx *pipe_ctx,
-			struct tg_color *color);
-
-	void (*set_hdr_multiplier)(struct pipe_ctx *pipe_ctx);
-
-	void (*verify_allow_pstate_change_high)(struct dc *dc);
-
-	void (*program_pipe)(
-			struct dc *dc,
-			struct pipe_ctx *pipe_ctx,
+	void (*optimize_pwr_state)(const struct dc *dc,
+			struct dc_state *context);
+	void (*exit_optimized_pwr_state)(const struct dc *dc,
 			struct dc_state *context);
 
-	bool (*wait_for_blank_complete)(
-			struct output_pixel_processor *opp);
+	/* Audio Related */
+	void (*enable_audio_stream)(struct pipe_ctx *pipe_ctx);
+	void (*disable_audio_stream)(struct pipe_ctx *pipe_ctx);
 
-	void (*dccg_init)(struct dce_hwseq *hws);
+	/* Stereo 3D Related */
+	void (*setup_stereo)(struct pipe_ctx *pipe_ctx, struct dc *dc);
 
-	bool (*set_blend_lut)(
-		struct pipe_ctx *pipe_ctx, const struct dc_plane_state *plane_state);
+	/* HW State Logging Related */
+	void (*log_hw_state)(struct dc *dc, struct dc_log_buffer_ctx *log_ctx);
+	void (*get_hw_state)(struct dc *dc, char *pBuf,
+			unsigned int bufSize, unsigned int mask);
+	void (*clear_status_bits)(struct dc *dc, unsigned int mask);
 
-	bool (*set_shaper_3dlut)(
-		struct pipe_ctx *pipe_ctx, const struct dc_plane_state *plane_state);
 
-	int (*get_vupdate_offset_from_vsync)(struct pipe_ctx *pipe_ctx);
 };
 
 void color_space_to_black_color(
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
new file mode 100644
index 000000000000..8ba06f015975
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
@@ -0,0 +1,156 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_HW_SEQUENCER_PRIVATE_H__
+#define __DC_HW_SEQUENCER_PRIVATE_H__
+
+#include "dc_types.h"
+
+enum pipe_gating_control {
+	PIPE_GATING_CONTROL_DISABLE = 0,
+	PIPE_GATING_CONTROL_ENABLE,
+	PIPE_GATING_CONTROL_INIT
+};
+
+struct dce_hwseq_wa {
+	bool blnd_crtc_trigger;
+	bool DEGVIDCN10_253;
+	bool false_optc_underflow;
+	bool DEGVIDCN10_254;
+	bool DEGVIDCN21;
+};
+
+struct hwseq_wa_state {
+	bool DEGVIDCN10_253_applied;
+};
+
+struct pipe_ctx;
+struct dc_state;
+struct dc_stream_status;
+struct dc_writeback_info;
+struct dchub_init_data;
+struct dc_static_screen_events;
+struct resource_pool;
+struct resource_context;
+struct stream_resource;
+struct dc_phy_addr_space_config;
+struct dc_virtual_addr_space_config;
+struct hubp;
+struct dpp;
+struct dce_hwseq;
+struct timing_generator;
+struct tg_color;
+struct output_pixel_processor;
+
+struct hwseq_private_funcs {
+
+	void (*disable_stream_gating)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+	void (*enable_stream_gating)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+	void (*init_pipes)(struct dc *dc, struct dc_state *context);
+	void (*reset_hw_ctx_wrap)(struct dc *dc, struct dc_state *context);
+	void (*update_plane_addr)(const struct dc *dc,
+			struct pipe_ctx *pipe_ctx);
+	void (*plane_atomic_disconnect)(struct dc *dc,
+			struct pipe_ctx *pipe_ctx);
+	void (*update_mpcc)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+	bool (*set_input_transfer_func)(struct dc *dc,
+				struct pipe_ctx *pipe_ctx,
+				const struct dc_plane_state *plane_state);
+	bool (*set_output_transfer_func)(struct dc *dc,
+				struct pipe_ctx *pipe_ctx,
+				const struct dc_stream_state *stream);
+	void (*power_down)(struct dc *dc);
+	void (*enable_display_pipe_clock_gating)(struct dc_context *ctx,
+					bool clock_gating);
+	bool (*enable_display_power_gating)(struct dc *dc,
+					uint8_t controller_id,
+					struct dc_bios *dcb,
+					enum pipe_gating_control power_gating);
+	void (*blank_pixel_data)(struct dc *dc,
+			struct pipe_ctx *pipe_ctx,
+			bool blank);
+	enum dc_status (*enable_stream_timing)(
+			struct pipe_ctx *pipe_ctx,
+			struct dc_state *context,
+			struct dc *dc);
+	void (*edp_backlight_control)(struct dc_link *link,
+			bool enable);
+	void (*setup_vupdate_interrupt)(struct dc *dc,
+			struct pipe_ctx *pipe_ctx);
+	bool (*did_underflow_occur)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+	void (*init_blank)(struct dc *dc, struct timing_generator *tg);
+	void (*disable_vga)(struct dce_hwseq *hws);
+	void (*bios_golden_init)(struct dc *dc);
+	void (*plane_atomic_power_down)(struct dc *dc,
+			struct dpp *dpp,
+			struct hubp *hubp);
+	void (*plane_atomic_disable)(struct dc *dc, struct pipe_ctx *pipe_ctx);
+	void (*enable_power_gating_plane)(struct dce_hwseq *hws,
+		bool enable);
+	void (*dpp_pg_control)(struct dce_hwseq *hws,
+			unsigned int dpp_inst,
+			bool power_on);
+	void (*hubp_pg_control)(struct dce_hwseq *hws,
+			unsigned int hubp_inst,
+			bool power_on);
+	void (*dsc_pg_control)(struct dce_hwseq *hws,
+			unsigned int dsc_inst,
+			bool power_on);
+	void (*update_odm)(struct dc *dc, struct dc_state *context,
+			struct pipe_ctx *pipe_ctx);
+	void (*program_all_writeback_pipes_in_tree)(struct dc *dc,
+			const struct dc_stream_state *stream,
+			struct dc_state *context);
+	bool (*s0i3_golden_init_wa)(struct dc *dc);
+	void (*get_surface_visual_confirm_color)(
+			const struct pipe_ctx *pipe_ctx,
+			struct tg_color *color);
+	void (*get_hdr_visual_confirm_color)(struct pipe_ctx *pipe_ctx,
+			struct tg_color *color);
+	void (*set_hdr_multiplier)(struct pipe_ctx *pipe_ctx);
+	void (*verify_allow_pstate_change_high)(struct dc *dc);
+	void (*program_pipe)(struct dc *dc,
+			struct pipe_ctx *pipe_ctx,
+			struct dc_state *context);
+	bool (*wait_for_blank_complete)(struct output_pixel_processor *opp);
+	void (*dccg_init)(struct dce_hwseq *hws);
+	bool (*set_blend_lut)(struct pipe_ctx *pipe_ctx,
+			const struct dc_plane_state *plane_state);
+	bool (*set_shaper_3dlut)(struct pipe_ctx *pipe_ctx,
+			const struct dc_plane_state *plane_state);
+};
+
+struct dce_hwseq {
+	struct dc_context *ctx;
+	const struct dce_hwseq_registers *regs;
+	const struct dce_hwseq_shift *shifts;
+	const struct dce_hwseq_mask *masks;
+	struct dce_hwseq_wa wa;
+	struct hwseq_wa_state wa_state;
+	struct hwseq_private_funcs funcs;
+
+};
+
+#endif /* __DC_HW_SEQUENCER_PRIVATE_H__ */
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread
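
As a rough illustration of the public/private split above: cross-component
hooks stay on dc->hwss (struct hw_sequencer_funcs), while DC-internal
sequencer helpers move behind dc->hwseq->funcs (struct hwseq_private_funcs).
The sketch below uses a hypothetical caller; only the table names and member
signatures are taken from the patch.

	/* Hypothetical caller, for illustration only. */
	static void example_blank_plane(struct dc *dc, struct pipe_ctx *pipe_ctx)
	{
		/* Public sequencer interface, visible to DM and other components. */
		if (dc->hwss.disable_plane)
			dc->hwss.disable_plane(dc, pipe_ctx);

		/* Private helpers, intended for use within the hwss code itself. */
		if (dc->hwseq->funcs.blank_pixel_data)
			dc->hwseq->funcs.blank_pixel_data(dc, pipe_ctx, true);
	}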

* [PATCH 04/51] drm/amd/display: Fix Dali clk mgr construct
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (2 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 03/51] drm/amd/display: add separate of private hwss functions sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 05/51] drm/amd/display: Map DSC resources 1-to-1 if numbers of OPPs and DSCs are equal sunpeng.li
                   ` (46 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Eric Yang, Leo Li, harry.wentland, rodrigo.siqueira,
	Michael Strauss, bhawanpreet.lakha

From: Michael Strauss <michael.strauss@amd.com>

[WHY]
Dali is currently being misinterpreted as Renoir and, as a result,
uses the wrong clk mgr constructor.

[HOW]
Add a check to init Dali as Raven2 before it can be misidentified.
Clean up & fix the Raven2 & Dali ASIC checks.
Signed-off-by: Michael Strauss <michael.strauss@amd.com>
Reviewed-by: Eric Yang <eric.yang2@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |  7 +++++++
 drivers/gpu/drm/amd/display/include/dal_asic_id.h | 12 +++++-------
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
index a7c4c1d1fc59..6d60ef822619 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
@@ -134,6 +134,13 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
 
 #if defined(CONFIG_DRM_AMD_DC_DCN)
 	case FAMILY_RV:
+		if (ASICREV_IS_DALI(asic_id.hw_internal_rev)) {
+			/* TEMP: this check has to come before ASICREV_IS_RENOIR */
+			/* which also incorrectly returns true for Dali */
+			rv2_clk_mgr_construct(ctx, clk_mgr, pp_smu);
+			break;
+		}
+
 		if (ASICREV_IS_RENOIR(asic_id.hw_internal_rev)) {
 			rn_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
 			break;
diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
index 6f56208a9471..72b659c63aea 100644
--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
@@ -134,19 +134,17 @@
 #define PICASSO_A0 0x41
 /* DCN1_01 */
 #define RAVEN2_A0 0x81
+#define RAVEN2_15D8_REV_E3 0xE3
+#define RAVEN2_15D8_REV_E4 0xE4
 #define RAVEN1_F0 0xF0
 #define RAVEN_UNKNOWN 0xFF
 
-#define PICASSO_15D8_REV_E3 0xE3
-#define PICASSO_15D8_REV_E4 0xE4
-
 #define ASICREV_IS_RAVEN(eChipRev) ((eChipRev >= RAVEN_A0) && eChipRev < RAVEN_UNKNOWN)
 #define ASICREV_IS_PICASSO(eChipRev) ((eChipRev >= PICASSO_A0) && (eChipRev < RAVEN2_A0))
-#define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < PICASSO_15D8_REV_E3))
-#define ASICREV_IS_DALI(eChipRev) ((eChipRev >= PICASSO_15D8_REV_E3) && (eChipRev < RAVEN1_F0))
-
+#define ASICREV_IS_RAVEN2(eChipRev) ((eChipRev >= RAVEN2_A0) && (eChipRev < RAVEN1_F0))
 #define ASICREV_IS_RV1_F0(eChipRev) ((eChipRev >= RAVEN1_F0) && (eChipRev < RAVEN_UNKNOWN))
-
+#define ASICREV_IS_DALI(eChipRev) ((eChipRev == RAVEN2_15D8_REV_E3) \
+		|| (eChipRev == RAVEN2_15D8_REV_E4))
 
 #define FAMILY_RV 142 /* DCN 1*/
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 05/51] drm/amd/display: Map DSC resources 1-to-1 if numbers of OPPs and DSCs are equal
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (3 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 04/51] drm/amd/display: Fix Dali clk mgr construct sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 06/51] drm/amd/display: fix DalDramClockChangeLatencyNs override sunpeng.li
                   ` (45 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Nikola Cornij,
	Dmytro Laktyushkin, bhawanpreet.lakha

From: Nikola Cornij <nikola.cornij@amd.com>

[why]
On ASICs where the number of DSCs is the same as the number of OPPs,
there's no need for DSC resource management. Mapping 1-to-1 fixes
mode-set- or S3-related issues for such platforms.

[how]
Map DSC resources 1-to-1 to pipes only if the number of OPPs is the
same as the number of DSCs. This still keeps other ASICs working.
A follow-up patch to fix mode-set issues on those ASICs will be
required if testing shows such issues.

Signed-off-by: Nikola Cornij <nikola.cornij@amd.com>
Reviewed-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../gpu/drm/amd/display/dc/dcn20/dcn20_resource.c   | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index da7a92fc0909..2aa6c0be45b4 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1458,13 +1458,20 @@ enum dc_status dcn20_build_mapped_resource(const struct dc *dc, struct dc_state
 
 static void acquire_dsc(struct resource_context *res_ctx,
 			const struct resource_pool *pool,
-			struct display_stream_compressor **dsc)
+			struct display_stream_compressor **dsc,
+			int pipe_idx)
 {
 	int i;
 
 	ASSERT(*dsc == NULL);
 	*dsc = NULL;
 
+	if (pool->res_cap->num_dsc == pool->res_cap->num_opp) {
+		*dsc = pool->dscs[pipe_idx];
+		res_ctx->is_dsc_acquired[pipe_idx] = true;
+		return;
+	}
+
 	/* Find first free DSC */
 	for (i = 0; i < pool->res_cap->num_dsc; i++)
 		if (!res_ctx->is_dsc_acquired[i]) {
@@ -1505,7 +1512,7 @@ static enum dc_status add_dsc_to_stream_resource(struct dc *dc,
 		if (pipe_ctx->stream != dc_stream)
 			continue;
 
-		acquire_dsc(&dc_ctx->res_ctx, pool, &pipe_ctx->stream_res.dsc);
+		acquire_dsc(&dc_ctx->res_ctx, pool, &pipe_ctx->stream_res.dsc, i);
 
 		/* The number of DSCs can be less than the number of pipes */
 		if (!pipe_ctx->stream_res.dsc) {
@@ -1697,7 +1704,7 @@ bool dcn20_split_stream_for_odm(
 	}
 	next_odm_pipe->stream_res.opp = pool->opps[next_odm_pipe->pipe_idx];
 	if (next_odm_pipe->stream->timing.flags.DSC == 1) {
-		acquire_dsc(res_ctx, pool, &next_odm_pipe->stream_res.dsc);
+		acquire_dsc(res_ctx, pool, &next_odm_pipe->stream_res.dsc, next_odm_pipe->pipe_idx);
 		ASSERT(next_odm_pipe->stream_res.dsc);
 		if (next_odm_pipe->stream_res.dsc == NULL)
 			return false;
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 06/51] drm/amd/display: fix DalDramClockChangeLatencyNs override
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (4 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 05/51] drm/amd/display: Map DSC resources 1-to-1 if numbers of OPPs and DSCs are equal sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 07/51] drm/amd/display: Wrong ifdef guards were used around DML validation sunpeng.li
                   ` (44 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Eric Yang, Leo Li, harry.wentland, rodrigo.siqueira,
	Joseph Gravenor, bhawanpreet.lakha

From: Joseph Gravenor <joseph.gravenor@amd.com>

[why]
pstate_latency_us never gets updated from the hard coded value
in rn_clk_mgr.c

[how]
update the wm table's values before we do calculations with them

Signed-off-by: Joseph Gravenor <joseph.gravenor@amd.com>
Reviewed-by: Eric Yang <eric.yang2@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index 818c7a629484..fef11d57d2b7 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -1011,9 +1011,12 @@ static void patch_bounding_box(struct dc *dc, struct _vcs_dpi_soc_bounding_box_s
 	}
 
 	if (dc->bb_overrides.dram_clock_change_latency_ns) {
-		bb->dram_clock_change_latency_us =
+		for (i = 0; i < WM_SET_COUNT; i++) {
+			dc->clk_mgr->bw_params->wm_table.entries[i].pstate_latency_us =
 				dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
+		}
 	}
+
 	kernel_fpu_end();
 }
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 07/51] drm/amd/display: Wrong ifdef guards were used around DML validation
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (5 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 06/51] drm/amd/display: fix DalDramClockChangeLatencyNs override sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 08/51] drm/amd/display: Reset PHY in link re-training sunpeng.li
                   ` (43 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Jaehyun Chung,
	Alvin Lee, bhawanpreet.lakha

From: Jaehyun Chung <jaehyun.chung@amd.com>

[Why]
Wrong guards were causing the debug option not to run.

[How]
Changed the guard to the correct one, matching the rq, ttu, dlg regs struct
members that need to be guarded. Also log a message when validation starts.

Signed-off-by: Jaehyun Chung <jaehyun.chung@amd.com>
Reviewed-by: Alvin Lee <Alvin.Lee2@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc.c          | 2 +-
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c | 1 +
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index e384c143bb58..061e8adf7476 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -2187,7 +2187,7 @@ static void commit_planes_for_stream(struct dc *dc,
 	}
 	if (dc->hwss.program_front_end_for_ctx && update_type != UPDATE_TYPE_FAST) {
 		dc->hwss.program_front_end_for_ctx(dc, context);
-#ifdef CONFIG_DRM_AMD_DC_DCN1_0
+#ifdef CONFIG_DRM_AMD_DC_DCN
 		if (dc->debug.validate_dml_output) {
 			for (i = 0; i < dc->res_pool->pipe_count; i++) {
 				struct pipe_ctx cur_pipe = context->res_ctx.pipe_ctx[i];
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
index 2823be75b071..84d7ac5dd206 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.c
@@ -1257,6 +1257,7 @@ void hubp2_validate_dml_output(struct hubp *hubp,
 	struct _vcs_dpi_display_dlg_regs_st dlg_attr = {0};
 	struct _vcs_dpi_display_ttu_regs_st ttu_attr = {0};
 	DC_LOGGER_INIT(ctx->logger);
+	DC_LOG_DEBUG("DML Validation | Running Validation");
 
 	/* Requestor Regs */
 	REG_GET(HUBPRET_CONTROL,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
index 0be1c917b242..4408aed5087b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
@@ -267,6 +267,7 @@ void hubp21_validate_dml_output(struct hubp *hubp,
 	struct _vcs_dpi_display_dlg_regs_st dlg_attr = {0};
 	struct _vcs_dpi_display_ttu_regs_st ttu_attr = {0};
 	DC_LOGGER_INIT(ctx->logger);
+	DC_LOG_DEBUG("DML Validation | Running Validation");
 
 	/* Requester - Per hubp */
 	REG_GET(HUBPRET_CONTROL,
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 08/51] drm/amd/display: Reset PHY in link re-training
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (6 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 07/51] drm/amd/display: Wrong ifdef guards were used around DML validation sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 09/51] drm/amd/display: Disable link before reenable sunpeng.li
                   ` (42 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Wenjing Liu,
	Paul Hsieh, bhawanpreet.lakha

From: Paul Hsieh <paul.hsieh@amd.com>

[Why]
Link training fails randomly when plugging a USB-C display in/out.

[How]
If link training fails, reset the PHY in link re-training.

Signed-off-by: Paul Hsieh <paul.hsieh@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 32 ++-------
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 68 +++++++++++++++----
 .../drm/amd/display/dc/core/dc_link_hwss.c    | 14 +---
 .../gpu/drm/amd/display/dc/inc/dc_link_dp.h   |  5 +-
 4 files changed, 66 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 093f6c808876..5a35395e6060 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -1495,7 +1495,6 @@ static enum dc_status enable_link_dp(
 	bool skip_video_pattern;
 	struct dc_link *link = stream->link;
 	struct dc_link_settings link_settings = {0};
-	enum dp_panel_mode panel_mode;
 	bool fec_enable;
 	int i;
 	bool apply_seamless_boot_optimization = false;
@@ -1531,40 +1530,17 @@ static enum dc_status enable_link_dp(
 	if (state->clk_mgr && !apply_seamless_boot_optimization)
 		state->clk_mgr->funcs->update_clocks(state->clk_mgr, state, false);
 
-	dp_enable_link_phy(
-		link,
-		pipe_ctx->stream->signal,
-		pipe_ctx->clock_source->id,
-		&link_settings);
-
-	if (stream->sink_patches.dppowerup_delay > 0) {
-		int delay_dp_power_up_in_ms = stream->sink_patches.dppowerup_delay;
-
-		msleep(delay_dp_power_up_in_ms);
-	}
-
-	panel_mode = dp_get_panel_mode(link);
-	dp_set_panel_mode(link, panel_mode);
-
-	/* We need to do this before the link training to ensure the idle pattern in SST
-	 * mode will be sent right after the link training */
-	link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
-						    pipe_ctx->stream_res.stream_enc->id, true);
 	skip_video_pattern = true;
 
 	if (link_settings.link_rate == LINK_RATE_LOW)
 			skip_video_pattern = false;
 
-	if (link->aux_access_disabled) {
-		dc_link_dp_perform_link_training_skip_aux(link, &link_settings);
-
-		link->cur_link_settings = link_settings;
-		status = DC_OK;
-	} else if (perform_link_training_with_retries(
-			link,
+	if (perform_link_training_with_retries(
 			&link_settings,
 			skip_video_pattern,
-			LINK_TRAINING_ATTEMPTS)) {
+			LINK_TRAINING_ATTEMPTS,
+			pipe_ctx,
+			pipe_ctx->stream->signal)) {
 		link->cur_link_settings = link_settings;
 		status = DC_OK;
 	}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 272261192e82..537b4dee8f22 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -1433,23 +1433,58 @@ enum link_training_result dc_link_dp_perform_link_training(
 }
 
 bool perform_link_training_with_retries(
-	struct dc_link *link,
 	const struct dc_link_settings *link_setting,
 	bool skip_video_pattern,
-	int attempts)
+	int attempts,
+	struct pipe_ctx *pipe_ctx,
+	enum signal_type signal)
 {
 	uint8_t j;
 	uint8_t delay_between_attempts = LINK_TRAINING_RETRY_DELAY;
+	struct dc_stream_state *stream = pipe_ctx->stream;
+	struct dc_link *link = stream->link;
+	enum dp_panel_mode panel_mode = dp_get_panel_mode(link);
 
 	for (j = 0; j < attempts; ++j) {
 
-		if (dc_link_dp_perform_link_training(
+		dp_enable_link_phy(
+			link,
+			signal,
+			pipe_ctx->clock_source->id,
+			link_setting);
+
+		if (stream->sink_patches.dppowerup_delay > 0) {
+			int delay_dp_power_up_in_ms = stream->sink_patches.dppowerup_delay;
+
+			msleep(delay_dp_power_up_in_ms);
+		}
+
+		dp_set_panel_mode(link, panel_mode);
+
+		/* We need to do this before the link training to ensure the idle pattern in SST
+		 * mode will be sent right after the link training
+		 */
+		link->link_enc->funcs->connect_dig_be_to_fe(link->link_enc,
+								pipe_ctx->stream_res.stream_enc->id, true);
+
+		if (link->aux_access_disabled) {
+			dc_link_dp_perform_link_training_skip_aux(link, link_setting);
+			return true;
+		} else if (dc_link_dp_perform_link_training(
 				link,
 				link_setting,
 				skip_video_pattern) == LINK_TRAINING_SUCCESS)
 			return true;
 
+		/* latest link training still fail, skip delay and keep PHY on
+		 */
+		if (j == (attempts - 1))
+			break;
+
+		dp_disable_link_phy(link, signal);
+
 		msleep(delay_between_attempts);
+
 		delay_between_attempts += LINK_TRAINING_RETRY_DELAY;
 	}
 
@@ -2770,17 +2805,26 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
 					sizeof(hpd_irq_dpcd_data),
 					"Status: ");
 
-		perform_link_training_with_retries(link,
-			&link->cur_link_settings,
-			true, LINK_TRAINING_ATTEMPTS);
-
 		for (i = 0; i < MAX_PIPES; i++) {
 			pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
-			if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link &&
-					pipe_ctx->stream->dpms_off == false &&
-					pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
-				dc_link_allocate_mst_payload(pipe_ctx);
-			}
+			if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
+				break;
+		}
+
+		if (pipe_ctx == NULL || pipe_ctx->stream == NULL)
+			return false;
+
+		dp_disable_link_phy(link, pipe_ctx->stream->signal);
+
+		perform_link_training_with_retries(&link->cur_link_settings,
+			true, LINK_TRAINING_ATTEMPTS,
+			pipe_ctx,
+			pipe_ctx->stream->signal);
+
+		if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link &&
+				pipe_ctx->stream->dpms_off == false &&
+				pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
+			dc_link_allocate_mst_payload(pipe_ctx);
 		}
 
 		status = false;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
index 67ce12df23f1..548aac02ca11 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_hwss.c
@@ -333,20 +333,12 @@ void dp_retrain_link_dp_test(struct dc_link *link,
 			memset(&link->cur_link_settings, 0,
 				sizeof(link->cur_link_settings));
 
-			link->link_enc->funcs->enable_dp_output(
-						link->link_enc,
-						link_setting,
-						pipes[i].clock_source->id);
-			link->cur_link_settings = *link_setting;
-
-			dp_receiver_power_ctrl(link, true);
-
 			perform_link_training_with_retries(
-					link,
 					link_setting,
 					skip_video_pattern,
-					LINK_TRAINING_ATTEMPTS);
-
+					LINK_TRAINING_ATTEMPTS,
+					&pipes[i],
+					SIGNAL_TYPE_DISPLAY_PORT);
 
 			link->dc->hwss.enable_stream(&pipes[i]);
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
index 4879cf54d8f1..6198bccd6199 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
@@ -57,10 +57,11 @@ void decide_link_settings(
 	struct dc_link_settings *link_setting);
 
 bool perform_link_training_with_retries(
-	struct dc_link *link,
 	const struct dc_link_settings *link_setting,
 	bool skip_video_pattern,
-	int attempts);
+	int attempts,
+	struct pipe_ctx *pipe_ctx,
+	enum signal_type signal);
 
 bool is_mst_supported(struct dc_link *link);
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread
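
Condensing the diff above, perform_link_training_with_retries() now brings
the PHY up inside each attempt and tears it down between failed attempts.
This is an illustrative summary only, simplified from the patch (the sink
dppowerup delay and the DIG BE-to-FE connect are omitted):

	for (j = 0; j < attempts; ++j) {
		dp_enable_link_phy(link, signal,
				pipe_ctx->clock_source->id, link_setting);
		dp_set_panel_mode(link, panel_mode);

		if (link->aux_access_disabled) {
			dc_link_dp_perform_link_training_skip_aux(link, link_setting);
			return true;
		} else if (dc_link_dp_perform_link_training(link, link_setting,
				skip_video_pattern) == LINK_TRAINING_SUCCESS)
			return true;

		/* Keep the PHY on after the last failed attempt... */
		if (j == (attempts - 1))
			break;

		/* ...otherwise reset it before the next try. */
		dp_disable_link_phy(link, signal);
		msleep(delay_between_attempts);
		delay_between_attempts += LINK_TRAINING_RETRY_DELAY;
	}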

* [PATCH 09/51] drm/amd/display: Disable link before reenable
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (7 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 08/51] drm/amd/display: Reset PHY in link re-training sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 10/51] drm/amd/display: Add DMCUB__PG_DONE trace code enum sunpeng.li
                   ` (41 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Lucy Li,
	bhawanpreet.lakha, Anthony Koo

From: Lucy Li <lucy.li@amd.com>

[Why]
A black screen is seen after the display is disabled and then
re-enabled. This is caused by a difference in link settings when
switching between resolutions.

[How]
In the PnP case, or whenever the display is still enabled but the
driver is unloaded, disable the link before re-enabling it with the
new link settings.

Signed-off-by: Lucy Li <lucy.li@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 99 ++++++++++---------
 1 file changed, 52 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 5a35395e6060..4681ca20f683 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -1511,15 +1511,6 @@ static enum dc_status enable_link_dp(
 	decide_link_settings(stream, &link_settings);
 
 	if (pipe_ctx->stream->signal == SIGNAL_TYPE_EDP) {
-		/* If link settings are different than current and link already enabled
-		 * then need to disable before programming to new rate.
-		 */
-		if (link->link_status.link_active &&
-			(link->cur_link_settings.lane_count != link_settings.lane_count ||
-			 link->cur_link_settings.link_rate != link_settings.link_rate)) {
-			dp_disable_link_phy(link, pipe_ctx->stream->signal);
-		}
-
 		/*in case it is not on*/
 		link->dc->hwss.edp_power_control(link, true);
 		link->dc->hwss.edp_wait_for_hpd_ready(link, true);
@@ -2039,6 +2030,45 @@ static void write_i2c_redriver_setting(
 		ASSERT(i2c_success);
 }
 
+static void disable_link(struct dc_link *link, enum signal_type signal)
+{
+	/*
+	 * TODO: implement call for dp_set_hw_test_pattern
+	 * it is needed for compliance testing
+	 */
+
+	/* Here we need to specify that encoder output settings
+	 * need to be calculated as for the set mode,
+	 * it will lead to querying dynamic link capabilities
+	 * which should be done before enable output
+	 */
+
+	if (dc_is_dp_signal(signal)) {
+		/* SST DP, eDP */
+		if (dc_is_dp_sst_signal(signal))
+			dp_disable_link_phy(link, signal);
+		else
+			dp_disable_link_phy_mst(link, signal);
+
+		if (dc_is_dp_sst_signal(signal) ||
+				link->mst_stream_alloc_table.stream_count == 0) {
+			dp_set_fec_enable(link, false);
+			dp_set_fec_ready(link, false);
+		}
+	} else {
+		if (signal != SIGNAL_TYPE_VIRTUAL)
+			link->link_enc->funcs->disable_output(link->link_enc, signal);
+	}
+
+	if (signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
+		/* MST disable link only when no stream use the link */
+		if (link->mst_stream_alloc_table.stream_count <= 0)
+			link->link_status.link_active = false;
+	} else {
+		link->link_status.link_active = false;
+	}
+}
+
 static void enable_link_hdmi(struct pipe_ctx *pipe_ctx)
 {
 	struct dc_stream_state *stream = pipe_ctx->stream;
@@ -2123,6 +2153,19 @@ static enum dc_status enable_link(
 		struct pipe_ctx *pipe_ctx)
 {
 	enum dc_status status = DC_ERROR_UNEXPECTED;
+	struct dc_stream_state *stream = pipe_ctx->stream;
+	struct dc_link *link = stream->link;
+
+	/* There's some scenarios where driver is unloaded with display
+	 * still enabled. When driver is reloaded, it may cause a display
+	 * to not light up if there is a mismatch between old and new
+	 * link settings. Need to call disable first before enabling at
+	 * new link settings.
+	 */
+	if (link->link_status.link_active) {
+		disable_link(link, pipe_ctx->stream->signal);
+	}
+
 	switch (pipe_ctx->stream->signal) {
 	case SIGNAL_TYPE_DISPLAY_PORT:
 		status = enable_link_dp(state, pipe_ctx);
@@ -2157,44 +2200,6 @@ static enum dc_status enable_link(
 	return status;
 }
 
-static void disable_link(struct dc_link *link, enum signal_type signal)
-{
-	/*
-	 * TODO: implement call for dp_set_hw_test_pattern
-	 * it is needed for compliance testing
-	 */
-
-	/* here we need to specify that encoder output settings
-	 * need to be calculated as for the set mode,
-	 * it will lead to querying dynamic link capabilities
-	 * which should be done before enable output */
-
-	if (dc_is_dp_signal(signal)) {
-		/* SST DP, eDP */
-		if (dc_is_dp_sst_signal(signal))
-			dp_disable_link_phy(link, signal);
-		else
-			dp_disable_link_phy_mst(link, signal);
-
-		if (dc_is_dp_sst_signal(signal) ||
-				link->mst_stream_alloc_table.stream_count == 0) {
-			dp_set_fec_enable(link, false);
-			dp_set_fec_ready(link, false);
-		}
-	} else {
-		if (signal != SIGNAL_TYPE_VIRTUAL)
-			link->link_enc->funcs->disable_output(link->link_enc, signal);
-	}
-
-	if (signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
-		/* MST disable link only when no stream use the link */
-		if (link->mst_stream_alloc_table.stream_count <= 0)
-			link->link_status.link_active = false;
-	} else {
-		link->link_status.link_active = false;
-	}
-}
-
 static uint32_t get_timing_pixel_clock_100hz(const struct dc_crtc_timing *timing)
 {
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 10/51] drm/amd/display: Add DMCUB__PG_DONE trace code enum
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (8 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 09/51] drm/amd/display: Disable link before reenable sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 11/51] drm/amd/display: Only wait for DMUB phy init on dcn21 sunpeng.li
                   ` (40 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Yongqiang Sun,
	harry.wentland, bhawanpreet.lakha

From: Yongqiang Sun <yongqiang.sun@amd.com>

Signed-off-by: Yongqiang Sun <yongqiang.sun@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/inc/dmub_trace_buffer.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_trace_buffer.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_trace_buffer.h
index b0ee099d8a6e..6b3ee42db350 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_trace_buffer.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_trace_buffer.h
@@ -45,6 +45,7 @@ enum dmucb_trace_code {
 	DMCUB__DMCU_ISR_LOAD_END,
 	DMCUB__MAIN_IDLE,
 	DMCUB__PERF_TRACE,
+	DMCUB__PG_DONE,
 };
 
 struct dmcub_trace_buf_entry {
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 11/51] drm/amd/display: Only wait for DMUB phy init on dcn21
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (9 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 10/51] drm/amd/display: Add DMCUB__PG_DONE trace code enum sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 12/51] drm/amd/display: Return DMUB_STATUS_OK when autoload unsupported sunpeng.li
                   ` (39 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
The wait for PHY init won't finish if the firmware doesn't support it.

[How]
Only hook this functionality up on DCN21 and move it out of DCN20.

For ASIC without support then this should return OK so we don't hang
while waiting in DC.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c | 5 -----
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h | 2 --
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c | 5 +++++
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h | 2 ++
 drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c   | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
index e2b2cf2e01fd..6b7d54572aa3 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
@@ -135,8 +135,3 @@ bool dmub_dcn20_is_supported(struct dmub_srv *dmub)
 
 	return supported;
 }
-
-bool dmub_dcn20_is_phy_init(struct dmub_srv *dmub)
-{
-	return REG_READ(DMCUB_SCRATCH10) == 0;
-}
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
index e1ba748ca594..ca7db03b94f7 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
@@ -59,6 +59,4 @@ bool dmub_dcn20_is_hw_init(struct dmub_srv *dmub);
 
 bool dmub_dcn20_is_supported(struct dmub_srv *dmub);
 
-bool dmub_dcn20_is_phy_init(struct dmub_srv *dmub);
-
 #endif /* _DMUB_DCN20_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
index d40a808112e7..b9dc2dd645eb 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
@@ -124,3 +124,8 @@ bool dmub_dcn21_is_auto_load_done(struct dmub_srv *dmub)
 {
 	return (REG_READ(DMCUB_SCRATCH0) == 3);
 }
+
+bool dmub_dcn21_is_phy_init(struct dmub_srv *dmub)
+{
+	return REG_READ(DMCUB_SCRATCH10) == 0;
+}
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
index f57969d8d56f..9e5f195e288f 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
@@ -42,4 +42,6 @@ void dmub_dcn21_setup_windows(struct dmub_srv *dmub,
 
 bool dmub_dcn21_is_auto_load_done(struct dmub_srv *dmub);
 
+bool dmub_dcn21_is_phy_init(struct dmub_srv *dmub);
+
 #endif /* _DMUB_DCN21_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 60c574a39c6a..3ec26f6af2e1 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -76,13 +76,13 @@ static bool dmub_srv_hw_setup(struct dmub_srv *dmub, enum dmub_asic asic)
 		funcs->get_inbox1_rptr = dmub_dcn20_get_inbox1_rptr;
 		funcs->set_inbox1_wptr = dmub_dcn20_set_inbox1_wptr;
 		funcs->is_supported = dmub_dcn20_is_supported;
-		funcs->is_phy_init = dmub_dcn20_is_phy_init;
 		funcs->is_hw_init = dmub_dcn20_is_hw_init;
 
 		if (asic == DMUB_ASIC_DCN21) {
 			funcs->backdoor_load = dmub_dcn21_backdoor_load;
 			funcs->setup_windows = dmub_dcn21_setup_windows;
 			funcs->is_auto_load_done = dmub_dcn21_is_auto_load_done;
+			funcs->is_phy_init = dmub_dcn21_is_phy_init;
 		}
 		break;
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 12/51] drm/amd/display: Return DMUB_STATUS_OK when autoload unsupported
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (10 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 11/51] drm/amd/display: Only wait for DMUB phy init on dcn21 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 13/51] drm/amd/display: Program CW5 for tracebuffer for dcn20 sunpeng.li
                   ` (38 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
Not having support for autoload isn't an error. If the DMUB firmware
doesn't support it then don't return DMUB_STATUS_INVALID.

[How]
Return DMUB_STATUS_OK when ->is_auto_load_done is NULL.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 3ec26f6af2e1..70c7a4be9ccc 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -379,9 +379,12 @@ enum dmub_status dmub_srv_wait_for_auto_load(struct dmub_srv *dmub,
 {
 	uint32_t i;
 
-	if (!dmub->hw_init || !dmub->hw_funcs.is_auto_load_done)
+	if (!dmub->hw_init)
 		return DMUB_STATUS_INVALID;
 
+	if (!dmub->hw_funcs.is_auto_load_done)
+		return DMUB_STATUS_OK;
+
 	for (i = 0; i <= timeout_us; i += 100) {
 		if (dmub->hw_funcs.is_auto_load_done(dmub))
 			return DMUB_STATUS_OK;
@@ -397,9 +400,12 @@ enum dmub_status dmub_srv_wait_for_phy_init(struct dmub_srv *dmub,
 {
 	uint32_t i = 0;
 
-	if (!dmub->hw_init || !dmub->hw_funcs.is_phy_init)
+	if (!dmub->hw_init)
 		return DMUB_STATUS_INVALID;
 
+	if (!dmub->hw_funcs.is_phy_init)
+		return DMUB_STATUS_OK;
+
 	for (i = 0; i <= timeout_us; i += 10) {
 		if (dmub->hw_funcs.is_phy_init(dmub))
 			return DMUB_STATUS_OK;
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 13/51] drm/amd/display: Program CW5 for tracebuffer for dcn20
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (11 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 12/51] drm/amd/display: Return DMUB_STATUS_OK when autoload unsupported sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 14/51] drm/amd/display: populate bios integrated info for renoir sunpeng.li
                   ` (37 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
On dcn21, CW5 is programmed for tracebuffer support, but it isn't being
programmed on dcn20.

DMCUB execution hits an undefined address 65000000 on tracebuffer
access.

[How]
Program CW5.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
index 6b7d54572aa3..302dd3d4b77d 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
@@ -99,6 +99,13 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
 	REG_SET_2(DMCUB_REGION4_TOP_ADDRESS, 0, DMCUB_REGION4_TOP_ADDRESS,
 		  cw4->region.top - cw4->region.base - 1, DMCUB_REGION4_ENABLE,
 		  1);
+
+	REG_WRITE(DMCUB_REGION3_CW5_OFFSET, cw5->offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW5_OFFSET_HIGH, cw5->offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW5_BASE_ADDRESS, cw5->region.base);
+	REG_SET_2(DMCUB_REGION3_CW5_TOP_ADDRESS, 0,
+		  DMCUB_REGION3_CW5_TOP_ADDRESS, cw5->region.top,
+		  DMCUB_REGION3_CW5_ENABLE, 1);
 }
 
 void dmub_dcn20_setup_mailbox(struct dmub_srv *dmub,
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 14/51] drm/amd/display: populate bios integrated info for renoir
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (12 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 13/51] drm/amd/display: Program CW5 for tracebuffer for dcn20 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 15/51] drm/amd/display: Fixed kernel panic when booting with DP-to-HDMI dongle sunpeng.li
                   ` (36 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Joseph Gravenor,
	harry.wentland, bhawanpreet.lakha

From: Joseph Gravenor <joseph.gravenor@amd.com>

[Why]
When video_memory_type bw_params->vram_type
is assigned, we do not distinguish between Ddr4MemType and LpDdr4MemType.
Because of this we will never report that we are using
LpDdr4MemType, and we never re-purpose WM set D.

[How]
Populate the bios integrated info for renoir by adding the
revision number for renoir, and use that integrated info
table instead of asic_id to get the vram type.

Signed-off-by: Joseph Gravenor <joseph.gravenor@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c     |  1 +
 .../gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c  | 10 ++++++----
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index eb06ee765c78..72795ae81dd0 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -1638,6 +1638,7 @@ static enum bp_result construct_integrated_info(
 		/* Don't need to check major revision as they are all 1 */
 		switch (revision.minor) {
 		case 11:
+		case 12:
 			result = get_integrated_info_v11(bp, info);
 			break;
 		default:
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 841095d09d3c..9f0381c68844 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -569,7 +569,7 @@ static unsigned int find_dcfclk_for_voltage(struct dpm_clocks *clock_table, unsi
 	return 0;
 }
 
-static void rn_clk_mgr_helper_populate_bw_params(struct clk_bw_params *bw_params, struct dpm_clocks *clock_table, struct hw_asic_id *asic_id)
+static void rn_clk_mgr_helper_populate_bw_params(struct clk_bw_params *bw_params, struct dpm_clocks *clock_table, struct integrated_info *bios_info)
 {
 	int i, j = 0;
 
@@ -601,8 +601,8 @@ static void rn_clk_mgr_helper_populate_bw_params(struct clk_bw_params *bw_params
 		bw_params->clk_table.entries[i].dcfclk_mhz = find_dcfclk_for_voltage(clock_table, clock_table->FClocks[j].Vol);
 	}
 
-	bw_params->vram_type = asic_id->vram_type;
-	bw_params->num_channels = asic_id->vram_width / DDR4_DRAM_WIDTH;
+	bw_params->vram_type = bios_info->memory_type;
+	bw_params->num_channels = bios_info->ma_channel_number;
 
 	for (i = 0; i < WM_SET_COUNT; i++) {
 		bw_params->wm_table.entries[i].wm_inst = i;
@@ -685,7 +685,9 @@ void rn_clk_mgr_construct(
 
 	if (pp_smu && pp_smu->rn_funcs.get_dpm_clock_table) {
 		pp_smu->rn_funcs.get_dpm_clock_table(&pp_smu->rn_funcs.pp_smu, &clock_table);
-		rn_clk_mgr_helper_populate_bw_params(clk_mgr->base.bw_params, &clock_table, &ctx->asic_id);
+		if (ctx->dc_bios && ctx->dc_bios->integrated_info) {
+			rn_clk_mgr_helper_populate_bw_params (clk_mgr->base.bw_params, &clock_table, ctx->dc_bios->integrated_info);
+		}
 	}
 
 	if (!IS_FPGA_MAXIMUS_DC(ctx->dce_environment) && clk_mgr->smu_ver >= 0x00371500) {
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 15/51] drm/amd/display: Fixed kernel panic when booting with DP-to-HDMI dongle
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (13 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 14/51] drm/amd/display: populate bios integrated info for renoir sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 16/51] drm/amd/display: have two different sr and pstate latency tables for renoir sunpeng.li
                   ` (35 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: David Galiffi, Leo Li, Tony Cheng, rodrigo.siqueira,
	harry.wentland, bhawanpreet.lakha

From: David Galiffi <David.Galiffi@amd.com>

[Why]
In dc_link_is_dp_sink_present, if dal_ddc_open fails, then
dal_gpio_destroy_ddc is called, destroying pin_data and pin_clock. They
are created only in dc_construct, so the next AUX access will cause a panic.

[How]
Instead of calling dal_gpio_destroy_ddc, call dal_ddc_close.

Signed-off-by: David Galiffi <David.Galiffi@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 4681ca20f683..cef8c1ba9797 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -372,7 +372,7 @@ bool dc_link_is_dp_sink_present(struct dc_link *link)
 
 	if (GPIO_RESULT_OK != dal_ddc_open(
 		ddc, GPIO_MODE_INPUT, GPIO_DDC_CONFIG_TYPE_MODE_I2C)) {
-		dal_gpio_destroy_ddc(&ddc);
+		dal_ddc_close(ddc);
 
 		return present;
 	}
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 16/51] drm/amd/display: have two different sr and pstate latency tables for renoir
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (14 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 15/51] drm/amd/display: Fixed kernel panic when booting with DP-to-HDMI dongle sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 17/51] drm/amd/display: fix dprefclk and ss percentage reading on RN sunpeng.li
                   ` (34 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Eric Yang, Leo Li, harry.wentland, rodrigo.siqueira,
	Joseph Gravenor, bhawanpreet.lakha

From: Joseph Gravenor <joseph.gravenor@amd.com>

[Why]
The new sr and pstate latencies are optimized for the case when we are
not using lpddr4 memory.

[How]
Have two different wm tables: one for the lpddr4 case and one for the
non-lpddr4 case.

Signed-off-by: Joseph Gravenor <joseph.gravenor@amd.com>
Reviewed-by: Eric Yang <eric.yang2@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 114 ++++++++++++------
 1 file changed, 80 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 9f0381c68844..89ed230cdb26 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -518,44 +518,83 @@ struct clk_bw_params rn_bw_params = {
 		.num_entries = 4,
 	},
 
-	.wm_table = {
-		.entries = {
-			{
-				.wm_inst = WM_A,
-				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 11.72,
-				.sr_exit_time_us = 6.09,
-				.sr_enter_plus_exit_time_us = 7.14,
-				.valid = true,
-			},
-			{
-				.wm_inst = WM_B,
-				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 11.72,
-				.sr_exit_time_us = 10.12,
-				.sr_enter_plus_exit_time_us = 11.48,
-				.valid = true,
-			},
-			{
-				.wm_inst = WM_C,
-				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 11.72,
-				.sr_exit_time_us = 10.12,
-				.sr_enter_plus_exit_time_us = 11.48,
-				.valid = true,
-			},
-			{
-				.wm_inst = WM_D,
-				.wm_type = WM_TYPE_PSTATE_CHG,
-				.pstate_latency_us = 11.72,
-				.sr_exit_time_us = 10.12,
-				.sr_enter_plus_exit_time_us = 11.48,
-				.valid = true,
-			},
+};
+
+struct wm_table ddr4_wm_table = {
+	.entries = {
+		{
+			.wm_inst = WM_A,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 11.72,
+			.sr_exit_time_us = 6.09,
+			.sr_enter_plus_exit_time_us = 7.14,
+			.valid = true,
+		},
+		{
+			.wm_inst = WM_B,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 11.72,
+			.sr_exit_time_us = 10.12,
+			.sr_enter_plus_exit_time_us = 11.48,
+			.valid = true,
+		},
+		{
+			.wm_inst = WM_C,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 11.72,
+			.sr_exit_time_us = 10.12,
+			.sr_enter_plus_exit_time_us = 11.48,
+			.valid = true,
+		},
+		{
+			.wm_inst = WM_D,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 11.72,
+			.sr_exit_time_us = 10.12,
+			.sr_enter_plus_exit_time_us = 11.48,
+			.valid = true,
 		},
 	}
 };
 
+struct wm_table lpddr4_wm_table = {
+	.entries = {
+		{
+			.wm_inst = WM_A,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 23.84,
+			.sr_exit_time_us = 12.5,
+			.sr_enter_plus_exit_time_us = 17.0,
+			.valid = true,
+		},
+		{
+			.wm_inst = WM_B,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 23.84,
+			.sr_exit_time_us = 12.5,
+			.sr_enter_plus_exit_time_us = 17.0,
+			.valid = true,
+		},
+		{
+			.wm_inst = WM_C,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 23.84,
+			.sr_exit_time_us = 12.5,
+			.sr_enter_plus_exit_time_us = 17.0,
+			.valid = true,
+		},
+		{
+			.wm_inst = WM_D,
+			.wm_type = WM_TYPE_PSTATE_CHG,
+			.pstate_latency_us = 23.84,
+			.sr_exit_time_us = 12.5,
+			.sr_enter_plus_exit_time_us = 17.0,
+			.valid = true,
+		},
+	}
+};
+
+
 static unsigned int find_dcfclk_for_voltage(struct dpm_clocks *clock_table, unsigned int voltage)
 {
 	int i;
@@ -677,10 +716,17 @@ void rn_clk_mgr_construct(
 			ASSERT(clk_mgr->base.dprefclk_khz == 600000);
 			clk_mgr->base.dprefclk_khz = 600000;
 		}
+
+		if (ctx->dc_bios->integrated_info->memory_type == LpDdr4MemType) {
+			rn_bw_params.wm_table = lpddr4_wm_table;
+		} else {
+			rn_bw_params.wm_table = ddr4_wm_table;
+		}
 	}
 
 	dce_clock_read_ss_info(clk_mgr);
 
+
 	clk_mgr->base.bw_params = &rn_bw_params;
 
 	if (pp_smu && pp_smu->rn_funcs.get_dpm_clock_table) {
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 17/51] drm/amd/display: fix dprefclk and ss percentage reading on RN
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (15 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 16/51] drm/amd/display: have two different sr and pstate latency tables for renoir sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 18/51] drm/amd/display: 3.2.61 sunpeng.li
                   ` (33 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Eric Yang, Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha

From: Eric Yang <Eric.Yang2@amd.com>

[Why]
Previously, the HW counter value was used to determine dprefclk. That
value takes ss into account, but has large variation and is not accurate
enough for generating the audio dto. Also, the bios parser code to get
the ss percentage was not working.

[How]
After this change, dprefclk is hard coded, the same as on RV; we don't
expect this to change on Renoir. The bios parser code is modified to get
the right ss percentage.

Signed-off-by: Eric Yang <Eric.Yang2@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../gpu/drm/amd/display/dc/bios/bios_parser2.c   |  1 +
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c    | 16 +++-------------
 drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h  |  1 +
 3 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index 72795ae81dd0..da29fd62f56a 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -834,6 +834,7 @@ static enum bp_result bios_parser_get_spread_spectrum_info(
 		case 1:
 			return get_ss_info_v4_1(bp, signal, index, ss_info);
 		case 2:
+		case 3:
 			return get_ss_info_v4_2(bp, signal, index, ss_info);
 		default:
 			break;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 89ed230cdb26..307c8540e36f 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -675,7 +675,6 @@ void rn_clk_mgr_construct(
 {
 	struct dc_debug_options *debug = &ctx->dc->debug;
 	struct dpm_clocks clock_table = { 0 };
-	struct clk_state_registers_and_bypass s = { 0 };
 
 	clk_mgr->base.ctx = ctx;
 	clk_mgr->base.funcs = &dcn21_funcs;
@@ -695,7 +694,6 @@ void rn_clk_mgr_construct(
 	if (IS_FPGA_MAXIMUS_DC(ctx->dce_environment)) {
 		dcn21_funcs.update_clocks = dcn2_update_clocks_fpga;
 		clk_mgr->base.dentist_vco_freq_khz = 3600000;
-		clk_mgr->base.dprefclk_khz = 600000;
 	} else {
 		struct clk_log_info log_info = {0};
 
@@ -706,24 +704,16 @@ void rn_clk_mgr_construct(
 		if (clk_mgr->base.dentist_vco_freq_khz == 0)
 			clk_mgr->base.dentist_vco_freq_khz = 3600000;
 
-		rn_dump_clk_registers(&s, &clk_mgr->base, &log_info);
-		/* Convert dprefclk units from MHz to KHz */
-		/* Value already divided by 10, some resolution lost */
-		clk_mgr->base.dprefclk_khz = s.dprefclk * 1000;
-
-		/* in case we don't get a value from the register, use default */
-		if (clk_mgr->base.dprefclk_khz == 0) {
-			ASSERT(clk_mgr->base.dprefclk_khz == 600000);
-			clk_mgr->base.dprefclk_khz = 600000;
-		}
-
 		if (ctx->dc_bios->integrated_info->memory_type == LpDdr4MemType) {
 			rn_bw_params.wm_table = lpddr4_wm_table;
 		} else {
 			rn_bw_params.wm_table = ddr4_wm_table;
 		}
+		/* Saved clocks configured at boot for debug purposes */
+		rn_dump_clk_registers(&clk_mgr->base.boot_snapshot, &clk_mgr->base, &log_info);
 	}
 
+	clk_mgr->base.dprefclk_khz = 600000;
 	dce_clock_read_ss_info(clk_mgr);
 
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
index 4aa09fe954c5..ac530c057ddd 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
@@ -191,6 +191,7 @@ struct clk_mgr {
 	bool psr_allow_active_cache;
 	int dprefclk_khz; // Used by program pixel clock in clock source funcs, need to figureout where this goes
 	int dentist_vco_freq_khz;
+	struct clk_state_registers_and_bypass boot_snapshot;
 	struct clk_bw_params *bw_params;
 };
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 18/51] drm/amd/display: 3.2.61
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (16 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 17/51] drm/amd/display: fix dprefclk and ss percentage reading on RN sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 19/51] drm/amd/display: Change the delay time before enabling FEC sunpeng.li
                   ` (32 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: bhawanpreet.lakha, rodrigo.siqueira, Aric Cyr, Leo Li, harry.wentland

From: Aric Cyr <aric.cyr@amd.com>

Signed-off-by: Aric Cyr <aric.cyr@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 3e6133f8cdc4..34b824270c84 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -39,7 +39,7 @@
 #include "inc/hw/dmcu.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.2.60"
+#define DC_VER "3.2.61"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 19/51] drm/amd/display: Change the delay time before enabling FEC
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (17 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 18/51] drm/amd/display: 3.2.61 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 20/51] drm/amd/display: fixed that I2C over AUX didn't read data issue sunpeng.li
                   ` (31 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Harry Wentland, rodrigo.siqueira, Nikola Cornij,
	Leo (Hanghong) Ma, bhawanpreet.lakha

From: "Leo (Hanghong) Ma" <hanghong.ma@amd.com>

[why]
The DP spec requires a 1000-symbol delay between the end of link training
and enabling FEC in the stream. Currently we are using a 1 millisecond
delay, which is not accurate.

[how]
One-lane RBR has the maximum time for transmitting 1000 LL codes, which
is 6.173 us, so use a 7 microsecond delay instead of 1 millisecond.
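
A quick sanity check of the 6.173 us figure (my own arithmetic, assuming
RBR = 1.62 Gbps per lane with 8b/10b coding, i.e. 10 bits per LL symbol):

	symbol rate per lane   = 1.62e9 / 10  = 162e6 symbols/s
	time for 1000 LL codes = 1000 / 162e6 ~ 6.17 us

so a 7 us delay covers the worst case (a single RBR lane) with margin.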

Signed-off-by: Leo (Hanghong) Ma <hanghong.ma@amd.com>
Reviewed-by: Harry Wentland <Harry.Wentland@amd.com>
Reviewed-by: Nikola Cornij <Nikola.Cornij@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 537b4dee8f22..b10019106030 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -3951,7 +3951,14 @@ void dp_set_fec_enable(struct dc_link *link, bool enable)
 	if (link_enc->funcs->fec_set_enable &&
 			link->dpcd_caps.fec_cap.bits.FEC_CAPABLE) {
 		if (link->fec_state == dc_link_fec_ready && enable) {
-			msleep(1);
+			/* Accord to DP spec, FEC enable sequence can first
+			 * be transmitted anytime after 1000 LL codes have
+			 * been transmitted on the link after link training
+			 * completion. Using 1 lane RBR should have the maximum
+			 * time for transmitting 1000 LL codes which is 6.173 us.
+			 * So use 7 microseconds delay instead.
+			 */
+			udelay(7);
 			link_enc->funcs->fec_set_enable(link_enc, true);
 			link->fec_state = dc_link_fec_enabled;
 		} else if (link->fec_state == dc_link_fec_enabled && !enable) {
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 20/51] drm/amd/display: fixed that I2C over AUX didn't read data issue
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (18 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 19/51] drm/amd/display: Change the delay time before enabling FEC sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 21/51] drm/amd/display: add log for lttpr sunpeng.li
                   ` (30 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Charlene Liu, Leo Li, harry.wentland, rodrigo.siqueira,
	Brandon Syu, bhawanpreet.lakha

From: Brandon Syu <Brandon.Syu@amd.com>

[Why]
A mismatched variable type in the assignment causes the I2C over AUX
read-data issue.

[How]
Use uint32_t for the variable instead.
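
A generic sketch of the failure mode (hypothetical values, not the exact
driver path; a uint8_t simply cannot hold a length above 255):

	uint32_t requested = 300;	/* e.g. a larger-than-255-byte transfer */
	uint8_t  length    = requested;	/* silently truncated to 44 */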

Signed-off-by: Brandon Syu <Brandon.Syu@amd.com>
Reviewed-by: Charlene Liu <Charlene.Liu@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c      | 2 +-
 drivers/gpu/drm/amd/display/include/i2caux_interface.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
index 3fc9752edfe0..c2c136b12184 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
@@ -589,7 +589,7 @@ bool dal_ddc_service_query_ddc_data(
 bool dal_ddc_submit_aux_command(struct ddc_service *ddc,
 		struct aux_payload *payload)
 {
-	uint8_t retrieved = 0;
+	uint32_t retrieved = 0;
 	bool ret = 0;
 
 	if (!ddc)
diff --git a/drivers/gpu/drm/amd/display/include/i2caux_interface.h b/drivers/gpu/drm/amd/display/include/i2caux_interface.h
index bb012cb1a9f5..c7fbb9c3ad6b 100644
--- a/drivers/gpu/drm/amd/display/include/i2caux_interface.h
+++ b/drivers/gpu/drm/amd/display/include/i2caux_interface.h
@@ -42,7 +42,7 @@ struct aux_payload {
 	bool write;
 	bool mot;
 	uint32_t address;
-	uint8_t length;
+	uint32_t length;
 	uint8_t *data;
 	/*
 	 * used to return the reply type of the transaction
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 21/51] drm/amd/display: add log for lttpr
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (19 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 20/51] drm/amd/display: fixed that I2C over AUX didn't read data issue sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 22/51] drm/amd/display: Disable chroma viewport w/a when rotated 180 degrees sunpeng.li
                   ` (29 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Wenjing Liu,
	abdoulaye berthe, bhawanpreet.lakha

From: abdoulaye berthe <abdoulaye.berthe@amd.com>

Signed-off-by: abdoulaye berthe <abdoulaye.berthe@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  | 125 +++++++++++++-----
 1 file changed, 93 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index b10019106030..486c14e0cd41 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -255,11 +255,18 @@ static void dpcd_set_lt_pattern_and_lane_settings(
 	dpcd_lt_buffer[DP_TRAINING_PATTERN_SET - DP_TRAINING_PATTERN_SET]
 		= dpcd_pattern.raw;
 
-	DC_LOG_HW_LINK_TRAINING("%s\n 0x%X pattern = %x\n",
-		__func__,
-		dpcd_base_lt_offset,
-		dpcd_pattern.v1_4.TRAINING_PATTERN_SET);
-
+	if (is_repeater(link, offset)) {
+		DC_LOG_HW_LINK_TRAINING("%s\n LTTPR Repeater ID: %d\n 0x%X pattern = %x\n",
+			__func__,
+			offset,
+			dpcd_base_lt_offset,
+			dpcd_pattern.v1_4.TRAINING_PATTERN_SET);
+	} else {
+		DC_LOG_HW_LINK_TRAINING("%s\n 0x%X pattern = %x\n",
+			__func__,
+			dpcd_base_lt_offset,
+			dpcd_pattern.v1_4.TRAINING_PATTERN_SET);
+	}
 	/*****************************************************************
 	* DpcdAddress_Lane0Set -> DpcdAddress_Lane3Set
 	*****************************************************************/
@@ -289,14 +296,25 @@ static void dpcd_set_lt_pattern_and_lane_settings(
 		dpcd_lane,
 		size_in_bytes);
 
-	DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
-		__func__,
-		dpcd_base_lt_offset,
-		dpcd_lane[0].bits.VOLTAGE_SWING_SET,
-		dpcd_lane[0].bits.PRE_EMPHASIS_SET,
-		dpcd_lane[0].bits.MAX_SWING_REACHED,
-		dpcd_lane[0].bits.MAX_PRE_EMPHASIS_REACHED);
-
+	if (is_repeater(link, offset)) {
+		DC_LOG_HW_LINK_TRAINING("%s:\n LTTPR Repeater ID: %d\n"
+				" 0x%X VS set = %x PE set = %x max VS Reached = %x  max PE Reached = %x\n",
+			__func__,
+			offset,
+			dpcd_base_lt_offset,
+			dpcd_lane[0].bits.VOLTAGE_SWING_SET,
+			dpcd_lane[0].bits.PRE_EMPHASIS_SET,
+			dpcd_lane[0].bits.MAX_SWING_REACHED,
+			dpcd_lane[0].bits.MAX_PRE_EMPHASIS_REACHED);
+	} else {
+		DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
+			__func__,
+			dpcd_base_lt_offset,
+			dpcd_lane[0].bits.VOLTAGE_SWING_SET,
+			dpcd_lane[0].bits.PRE_EMPHASIS_SET,
+			dpcd_lane[0].bits.MAX_SWING_REACHED,
+			dpcd_lane[0].bits.MAX_PRE_EMPHASIS_REACHED);
+	}
 	if (edp_workaround) {
 		/* for eDP write in 2 parts because the 5-byte burst is
 		* causing issues on some eDP panels (EPR#366724)
@@ -544,23 +562,42 @@ static void get_lane_status_and_drive_settings(
 
 	ln_status_updated->raw = dpcd_buf[2];
 
-	DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X Lane01Status = %x\n 0x%X Lane23Status = %x\n ",
-		__func__,
-		lane01_status_address, dpcd_buf[0],
-		lane01_status_address + 1, dpcd_buf[1]);
-
+	if (is_repeater(link, offset)) {
+		DC_LOG_HW_LINK_TRAINING("%s:\n LTTPR Repeater ID: %d\n"
+				" 0x%X Lane01Status = %x\n 0x%X Lane23Status = %x\n ",
+			__func__,
+			offset,
+			lane01_status_address, dpcd_buf[0],
+			lane01_status_address + 1, dpcd_buf[1]);
+	} else {
+		DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X Lane01Status = %x\n 0x%X Lane23Status = %x\n ",
+			__func__,
+			lane01_status_address, dpcd_buf[0],
+			lane01_status_address + 1, dpcd_buf[1]);
+	}
 	lane01_adjust_address = DP_ADJUST_REQUEST_LANE0_1;
 
 	if (is_repeater(link, offset))
 		lane01_adjust_address = DP_ADJUST_REQUEST_LANE0_1_PHY_REPEATER1 +
 				((DP_REPEATER_CONFIGURATION_AND_STATUS_SIZE) * (offset - 1));
 
-	DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X Lane01AdjustRequest = %x\n 0x%X Lane23AdjustRequest = %x\n",
-		__func__,
-		lane01_adjust_address,
-		dpcd_buf[lane_adjust_offset],
-		lane01_adjust_address + 1,
-		dpcd_buf[lane_adjust_offset + 1]);
+	if (is_repeater(link, offset)) {
+		DC_LOG_HW_LINK_TRAINING("%s:\n LTTPR Repeater ID: %d\n"
+				" 0x%X Lane01AdjustRequest = %x\n 0x%X Lane23AdjustRequest = %x\n",
+					__func__,
+					offset,
+					lane01_adjust_address,
+					dpcd_buf[lane_adjust_offset],
+					lane01_adjust_address + 1,
+					dpcd_buf[lane_adjust_offset + 1]);
+	} else {
+		DC_LOG_HW_LINK_TRAINING("%s:\n 0x%X Lane01AdjustRequest = %x\n 0x%X Lane23AdjustRequest = %x\n",
+			__func__,
+			lane01_adjust_address,
+			dpcd_buf[lane_adjust_offset],
+			lane01_adjust_address + 1,
+			dpcd_buf[lane_adjust_offset + 1]);
+	}
 
 	/*copy to req_settings*/
 	request_settings.link_settings.lane_count =
@@ -656,14 +693,26 @@ static void dpcd_set_lane_settings(
 	}
 	*/
 
-	DC_LOG_HW_LINK_TRAINING("%s\n 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
-		__func__,
-		lane0_set_address,
-		dpcd_lane[0].bits.VOLTAGE_SWING_SET,
-		dpcd_lane[0].bits.PRE_EMPHASIS_SET,
-		dpcd_lane[0].bits.MAX_SWING_REACHED,
-		dpcd_lane[0].bits.MAX_PRE_EMPHASIS_REACHED);
+	if (is_repeater(link, offset)) {
+		DC_LOG_HW_LINK_TRAINING("%s\n LTTPR Repeater ID: %d\n"
+				" 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
+			__func__,
+			offset,
+			lane0_set_address,
+			dpcd_lane[0].bits.VOLTAGE_SWING_SET,
+			dpcd_lane[0].bits.PRE_EMPHASIS_SET,
+			dpcd_lane[0].bits.MAX_SWING_REACHED,
+			dpcd_lane[0].bits.MAX_PRE_EMPHASIS_REACHED);
 
+	} else {
+		DC_LOG_HW_LINK_TRAINING("%s\n 0x%X VS set = %x  PE set = %x max VS Reached = %x  max PE Reached = %x\n",
+			__func__,
+			lane0_set_address,
+			dpcd_lane[0].bits.VOLTAGE_SWING_SET,
+			dpcd_lane[0].bits.PRE_EMPHASIS_SET,
+			dpcd_lane[0].bits.MAX_SWING_REACHED,
+			dpcd_lane[0].bits.MAX_PRE_EMPHASIS_REACHED);
+	}
 	link->cur_lane_setting = link_training_setting->lane_settings[0];
 
 }
@@ -1170,12 +1219,16 @@ static void configure_lttpr_mode(struct dc_link *link)
 	uint8_t repeater_id;
 	uint8_t repeater_mode = DP_PHY_REPEATER_MODE_TRANSPARENT;
 
+	DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Non Transparent Mode\n", __func__);
 	core_link_write_dpcd(link,
 			DP_PHY_REPEATER_MODE,
 			(uint8_t *)&repeater_mode,
 			sizeof(repeater_mode));
 
 	if (!link->is_lttpr_mode_transparent) {
+
+		DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Transparent Mode\n", __func__);
+
 		repeater_mode = DP_PHY_REPEATER_MODE_NON_TRANSPARENT;
 		core_link_write_dpcd(link,
 				DP_PHY_REPEATER_MODE,
@@ -1212,8 +1265,9 @@ static void repeater_training_done(struct dc_link *link, uint32_t offset)
 		&dpcd_pattern.raw,
 		1);
 
-	DC_LOG_HW_LINK_TRAINING("%s\n 0x%X pattern = %x\n",
+	DC_LOG_HW_LINK_TRAINING("%s\n LTTPR Id: %d 0x%X pattern = %x\n",
 		__func__,
+		offset,
 		dpcd_base_lt_offset,
 		dpcd_pattern.v1_4.TRAINING_PATTERN_SET);
 }
@@ -1663,6 +1717,11 @@ static struct dc_link_settings get_max_link_cap(struct dc_link *link)
 
 		if (link->dpcd_caps.lttpr_caps.max_link_rate < max_link_cap.link_rate)
 			max_link_cap.link_rate = link->dpcd_caps.lttpr_caps.max_link_rate;
+
+		DC_LOG_HW_LINK_TRAINING("%s\n Training with LTTPR,  max_lane count %d max_link rate %d \n",
+						__func__,
+						max_link_cap.lane_count,
+						max_link_cap.link_rate);
 	}
 	return max_link_cap;
 }
@@ -3196,6 +3255,8 @@ static bool retrieve_link_cap(struct dc_link *link)
 			link->is_lttpr_mode_transparent = true;
 			dc_link_aux_configure_timeout(link->ddc, LINK_AUX_DEFAULT_TIMEOUT_PERIOD);
 		}
+
+		CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
 	}
 
 	{
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 22/51] drm/amd/display: Disable chroma viewport w/a when rotated 180 degrees
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (20 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 21/51] drm/amd/display: add log for lttpr sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 23/51] drm/amd/display: fix dml20 min_dst_y_next_start calculation sunpeng.li
                   ` (28 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Michael Strauss,
	harry.wentland, bhawanpreet.lakha

From: Michael Strauss <michael.strauss@amd.com>

[WHY]
The previous Renoir chroma viewport workaround fixed an MPO flicker by
increasing the chroma viewport size. However, when the MPO plane is
rotated 180 degrees, the viewport is read in reverse. Since the workaround
increases the viewport size, reading in reverse causes a vertical
chroma offset.

[HOW]
Pass the rotation value to the viewport set functions, and temporarily
disable the chroma viewport w/a when hubp is rotated 180 degrees.

Signed-off-by: Michael Strauss <michael.strauss@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c         | 3 ++-
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h         | 4 +++-
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 3 ++-
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c        | 3 ++-
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c         | 7 +++++--
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h              | 4 +++-
 6 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
index 31b64733d693..4d1301e5eaf5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
@@ -810,7 +810,8 @@ static void hubp1_set_vm_context0_settings(struct hubp *hubp,
 void min_set_viewport(
 	struct hubp *hubp,
 	const struct rect *viewport,
-	const struct rect *viewport_c)
+	const struct rect *viewport_c,
+	enum dc_rotation_angle rotation)
 {
 	struct dcn10_hubp *hubp1 = TO_DCN10_HUBP(hubp);
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h
index 780af5b3c16f..e44eaae5033b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.h
@@ -749,7 +749,9 @@ void hubp1_set_blank(struct hubp *hubp, bool blank);
 
 void min_set_viewport(struct hubp *hubp,
 		const struct rect *viewport,
-		const struct rect *viewport_c);
+		const struct rect *viewport_c,
+		enum dc_rotation_angle rotation);
+/* rotation angle added for use by hubp21_set_viewport */
 
 void hubp1_clk_cntl(struct hubp *hubp, bool enable);
 void hubp1_vtg_sel(struct hubp *hubp, uint32_t otg_inst);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 2b3081ee0e07..2440e28493e7 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2286,7 +2286,8 @@ static void dcn10_update_dchubp_dpp(
 		hubp->funcs->mem_program_viewport(
 			hubp,
 			&pipe_ctx->plane_res.scl_data.viewport,
-			&pipe_ctx->plane_res.scl_data.viewport_c);
+			&pipe_ctx->plane_res.scl_data.viewport_c,
+			plane_state->rotation);
 	}
 
 	if (pipe_ctx->stream->cursor_attributes.address.quad_part != 0) {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 8091c7c1e0d0..ece0817708f5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1363,7 +1363,8 @@ static void dcn20_update_dchubp_dpp(
 		hubp->funcs->mem_program_viewport(
 			hubp,
 			&pipe_ctx->plane_res.scl_data.viewport,
-			&pipe_ctx->plane_res.scl_data.viewport_c);
+			&pipe_ctx->plane_res.scl_data.viewport_c,
+			plane_state->rotation);
 
 	/* Any updates are handled in dc interface, just need to apply existing for plane enable */
 	if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
index 4408aed5087b..38661b9c61f8 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
@@ -169,7 +169,8 @@ static void hubp21_setup(
 void hubp21_set_viewport(
 	struct hubp *hubp,
 	const struct rect *viewport,
-	const struct rect *viewport_c)
+	const struct rect *viewport_c,
+	enum dc_rotation_angle rotation)
 {
 	struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
 	int patched_viewport_height = 0;
@@ -196,9 +197,11 @@ void hubp21_set_viewport(
 	 *	Work around for underflow issue with NV12 + rIOMMU translation
 	 *	+ immediate flip. This will cause hubp underflow, but will not
 	 *	be user visible since underflow is in blank region
+	 *	Disable w/a when rotated 180 degrees, causes vertical chroma offset
 	 */
 	patched_viewport_height = viewport_c->height;
-	if (viewport_c->height != 0 && debug->nv12_iflip_vm_wa) {
+	if (viewport_c->height != 0 && debug->nv12_iflip_vm_wa &&
+			rotation != ROTATION_ANGLE_180) {
 		int pte_row_height = 0;
 		int pte_rows = 0;
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
index 9793da0f3c7e..85a34dde8526 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
@@ -82,7 +82,9 @@ struct hubp_funcs {
 	void (*mem_program_viewport)(
 			struct hubp *hubp,
 			const struct rect *viewport,
-			const struct rect *viewport_c);
+			const struct rect *viewport_c,
+			enum dc_rotation_angle rotation);
+			/* rotation needed for Renoir workaround */
 
 	bool (*hubp_program_surface_flip_and_addr)(
 		struct hubp *hubp,
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 23/51] drm/amd/display: fix dml20 min_dst_y_next_start calculation
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (21 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 22/51] drm/amd/display: Disable chroma viewport w/a when rotated 180 degrees sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 24/51] drm/amd/display: Reset steer fifo before unblanking the stream sunpeng.li
                   ` (27 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Dmytro Laktyushkin,
	harry.wentland, bhawanpreet.lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

Bring this calculation in line with the HW programming guide.

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c  | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
index 2c7455e22a65..9df24ececcec 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
@@ -929,8 +929,7 @@ static void dml20_rq_dlg_get_dlg_params(struct display_mode_lib *mode_lib,
 	min_dst_y_ttu_vblank = min_ttu_vblank * pclk_freq_in_mhz / (double) htotal;
 	dlg_vblank_start = interlaced ? (vblank_start / 2) : vblank_start;
 
-	disp_dlg_regs->min_dst_y_next_start = (unsigned int) (((double) dlg_vblank_start
-			+ min_dst_y_ttu_vblank) * dml_pow(2, 2));
+	disp_dlg_regs->min_dst_y_next_start = (unsigned int) ((double) dlg_vblank_start * dml_pow(2, 2));
 	ASSERT(disp_dlg_regs->min_dst_y_next_start < (unsigned int) dml_pow(2, 18));
 
 	dml_print("DML_DLG: %s: min_dcfclk_mhz                         = %3.2f\n",
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 24/51] drm/amd/display: Reset steer fifo before unblanking the stream
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (22 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 23/51] drm/amd/display: fix dml20 min_dst_y_next_start calculation sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 25/51] drm/amd/display: Implement DePQ for DCN1 sunpeng.li
                   ` (26 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Nikola Cornij,
	harry.wentland, bhawanpreet.lakha

From: Nikola Cornij <nikola.cornij@amd.com>

[why]
During a mode transition the steer fifo can overflow. Quite often it
recovers by itself, but sometimes it doesn't.

[how]
Add steer fifo reset before unblanking the stream. Also add a short
delay when resetting dig resync fifo to make sure register writes
don't end up back-to-back, in which case the HW might miss the reset
request.

Signed-off-by: Nikola Cornij <nikola.cornij@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../drm/amd/display/dc/dcn20/dcn20_stream_encoder.c  | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_stream_encoder.c
index be0978401476..9b70a1e7b962 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_stream_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_stream_encoder.c
@@ -488,15 +488,23 @@ void enc2_stream_encoder_dp_unblank(
 				DP_VID_N_MUL, n_multiply);
 	}
 
-	/* set DIG_START to 0x1 to reset FIFO */
+	/* make sure stream is disabled before resetting steer fifo */
+	REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, false);
+	REG_WAIT(DP_VID_STREAM_CNTL, DP_VID_STREAM_STATUS, 0, 10, 5000);
 
+	/* set DIG_START to 0x1 to reset FIFO */
 	REG_UPDATE(DIG_FE_CNTL, DIG_START, 1);
+	udelay(1);
 
 	/* write 0 to take the FIFO out of reset */
 
 	REG_UPDATE(DIG_FE_CNTL, DIG_START, 0);
 
-	/* switch DP encoder to CRTC data */
+	/* switch DP encoder to CRTC data, but reset it the fifo first. It may happen
+	 * that it overflows during mode transition, and sometimes doesn't recover.
+	 */
+	REG_UPDATE(DP_STEER_FIFO, DP_STEER_FIFO_RESET, 1);
+	udelay(10);
 
 	REG_UPDATE(DP_STEER_FIFO, DP_STEER_FIFO_RESET, 0);
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 25/51] drm/amd/display: Implement DePQ for DCN1
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (23 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 24/51] drm/amd/display: Reset steer fifo before unblanking the stream sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 26/51] drm/amd/display: update p-state latency for renoir when using lpddr4 sunpeng.li
                   ` (25 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Krunoslav Kovac, Leo Li, harry.wentland, rodrigo.siqueira,
	Reza Amini, bhawanpreet.lakha

From: Reza Amini <Reza.Amini@amd.com>

[Why]
We need support for more color management on 10-bit
surfaces.

[How]
Provide DePQ support for 10-bit surfaces.
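
For reference, the de-PQ transfer the updated compute_de_pq helper
evaluates is essentially (my paraphrase of the SMPTE ST 2084 curve, with
the usual PQ constants m1, m2, c1, c2, c3):

	Y = ( (E^(1/m2) - c1) / (c2 - c3 * E^(1/m2)) )^(1/m1)

The diff below mirrors a negative intermediate ratio instead of clamping
it, so the fixed-point pow never sees a negative base.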

Signed-off-by: Reza Amini <Reza.Amini@amd.com>
Reviewed-by: Krunoslav Kovac <Krunoslav.Kovac@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../drm/amd/display/dc/dcn10/dcn10_dpp_cm.c   |  3 ++
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |  5 +++
 .../amd/display/modules/color/color_gamma.c   | 39 ++++++++++++++-----
 3 files changed, 38 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
index 6b7593dd0c77..935c892622a0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_cm.c
@@ -628,6 +628,9 @@ void dpp1_set_degamma(
 	case IPP_DEGAMMA_MODE_HW_xvYCC:
 		REG_UPDATE(CM_DGAM_CONTROL, CM_DGAM_LUT_MODE, 2);
 			break;
+	case IPP_DEGAMMA_MODE_USER_PWL:
+		REG_UPDATE(CM_DGAM_CONTROL, CM_DGAM_LUT_MODE, 3);
+		break;
 	default:
 		BREAK_TO_DEBUGGER();
 		break;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 2440e28493e7..9551fefb9d1d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1465,6 +1465,11 @@ bool dcn10_set_input_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
 			dpp_base->funcs->dpp_set_degamma(dpp_base, IPP_DEGAMMA_MODE_BYPASS);
 			break;
 		case TRANSFER_FUNCTION_PQ:
+			dpp_base->funcs->dpp_set_degamma(dpp_base, IPP_DEGAMMA_MODE_USER_PWL);
+			cm_helper_translate_curve_to_degamma_hw_format(tf, &dpp_base->degamma_params);
+			dpp_base->funcs->dpp_program_degamma_pwl(dpp_base, &dpp_base->degamma_params);
+			result = true;
+			break;
 		default:
 			result = false;
 			break;
diff --git a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
index 9b121b08c806..b52c4d379651 100644
--- a/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
+++ b/drivers/gpu/drm/amd/display/modules/color/color_gamma.c
@@ -154,6 +154,7 @@ static void compute_de_pq(struct fixed31_32 in_x, struct fixed31_32 *out_y)
 
 	struct fixed31_32 l_pow_m1;
 	struct fixed31_32 base, div;
+	struct fixed31_32 base2;
 
 
 	if (dc_fixpt_lt(in_x, dc_fixpt_zero))
@@ -163,13 +164,15 @@ static void compute_de_pq(struct fixed31_32 in_x, struct fixed31_32 *out_y)
 			dc_fixpt_div(dc_fixpt_one, m2));
 	base = dc_fixpt_sub(l_pow_m1, c1);
 
-	if (dc_fixpt_lt(base, dc_fixpt_zero))
-		base = dc_fixpt_zero;
-
 	div = dc_fixpt_sub(c2, dc_fixpt_mul(c3, l_pow_m1));
 
-	*out_y = dc_fixpt_pow(dc_fixpt_div(base, div),
-			dc_fixpt_div(dc_fixpt_one, m1));
+	base2 = dc_fixpt_div(base, div);
+	//avoid complex numbers
+	if (dc_fixpt_lt(base2, dc_fixpt_zero))
+		base2 = dc_fixpt_sub(dc_fixpt_zero, base2);
+
+
+	*out_y = dc_fixpt_pow(base2, dc_fixpt_div(dc_fixpt_one, m1));
 
 }
 
@@ -1998,10 +2001,28 @@ bool mod_color_calculate_degamma_params(struct dc_transfer_func *input_tf,
 	tf_pts->x_point_at_y1_green = 1;
 	tf_pts->x_point_at_y1_blue = 1;
 
-	map_regamma_hw_to_x_user(ramp, coeff, rgb_user,
-			coordinates_x, axis_x, curve,
-			MAX_HW_POINTS, tf_pts,
-			mapUserRamp && ramp && ramp->type == GAMMA_RGB_256);
+	if (input_tf->tf == TRANSFER_FUNCTION_PQ) {
+		/* just copy current rgb_regamma into  tf_pts */
+		struct pwl_float_data_ex *curvePt = curve;
+		int i = 0;
+
+		while (i <= MAX_HW_POINTS) {
+			tf_pts->red[i]   = curvePt->r;
+			tf_pts->green[i] = curvePt->g;
+			tf_pts->blue[i]  = curvePt->b;
+			++curvePt;
+			++i;
+		}
+	} else {
+		//clamps to 0-1
+		map_regamma_hw_to_x_user(ramp, coeff, rgb_user,
+				coordinates_x, axis_x, curve,
+				MAX_HW_POINTS, tf_pts,
+				mapUserRamp && ramp && ramp->type == GAMMA_RGB_256);
+	}
+
+
+
 	if (ramp->type == GAMMA_CUSTOM)
 		apply_lut_1d(ramp, MAX_HW_POINTS, tf_pts);
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 26/51] drm/amd/display: update p-state latency for renoir when using lpddr4
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (24 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 25/51] drm/amd/display: Implement DePQ for DCN1 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 27/51] drm/amd/display: add DP protocol version sunpeng.li
                   ` (24 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Joseph Gravenor,
	harry.wentland, bhawanpreet.lakha

From: Joseph Gravenor <joseph.gravenor@amd.com>

[Why]
The DF team has produced more optimized latency numbers for lpddr4.

[How]
Change the p-state latency in the lpddr4 wm table to the new latency
number.

Signed-off-by: Joseph Gravenor <joseph.gravenor@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 307c8540e36f..901e7035bf8e 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -562,7 +562,7 @@ struct wm_table lpddr4_wm_table = {
 		{
 			.wm_inst = WM_A,
 			.wm_type = WM_TYPE_PSTATE_CHG,
-			.pstate_latency_us = 23.84,
+			.pstate_latency_us = 11.65333,
 			.sr_exit_time_us = 12.5,
 			.sr_enter_plus_exit_time_us = 17.0,
 			.valid = true,
@@ -570,7 +570,7 @@ struct wm_table lpddr4_wm_table = {
 		{
 			.wm_inst = WM_B,
 			.wm_type = WM_TYPE_PSTATE_CHG,
-			.pstate_latency_us = 23.84,
+			.pstate_latency_us = 11.65333,
 			.sr_exit_time_us = 12.5,
 			.sr_enter_plus_exit_time_us = 17.0,
 			.valid = true,
@@ -578,7 +578,7 @@ struct wm_table lpddr4_wm_table = {
 		{
 			.wm_inst = WM_C,
 			.wm_type = WM_TYPE_PSTATE_CHG,
-			.pstate_latency_us = 23.84,
+			.pstate_latency_us = 11.65333,
 			.sr_exit_time_us = 12.5,
 			.sr_enter_plus_exit_time_us = 17.0,
 			.valid = true,
@@ -586,7 +586,7 @@ struct wm_table lpddr4_wm_table = {
 		{
 			.wm_inst = WM_D,
 			.wm_type = WM_TYPE_PSTATE_CHG,
-			.pstate_latency_us = 23.84,
+			.pstate_latency_us = 11.65333,
 			.sr_exit_time_us = 12.5,
 			.sr_enter_plus_exit_time_us = 17.0,
 			.valid = true,
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 27/51] drm/amd/display: add DP protocol version
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (25 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 26/51] drm/amd/display: update p-state latency for renoir when using lpddr4 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 28/51] drm/amd/display: Save/restore link setting for disable phy when link retraining sunpeng.li
                   ` (23 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Leo Li, harry.wentland, rodrigo.siqueira,
	bhawanpreet.lakha, Anthony Koo

From: Anthony Koo <Anthony.Koo@amd.com>

[Why]
We want to know the DP protocol version.

[How]
In DC create, initialize a cap to indicate the max
DP protocol version supported.
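
An illustrative consumer of the new cap (the check and helper below are
hypothetical, not part of this patch):

	/* gate a DP 1.4-only feature on the reported protocol version */
	if (dc->caps.max_dp_protocol_version >= DP_VERSION_1_4)
		enable_dp_1_4_only_feature();	/* hypothetical helper */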

Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc.c | 2 ++
 drivers/gpu/drm/amd/display/dc/dc.h      | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 061e8adf7476..55f22a1c0aa5 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -809,6 +809,8 @@ struct dc *dc_create(const struct dc_init_data *init_params)
 	dc->caps.max_audios = dc->res_pool->audio_count;
 	dc->caps.linear_pitch_alignment = 64;
 
+	dc->caps.max_dp_protocol_version = DP_VERSION_1_4;
+
 	/* Populate versioning information */
 	dc->versions.dc_ver = DC_VER;
 
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 34b824270c84..4c7a2882a512 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -54,6 +54,10 @@ struct dc_versions {
 	struct dmcu_version dmcu_version;
 };
 
+enum dp_protocol_version {
+	DP_VERSION_1_4,
+};
+
 enum dc_plane_type {
 	DC_PLANE_TYPE_INVALID,
 	DC_PLANE_TYPE_DCE_RGB,
@@ -114,6 +118,7 @@ struct dc_caps {
 	bool extended_aux_timeout_support;
 	bool dmcub_support;
 	bool hw_3d_lut;
+	enum dp_protocol_version max_dp_protocol_version;
 	struct dc_plane_cap planes[MAX_PLANES];
 };
 
-- 
2.24.0

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH 28/51] drm/amd/display: Save/restore link setting for disable phy when link retraining
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (26 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 27/51] drm/amd/display: add DP protocol version sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 29/51] drm/amd/display: Return a correct error value sunpeng.li
                   ` (22 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, Hugo Hu, rodrigo.siqueira, Wenjing Liu,
	bhawanpreet.lakha

From: Hugo Hu <hugo.hu@amd.com>

[Why]
The link settings are modified after the phy is disabled, which causes
DP compliance failures.

[How]
Save and restore the link settings around disabling the link phy during
link retraining.

Signed-off-by: Hugo Hu <hugo.hu@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 486c14e0cd41..015fa0c52746 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -2788,9 +2788,9 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
 	union hpd_irq_data hpd_irq_dpcd_data = { { { {0} } } };
 	union device_service_irq device_service_clear = { { 0 } };
 	enum dc_status result;
-
 	bool status = false;
 	struct pipe_ctx *pipe_ctx;
+	struct dc_link_settings previous_link_settings;
 	int i;
 
 	if (out_link_loss)
@@ -2873,9 +2873,10 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
 		if (pipe_ctx == NULL || pipe_ctx->stream == NULL)
 			return false;
 
+		previous_link_settings = link->cur_link_settings;
 		dp_disable_link_phy(link, pipe_ctx->stream->signal);
 
-		perform_link_training_with_retries(&link->cur_link_settings,
+		perform_link_training_with_retries(&previous_link_settings,
 			true, LINK_TRAINING_ATTEMPTS,
 			pipe_ctx,
 			pipe_ctx->stream->signal);
-- 
2.24.0


* [PATCH 29/51] drm/amd/display: Return a correct error value
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (27 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 28/51] drm/amd/display: Save/restore link setting for disable phy when link retraining sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 30/51] drm/amd/display: Split DMUB cmd type into type/subtype sunpeng.li
                   ` (21 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Martin Leung,
	Mikita Lipski, bhawanpreet.lakha, Anthony Koo

From: Mikita Lipski <mikita.lipski@amd.com>

[why]
The function is expected to return an instance of the timing generator,
so we shouldn't return a boolean from an integer function. Zero is a
valid instance, so return -1 on failure instead.
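
A minimal standalone sketch of the point (stand-in types, not the real DC
structures): instance 0 is a valid timing generator, so failure needs a
sentinel such as -1 rather than false, which is also 0.

#include <stdio.h>

/* hypothetical lookup: return a timing generator instance or -1 on failure */
static int find_tg_inst(int dig_frontend, const int *enc_ids, int count)
{
	int i;

	for (i = 0; i < count; i++)
		if (enc_ids[i] == dig_frontend)
			return i;	/* 0 is a perfectly valid instance */

	return -1;	/* returning false here would look like instance 0 */
}

int main(void)
{
	const int enc_ids[] = { 5, 7, 9 };
	int inst = find_tg_inst(5, enc_ids, 3);

	if (inst < 0)
		printf("no timing generator found\n");
	else
		printf("using timing generator %d\n", inst);	/* prints 0 */
	return 0;
}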

Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Reviewed-by: Martin Leung <Martin.Leung@amd.com>
Acked-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index a9412720c860..0c19de678339 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -1866,7 +1866,7 @@ static int acquire_resource_from_hw_enabled_state(
 	inst = link->link_enc->funcs->get_dig_frontend(link->link_enc);
 
 	if (inst == ENGINE_ID_UNKNOWN)
-		return false;
+		return -1;
 
 	for (i = 0; i < pool->stream_enc_count; i++) {
 		if (pool->stream_enc[i]->id == inst) {
@@ -1878,10 +1878,10 @@ static int acquire_resource_from_hw_enabled_state(
 
 	// tg_inst not found
 	if (i == pool->stream_enc_count)
-		return false;
+		return -1;
 
 	if (tg_inst >= pool->timing_generator_count)
-		return false;
+		return -1;
 
 	if (!res_ctx->pipe_ctx[tg_inst].stream) {
 		struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[tg_inst];
-- 
2.24.0


* [PATCH 30/51] drm/amd/display: Split DMUB cmd type into type/subtype
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (28 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 29/51] drm/amd/display: Return a correct error value sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 31/51] drm/amd/display: Add shared DMCUB/driver firmware state cache window sunpeng.li
                   ` (20 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
Commands will be considered a stable ABI between driver and firmware.

Commands are also split between DC commands, DAL feature commands,
and VBIOS commands.

Commands are currently not designated to a specific ID and the enum
does not provide a stable ABI.

We currently group all of these into a single 8-bit command type.
With the stable ABI consideration in mind, it's not unreasonable to
expect that we could run out of command IDs.

For cleaner separation and versioning, split the commands into a main
type and a subtype.

[How]
For commands where performance matters (like reg sequences) these
are still considered main commands.

Sub commands will be split by ownership/feature.

Update existing command sequences to reflect new changes.
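
A hedged sketch of what the split looks like to a command producer
(simplified stand-in declarations; the real layout lives in dmub_cmd.h and
the new dmub_cmd_*.h headers): VBIOS commands set the main type to the
VBIOS owner and carry the operation in sub_type, while the perf-critical
register-sequence commands stay as main types with sub_type 0.

#include <stdint.h>

/* simplified stand-ins for the enums/structs introduced by this patch */
enum cmd_type  { CMD_REG_SEQ_BURST_WRITE = 3, CMD_PSR = 64, CMD_VBIOS = 128 };
enum vbios_sub { VBIOS_DIGX_ENCODER_CONTROL = 0, VBIOS_SET_PIXEL_CLOCK = 2 };

struct cmd_header {
	unsigned int type          : 8;	/* stable main command ID */
	unsigned int sub_type      : 8;	/* operation within the owner/feature */
	unsigned int reserved0     : 8;
	unsigned int payload_bytes : 6;
	unsigned int reserved1     : 2;
};

void fill_set_pixel_clock(struct cmd_header *h)
{
	h->type     = CMD_VBIOS;		/* owner: VBIOS */
	h->sub_type = VBIOS_SET_PIXEL_CLOCK;	/* which VBIOS operation */
}

void fill_reg_burst_write(struct cmd_header *h)
{
	h->type     = CMD_REG_SEQ_BURST_WRITE;	/* perf path keeps a main type */
	h->sub_type = 0;
}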

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../drm/amd/display/dc/bios/command_table2.c  | 13 +++--
 drivers/gpu/drm/amd/display/dc/dc_helper.c    |  3 ++
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   | 48 +++++++------------
 .../drm/amd/display/dmub/inc/dmub_cmd_dal.h   | 41 ++++++++++++++++
 .../drm/amd/display/dmub/inc/dmub_cmd_vbios.h | 41 ++++++++++++++++
 5 files changed, 112 insertions(+), 34 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
 create mode 100644 drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_vbios.h

diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
index 1836f16bb7fe..2cb7a4288cb7 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -111,7 +111,8 @@ static void encoder_control_dmcub(
 {
 	struct dmub_rb_cmd_digx_encoder_control encoder_control = { 0 };
 
-	encoder_control.header.type = DMUB_CMD__DIGX_ENCODER_CONTROL;
+	encoder_control.header.type = DMUB_CMD__VBIOS;
+	encoder_control.header.sub_type = DMUB_CMD__VBIOS_DIGX_ENCODER_CONTROL;
 	encoder_control.encoder_control.dig.stream_param = *dig;
 
 	dc_dmub_srv_cmd_queue(dmcub, &encoder_control.header);
@@ -219,7 +220,9 @@ static void transmitter_control_dmcub(
 {
 	struct dmub_rb_cmd_dig1_transmitter_control transmitter_control;
 
-	transmitter_control.header.type = DMUB_CMD__DIG1_TRANSMITTER_CONTROL;
+	transmitter_control.header.type = DMUB_CMD__VBIOS;
+	transmitter_control.header.sub_type =
+		DMUB_CMD__VBIOS_DIG1_TRANSMITTER_CONTROL;
 	transmitter_control.transmitter_control.dig = *dig;
 
 	dc_dmub_srv_cmd_queue(dmcub, &transmitter_control.header);
@@ -302,7 +305,8 @@ static void set_pixel_clock_dmcub(
 {
 	struct dmub_rb_cmd_set_pixel_clock pixel_clock = { 0 };
 
-	pixel_clock.header.type     = DMUB_CMD__SET_PIXEL_CLOCK;
+	pixel_clock.header.type = DMUB_CMD__VBIOS;
+	pixel_clock.header.sub_type = DMUB_CMD__VBIOS_SET_PIXEL_CLOCK;
 	pixel_clock.pixel_clock.clk = *clk;
 
 	dc_dmub_srv_cmd_queue(dmcub, &pixel_clock.header);
@@ -650,7 +654,8 @@ static void enable_disp_power_gating_dmcub(
 {
 	struct dmub_rb_cmd_enable_disp_power_gating power_gating;
 
-	power_gating.header.type      = DMUB_CMD__ENABLE_DISP_POWER_GATING;
+	power_gating.header.type = DMUB_CMD__VBIOS;
+	power_gating.header.sub_type = DMUB_CMD__VBIOS_ENABLE_DISP_POWER_GATING;
 	power_gating.power_gating.pwr = *pwr;
 
 	dc_dmub_srv_cmd_queue(dmcub, &power_gating.header);
diff --git a/drivers/gpu/drm/amd/display/dc/dc_helper.c b/drivers/gpu/drm/amd/display/dc/dc_helper.c
index e41befa067ce..02a63e9cb62f 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_helper.c
@@ -178,6 +178,7 @@ static bool dmub_reg_value_burst_set_pack(const struct dc_context *ctx, uint32_t
 	}
 
 	cmd_buf->header.type = DMUB_CMD__REG_SEQ_BURST_WRITE;
+	cmd_buf->header.sub_type = 0;
 	cmd_buf->addr = addr;
 	cmd_buf->write_values[offload->reg_seq_count] = reg_val;
 	offload->reg_seq_count++;
@@ -206,6 +207,7 @@ static uint32_t dmub_reg_value_pack(const struct dc_context *ctx, uint32_t addr,
 
 	/* pack commands */
 	cmd_buf->header.type = DMUB_CMD__REG_SEQ_READ_MODIFY_WRITE;
+	cmd_buf->header.sub_type = 0;
 	seq = &cmd_buf->seq[offload->reg_seq_count];
 
 	if (offload->reg_seq_count) {
@@ -230,6 +232,7 @@ static void dmub_reg_wait_done_pack(const struct dc_context *ctx, uint32_t addr,
 	struct dmub_rb_cmd_reg_wait *cmd_buf = &offload->cmd_data.reg_wait;
 
 	cmd_buf->header.type = DMUB_CMD__REG_REG_WAIT;
+	cmd_buf->header.sub_type = 0;
 	cmd_buf->reg_wait.addr = addr;
 	cmd_buf->reg_wait.condition_field_value = mask & (condition_value << shift);
 	cmd_buf->reg_wait.mask = mask;
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
index 43f1cd647aab..b10728f33f62 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
@@ -27,6 +27,8 @@
 #define _DMUB_CMD_H_
 
 #include "dmub_types.h"
+#include "dmub_cmd_dal.h"
+#include "dmub_cmd_vbios.h"
 #include "atomfirmware.h"
 
 #define DMUB_RB_CMD_SIZE 64
@@ -34,43 +36,29 @@
 #define DMUB_RB_SIZE (DMUB_RB_CMD_SIZE * DMUB_RB_MAX_ENTRY)
 #define REG_SET_MASK 0xFFFF
 
+/*
+ * Command IDs should be treated as stable ABI.
+ * Do not reuse or modify IDs.
+ */
+
 enum dmub_cmd_type {
-	DMUB_CMD__NULL,
-	DMUB_CMD__REG_SEQ_READ_MODIFY_WRITE,
-	DMUB_CMD__REG_SEQ_FIELD_UPDATE_SEQ,
-	DMUB_CMD__REG_SEQ_BURST_WRITE,
-	DMUB_CMD__REG_REG_WAIT,
-	DMUB_CMD__DIGX_ENCODER_CONTROL,
-	DMUB_CMD__SET_PIXEL_CLOCK,
-	DMUB_CMD__ENABLE_DISP_POWER_GATING,
-	DMUB_CMD__DPPHY_INIT,
-	DMUB_CMD__DIG1_TRANSMITTER_CONTROL,
-	DMUB_CMD__SETUP_DISPLAY_MODE,
-	DMUB_CMD__BLANK_CRTC,
-	DMUB_CMD__ENABLE_DISPPATH,
-	DMUB_CMD__DISABLE_DISPPATH,
-	DMUB_CMD__DISABLE_DISPPATH_OUTPUT,
-	DMUB_CMD__READ_DISPPATH_EDID,
-	DMUB_CMD__DP_PRE_LINKTRAINING,
-	DMUB_CMD__INIT_CONTROLLER,
-	DMUB_CMD__RESET_CONTROLLER,
-	DMUB_CMD__SET_BRI_LEVEL,
-	DMUB_CMD__LVTMA_CONTROL,
-
-	// PSR
-	DMUB_CMD__PSR_ENABLE,
-	DMUB_CMD__PSR_DISABLE,
-	DMUB_CMD__PSR_COPY_SETTINGS,
-	DMUB_CMD__PSR_SET_LEVEL,
+	DMUB_CMD__NULL = 0,
+	DMUB_CMD__REG_SEQ_READ_MODIFY_WRITE = 1,
+	DMUB_CMD__REG_SEQ_FIELD_UPDATE_SEQ = 2,
+	DMUB_CMD__REG_SEQ_BURST_WRITE = 3,
+	DMUB_CMD__REG_REG_WAIT = 4,
+	DMUB_CMD__PSR = 64,
+	DMUB_CMD__VBIOS = 128,
 };
 
 #pragma pack(push, 1)
 
 struct dmub_cmd_header {
-	enum dmub_cmd_type type : 8;
-	unsigned int reserved0 : 16;
+	unsigned int type : 8;
+	unsigned int sub_type : 8;
+	unsigned int reserved0 : 8;
 	unsigned int payload_bytes : 6;  /* up to 60 bytes */
-	unsigned int reserved : 2;
+	unsigned int reserved1 : 2;
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
new file mode 100644
index 000000000000..14f13e8a6f3b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_dal.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef _DMUB_CMD_DAL_H_
+#define _DMUB_CMD_DAL_H_
+
+/*
+ * Command IDs should be treated as stable ABI.
+ * Do not reuse or modify IDs.
+ */
+
+enum dmub_cmd_psr_type {
+	DMUB_CMD__PSR_ENABLE = 0,
+	DMUB_CMD__PSR_DISABLE = 1,
+	DMUB_CMD__PSR_COPY_SETTINGS = 2,
+	DMUB_CMD__PSR_SET_LEVEL = 3,
+};
+
+#endif /* _DMUB_CMD_DAL_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_vbios.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_vbios.h
new file mode 100644
index 000000000000..b6deb8e2590f
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd_vbios.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef _DMUB_CMD_VBIOS_H_
+#define _DMUB_CMD_VBIOS_H_
+
+/*
+ * Command IDs should be treated as stable ABI.
+ * Do not reuse or modify IDs.
+ */
+
+enum dmub_cmd_vbios_type {
+	DMUB_CMD__VBIOS_DIGX_ENCODER_CONTROL = 0,
+	DMUB_CMD__VBIOS_DIG1_TRANSMITTER_CONTROL = 1,
+	DMUB_CMD__VBIOS_SET_PIXEL_CLOCK = 2,
+	DMUB_CMD__VBIOS_ENABLE_DISP_POWER_GATING = 3,
+};
+
+#endif /* _DMUB_CMD_VBIOS_H_ */
-- 
2.24.0


* [PATCH 31/51] drm/amd/display: Add shared DMCUB/driver firmware state cache window
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (29 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 30/51] drm/amd/display: Split DMUB cmd type into type/subtype sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 32/51] drm/amd/display: update sr latency for renoir when using lpddr4 sunpeng.li
                   ` (19 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
Scratch registers are limited on the DMCUB and we have an expanding
list of state to track between driver and DMCUB.

[How]
Place shared state in cache window 6. The cache window size is aligned
to the size of the cache line on the DMCUB to make it easy to
invalidate.

The shared state is intended to be read-only from the driver side, so
it's been marked as const.

The use of volatile is intentional. The memory for the shared firmware
state is memory mapped from the framebuffer memory. The DMCUB will
flush its cache after modifying the region. There's no way for x86
to know whether this data is stale or not, so we intentionally
disable optimization to force a read at every access.
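
A self-contained sketch of the driver-side read pattern this enables
(stand-in struct definitions mirroring the patch; the helper itself is
illustrative): because fw_state is a volatile mapping of DMCUB-owned
memory, every access rereads the framebuffer rather than a cached value.

#include <stdbool.h>
#include <stdint.h>

/* simplified stand-ins for dmub_fw_state.h / dmub_srv.h */
struct dmub_fw_state {
	uint8_t phy_initialized_during_fw_boot;
	uint8_t initialized_phy;
	uint8_t enabled_phy;
	uint8_t dmcu_fw_loaded;
	uint8_t psr_state;
};

struct dmub_srv {
	volatile const struct dmub_fw_state *fw_state;
};

/* illustrative helper: check DMCU auto-load state published by the DMCUB */
bool dmcu_fw_autoload_done(const struct dmub_srv *dmub)
{
	/* each dereference is a fresh read of the mapped region */
	return dmub->fw_state && dmub->fw_state->dmcu_fw_loaded != 0;
}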

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../drm/amd/display/dmub/inc/dmub_fw_state.h  | 73 +++++++++++++++++++
 .../gpu/drm/amd/display/dmub/inc/dmub_srv.h   |  8 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn20.c | 10 ++-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn20.h |  3 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn21.c | 12 ++-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn21.h |  3 +-
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   | 27 +++++--
 7 files changed, 125 insertions(+), 11 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_state.h

diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_state.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_state.h
new file mode 100644
index 000000000000..c87b1ba7590e
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_fw_state.h
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef _DMUB_FW_STATE_H_
+#define _DMUB_FW_STATE_H_
+
+#include "dmub_types.h"
+
+#pragma pack(push, 1)
+
+struct dmub_fw_state {
+	/**
+	 * @phy_initialized_during_fw_boot:
+	 *
+	 * Detects if VBIOS/VBL has run before firmware boot.
+	 * A value of 1 will usually mean S0i3 boot.
+	 */
+	uint8_t phy_initialized_during_fw_boot;
+
+	/**
+	 * @initialized_phy:
+	 *
+	 * Bit vector of initialized PHY.
+	 */
+	uint8_t initialized_phy;
+
+	/**
+	 * @enabled_phy:
+	 *
+	 * Bit vector of enabled PHY for DP alt mode switch tracking.
+	 */
+	uint8_t enabled_phy;
+
+	/**
+	 * @dmcu_fw_loaded:
+	 *
+	 * DMCU auto load state.
+	 */
+	uint8_t dmcu_fw_loaded;
+
+	/**
+	 * @psr_state:
+	 *
+	 * PSR state tracking.
+	 */
+	uint8_t psr_state;
+};
+
+#pragma pack(pop)
+
+#endif /* _DMUB_FW_STATE_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_srv.h
index fdedbe15e026..528243e35add 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_srv.h
@@ -67,6 +67,7 @@
 #include "dmub_types.h"
 #include "dmub_cmd.h"
 #include "dmub_rb.h"
+#include "dmub_fw_state.h"
 
 #if defined(__cplusplus)
 extern "C" {
@@ -102,7 +103,7 @@ enum dmub_window_id {
 	DMUB_WINDOW_3_VBIOS,
 	DMUB_WINDOW_4_MAILBOX,
 	DMUB_WINDOW_5_TRACEBUFF,
-	DMUB_WINDOW_6_RESERVED,
+	DMUB_WINDOW_6_FW_STATE,
 	DMUB_WINDOW_7_RESERVED,
 	DMUB_WINDOW_TOTAL,
 };
@@ -241,7 +242,8 @@ struct dmub_srv_hw_funcs {
 			      const struct dmub_window *cw2,
 			      const struct dmub_window *cw3,
 			      const struct dmub_window *cw4,
-				  const struct dmub_window *cw5);
+			      const struct dmub_window *cw5,
+			      const struct dmub_window *cw6);
 
 	void (*setup_mailbox)(struct dmub_srv *dmub,
 			      const struct dmub_region *inbox1);
@@ -296,11 +298,13 @@ struct dmub_srv_hw_params {
  * @asic: dmub asic identifier
  * @user_ctx: user provided context for the dmub_srv
  * @is_virtual: false if hardware support only
+ * @fw_state: dmub firmware state pointer
  */
 struct dmub_srv {
 	enum dmub_asic asic;
 	void *user_ctx;
 	bool is_virtual;
+	volatile const struct dmub_fw_state *fw_state;
 
 	/* private: internal use only */
 	struct dmub_srv_base_funcs funcs;
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
index 302dd3d4b77d..951ea7053c7e 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.c
@@ -76,7 +76,8 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
 			      const struct dmub_window *cw2,
 			      const struct dmub_window *cw3,
 			      const struct dmub_window *cw4,
-				  const struct dmub_window *cw5)
+			      const struct dmub_window *cw5,
+			      const struct dmub_window *cw6)
 {
 	REG_WRITE(DMCUB_REGION3_CW2_OFFSET, cw2->offset.u.low_part);
 	REG_WRITE(DMCUB_REGION3_CW2_OFFSET_HIGH, cw2->offset.u.high_part);
@@ -106,6 +107,13 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
 	REG_SET_2(DMCUB_REGION3_CW5_TOP_ADDRESS, 0,
 		  DMCUB_REGION3_CW5_TOP_ADDRESS, cw5->region.top,
 		  DMCUB_REGION3_CW5_ENABLE, 1);
+
+	REG_WRITE(DMCUB_REGION3_CW6_OFFSET, cw6->offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW6_OFFSET_HIGH, cw6->offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW6_BASE_ADDRESS, cw6->region.base);
+	REG_SET_2(DMCUB_REGION3_CW6_TOP_ADDRESS, 0,
+		  DMCUB_REGION3_CW6_TOP_ADDRESS, cw6->region.top,
+		  DMCUB_REGION3_CW6_ENABLE, 1);
 }
 
 void dmub_dcn20_setup_mailbox(struct dmub_srv *dmub,
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
index ca7db03b94f7..e70a57573467 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn20.h
@@ -46,7 +46,8 @@ void dmub_dcn20_setup_windows(struct dmub_srv *dmub,
 			      const struct dmub_window *cw2,
 			      const struct dmub_window *cw3,
 			      const struct dmub_window *cw4,
-				  const struct dmub_window *cw5);
+			      const struct dmub_window *cw5,
+			      const struct dmub_window *cw6);
 
 void dmub_dcn20_setup_mailbox(struct dmub_srv *dmub,
 			      const struct dmub_region *inbox1);
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
index b9dc2dd645eb..9cea7a2d8dbf 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.c
@@ -78,7 +78,8 @@ void dmub_dcn21_setup_windows(struct dmub_srv *dmub,
 			      const struct dmub_window *cw2,
 			      const struct dmub_window *cw3,
 			      const struct dmub_window *cw4,
-				  const struct dmub_window *cw5)
+			      const struct dmub_window *cw5,
+			      const struct dmub_window *cw6)
 {
 	union dmub_addr offset;
 	uint64_t fb_base = dmub->fb_base, fb_offset = dmub->fb_offset;
@@ -118,6 +119,15 @@ void dmub_dcn21_setup_windows(struct dmub_srv *dmub,
 	REG_SET_2(DMCUB_REGION3_CW5_TOP_ADDRESS, 0,
 		  DMCUB_REGION3_CW5_TOP_ADDRESS, cw5->region.top,
 		  DMCUB_REGION3_CW5_ENABLE, 1);
+
+	dmub_dcn21_translate_addr(&cw6->offset, fb_base, fb_offset, &offset);
+
+	REG_WRITE(DMCUB_REGION3_CW6_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW6_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW6_BASE_ADDRESS, cw6->region.base);
+	REG_SET_2(DMCUB_REGION3_CW6_TOP_ADDRESS, 0,
+		  DMCUB_REGION3_CW6_TOP_ADDRESS, cw6->region.top,
+		  DMCUB_REGION3_CW6_ENABLE, 1);
 }
 
 bool dmub_dcn21_is_auto_load_done(struct dmub_srv *dmub)
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
index 9e5f195e288f..f7a93a5dcfa5 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn21.h
@@ -38,7 +38,8 @@ void dmub_dcn21_setup_windows(struct dmub_srv *dmub,
 			      const struct dmub_window *cw2,
 			      const struct dmub_window *cw3,
 			      const struct dmub_window *cw4,
-				  const struct dmub_window *cw5);
+			      const struct dmub_window *cw5,
+			      const struct dmub_window *cw6);
 
 bool dmub_dcn21_is_auto_load_done(struct dmub_srv *dmub);
 
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 70c7a4be9ccc..5f39166d3c08 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -48,13 +48,14 @@
 
 
 /* Number of windows in use. */
-#define DMUB_NUM_WINDOWS (DMUB_WINDOW_5_TRACEBUFF + 1)
+#define DMUB_NUM_WINDOWS (DMUB_WINDOW_6_FW_STATE + 1)
 /* Base addresses. */
 
 #define DMUB_CW0_BASE (0x60000000)
 #define DMUB_CW1_BASE (0x61000000)
 #define DMUB_CW3_BASE (0x63000000)
 #define DMUB_CW5_BASE (0x65000000)
+#define DMUB_CW6_BASE (0x66000000)
 
 static inline uint32_t dmub_align(uint32_t val, uint32_t factor)
 {
@@ -158,6 +159,7 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
 	struct dmub_region *bios = &out->regions[DMUB_WINDOW_3_VBIOS];
 	struct dmub_region *mail = &out->regions[DMUB_WINDOW_4_MAILBOX];
 	struct dmub_region *trace_buff = &out->regions[DMUB_WINDOW_5_TRACEBUFF];
+	struct dmub_region *fw_state = &out->regions[DMUB_WINDOW_6_FW_STATE];
 
 	if (!dmub->sw_init)
 		return DMUB_STATUS_INVALID;
@@ -184,7 +186,13 @@ dmub_srv_calc_region_info(struct dmub_srv *dmub,
 	trace_buff->base = dmub_align(mail->top, 256);
 	trace_buff->top = trace_buff->base + TRACE_BUF_SIZE;
 
-	out->fb_size = dmub_align(trace_buff->top, 4096);
+	fw_state->base = dmub_align(trace_buff->top, 256);
+
+	/* Align firmware state to size of cache line. */
+	fw_state->top =
+		fw_state->base + dmub_align(sizeof(struct dmub_fw_state), 64);
+
+	out->fb_size = dmub_align(fw_state->top, 4096);
 
 	return DMUB_STATUS_OK;
 }
@@ -258,9 +266,10 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 	struct dmub_fb *bios_fb = params->fb[DMUB_WINDOW_3_VBIOS];
 	struct dmub_fb *mail_fb = params->fb[DMUB_WINDOW_4_MAILBOX];
 	struct dmub_fb *tracebuff_fb = params->fb[DMUB_WINDOW_5_TRACEBUFF];
+	struct dmub_fb *fw_state_fb = params->fb[DMUB_WINDOW_6_FW_STATE];
 
 	struct dmub_rb_init_params rb_params;
-	struct dmub_window cw0, cw1, cw2, cw3, cw4, cw5;
+	struct dmub_window cw0, cw1, cw2, cw3, cw4, cw5, cw6;
 	struct dmub_region inbox1;
 
 	if (!dmub->sw_init)
@@ -286,7 +295,8 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 	if (dmub->hw_funcs.reset)
 		dmub->hw_funcs.reset(dmub);
 
-	if (inst_fb && data_fb && bios_fb && mail_fb) {
+	if (inst_fb && data_fb && bios_fb && mail_fb && tracebuff_fb &&
+	    fw_state_fb) {
 		cw2.offset.quad_part = data_fb->gpu_addr;
 		cw2.region.base = DMUB_CW0_BASE + inst_fb->size;
 		cw2.region.top = cw2.region.base + data_fb->size;
@@ -306,8 +316,15 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 		cw5.region.base = DMUB_CW5_BASE;
 		cw5.region.top = cw5.region.base + tracebuff_fb->size;
 
+		cw6.offset.quad_part = fw_state_fb->gpu_addr;
+		cw6.region.base = DMUB_CW6_BASE;
+		cw6.region.top = cw6.region.base + fw_state_fb->size;
+
+		dmub->fw_state = fw_state_fb->cpu_addr;
+
 		if (dmub->hw_funcs.setup_windows)
-			dmub->hw_funcs.setup_windows(dmub, &cw2, &cw3, &cw4, &cw5);
+			dmub->hw_funcs.setup_windows(dmub, &cw2, &cw3, &cw4,
+						     &cw5, &cw6);
 
 		if (dmub->hw_funcs.setup_mailbox)
 			dmub->hw_funcs.setup_mailbox(dmub, &inbox1);
-- 
2.24.0


* [PATCH 32/51] drm/amd/display: update sr latency for renoir when using lpddr4
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (30 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 31/51] drm/amd/display: Add shared DMCUB/driver firmware state cache window sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 33/51] drm/amd/display: Remove flag check in mpcc update sunpeng.li
                   ` (18 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Joseph Gravenor,
	harry.wentland, bhawanpreet.lakha

From: Joseph Gravenor <joseph.gravenor@amd.com>

[Why]
The DF team has produced more optimized sr latency numbers for lpddr4.

[How]
Change the sr latency in the lpddr4 wm table to the new latency
numbers.

Signed-off-by: Joseph Gravenor <joseph.gravenor@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c    | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 901e7035bf8e..37230d3d94a0 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -563,32 +563,32 @@ struct wm_table lpddr4_wm_table = {
 			.wm_inst = WM_A,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.65333,
-			.sr_exit_time_us = 12.5,
-			.sr_enter_plus_exit_time_us = 17.0,
+			.sr_exit_time_us = 5.32,
+			.sr_enter_plus_exit_time_us = 6.38,
 			.valid = true,
 		},
 		{
 			.wm_inst = WM_B,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.65333,
-			.sr_exit_time_us = 12.5,
-			.sr_enter_plus_exit_time_us = 17.0,
+			.sr_exit_time_us = 9.82,
+			.sr_enter_plus_exit_time_us = 11.196,
 			.valid = true,
 		},
 		{
 			.wm_inst = WM_C,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.65333,
-			.sr_exit_time_us = 12.5,
-			.sr_enter_plus_exit_time_us = 17.0,
+			.sr_exit_time_us = 9.89,
+			.sr_enter_plus_exit_time_us = 11.24,
 			.valid = true,
 		},
 		{
 			.wm_inst = WM_D,
 			.wm_type = WM_TYPE_PSTATE_CHG,
 			.pstate_latency_us = 11.65333,
-			.sr_exit_time_us = 12.5,
-			.sr_enter_plus_exit_time_us = 17.0,
+			.sr_exit_time_us = 9.748,
+			.sr_enter_plus_exit_time_us = 11.102,
 			.valid = true,
 		},
 	}
-- 
2.24.0


* [PATCH 33/51] drm/amd/display: Remove flag check in mpcc update
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (31 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 32/51] drm/amd/display: update sr latency for renoir when using lpddr4 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 34/51] drm/amd/display: check for repeater when setting aux_rd_interval sunpeng.li
                   ` (17 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Noah Abradjian,
	Dmytro Laktyushkin, bhawanpreet.lakha

From: Noah Abradjian <noah.abradjian@amd.com>

[Why]
MPCC programming was being missed during certain split pipe enables due
to the full_update flag not being true. This caused a momentary flash on
half the screen. After discussion, it was determined that we should not
have that flag check within update_mpcc, as it should always perform
full programming when called.

[How]
Remove flag check. We call update_blending within insert_plane, so we
do not need to replace its call from the if block.

Signed-off-by: Noah Abradjian <noah.abradjian@amd.com>
Reviewed-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index ece0817708f5..fb23142cf535 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -2139,12 +2139,6 @@ void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
 	 */
 	mpcc_id = hubp->inst;
 
-	/* If there is no full update, don't need to touch MPC tree*/
-	if (!pipe_ctx->plane_state->update_flags.bits.full_update) {
-		mpc->funcs->update_blending(mpc, &blnd_cfg, mpcc_id);
-		return;
-	}
-
 	/* check if this MPCC is already being used */
 	new_mpcc = mpc->funcs->get_mpcc_for_dpp(mpc_tree_params, mpcc_id);
 	/* remove MPCC if being used */
-- 
2.24.0


* [PATCH 34/51] drm/amd/display: check for repeater when setting aux_rd_interval.
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (32 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 33/51] drm/amd/display: Remove flag check in mpcc update sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 35/51] drm/amd/display: Modify logic for when to wait for mpcc idle sunpeng.li
                   ` (16 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Wenjing Liu,
	abdoulaye berthe, George Shen, bhawanpreet.lakha

From: abdoulaye berthe <abdoulaye.berthe@amd.com>

[Why]
When training with a repeater, the aux read interval must be set to the
repeater-specific aux_rd_interval. This value is always 100us for CR.

[How]
Check for a repeater when setting the aux_rd_interval in channel
equalization.
Use the right offset into the aux_rd_interval array.
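
A small standalone sketch of the indexing (stand-in names; the real
is_repeater() check also requires non-transparent LTTPR mode): repeater
offsets are 1-based while the DPCD aux_rd_interval array is 0-based, so
repeater N reads entry N - 1, and offset 0 (the sink itself) keeps the
default interval.

#include <stdbool.h>
#include <stdint.h>

/* stand-in: offset 0 targets the sink (DPRX); offsets 1..N target repeaters */
static bool is_repeater_offset(uint32_t offset)
{
	return offset > 0;
}

/* pick the EQ wait value: only repeaters use the per-repeater DPCD entries */
uint8_t eq_aux_rd_interval(const uint8_t *lttpr_aux_rd_interval,
			   uint32_t offset, uint8_t dprx_default)
{
	if (!is_repeater_offset(offset))
		return dprx_default;

	return lttpr_aux_rd_interval[offset - 1];	/* 1-based -> 0-based */
}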

Signed-off-by: abdoulaye berthe <abdoulaye.berthe@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: George Shen <George.Shen@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 015fa0c52746..dfcd6421ee01 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -906,10 +906,10 @@ static enum link_training_result perform_channel_equalization_sequence(
 		/* 3. wait for receiver to lock-on*/
 		wait_time_microsec = lt_settings->eq_pattern_time;
 
-		if (!link->is_lttpr_mode_transparent)
+		if (is_repeater(link, offset))
 			wait_time_microsec =
 					translate_training_aux_read_interval(
-						link->dpcd_caps.lttpr_caps.aux_rd_interval[offset]);
+						link->dpcd_caps.lttpr_caps.aux_rd_interval[offset - 1]);
 
 		wait_for_training_aux_rd_interval(
 				link,
-- 
2.24.0


* [PATCH 35/51] drm/amd/display: Modify logic for when to wait for mpcc idle
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (33 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 34/51] drm/amd/display: check for repeater when setting aux_rd_interval sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 36/51] drm/amd/display: Remove redundant call sunpeng.li
                   ` (15 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Noah Abradjian,
	Dmytro Laktyushkin, bhawanpreet.lakha

From: Noah Abradjian <noah.abradjian@amd.com>

[Why]
I was advised that we may need to check for mpcc idle in more cases
than just when opp_changed is true. Also, mpcc_inst is equal to
pipe_idx, so the for loop is unnecessary.

[How]
Remove the opp_changed flag check and the mpcc_inst loop.

Signed-off-by: Noah Abradjian <noah.abradjian@amd.com>
Reviewed-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index fb23142cf535..2d093ff0a76c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1330,16 +1330,16 @@ static void dcn20_update_dchubp_dpp(
 	if (pipe_ctx->update_flags.bits.mpcc
 			|| plane_state->update_flags.bits.global_alpha_change
 			|| plane_state->update_flags.bits.per_pixel_alpha_change) {
-		/* Need mpcc to be idle if changing opp */
-		if (pipe_ctx->update_flags.bits.opp_changed) {
-			struct pipe_ctx *old_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[pipe_ctx->pipe_idx];
-			int mpcc_inst;
-
-			for (mpcc_inst = 0; mpcc_inst < MAX_PIPES; mpcc_inst++) {
-				if (!old_pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst])
-					continue;
+		// MPCC inst is equal to pipe index in practice
+		int mpcc_inst = pipe_ctx->pipe_idx;
+		int opp_inst;
+		int opp_count = dc->res_pool->res_cap->num_opp;
+
+		for (opp_inst = 0; opp_inst < opp_count; opp_inst++) {
+			if (dc->res_pool->opps[opp_inst]->mpcc_disconnect_pending[mpcc_inst]) {
 				dc->res_pool->mpc->funcs->wait_for_idle(dc->res_pool->mpc, mpcc_inst);
-				old_pipe_ctx->stream_res.opp->mpcc_disconnect_pending[mpcc_inst] = false;
+				dc->res_pool->opps[opp_inst]->mpcc_disconnect_pending[mpcc_inst] = false;
+				break;
 			}
 		}
 		hws->funcs.update_mpcc(dc, pipe_ctx);
-- 
2.24.0


* [PATCH 36/51] drm/amd/display: Remove redundant call
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (34 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 35/51] drm/amd/display: Modify logic for when to wait for mpcc idle sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 37/51] drm/amd/display: add dc dsc functions to return bpp range for pixel encoding sunpeng.li
                   ` (14 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Noah Abradjian,
	Yongqiang Sun, bhawanpreet.lakha

From: Noah Abradjian <noah.abradjian@amd.com>

[Why]
I was advised that we don't need this call to program_front_end, as
earlier and later calls in the same sequence are sufficient.

[How]
Remove the first call to program_front_end in dc_commit_state_no_check.

Signed-off-by: Noah Abradjian <noah.abradjian@amd.com>
Reviewed-by: Yongqiang Sun <yongqiang.sun@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 55f22a1c0aa5..39fe38cb39b6 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1167,8 +1167,6 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
 				context->stream_status[i].plane_count,
 				context); /* use new pipe config in new context */
 		}
-	if (dc->hwss.program_front_end_for_ctx)
-		dc->hwss.program_front_end_for_ctx(dc, context);
 
 	/* Program hardware */
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
-- 
2.24.0


* [PATCH 37/51] drm/amd/display: add dc dsc functions to return bpp range for pixel encoding
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (35 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 36/51] drm/amd/display: Remove redundant call sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 38/51] drm/amd/display: remove spam DSC log sunpeng.li
                   ` (13 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Nikola Cornij,
	Wenjing Liu, bhawanpreet.lakha

From: Wenjing Liu <Wenjing.Liu@amd.com>

[why]
Need to support 6 bpp for 420 pixel encoding only.

[how]
Add a dc function to determine what bpp range can be supported
for a given pixel encoding.

Signed-off-by: Wenjing Liu <Wenjing.Liu@amd.com>
Reviewed-by: Nikola Cornij <Nikola.Cornij@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc_dsc.h     |  8 +++--
 drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c | 38 +++++++++++++++++----
 2 files changed, 37 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc_dsc.h b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
index cc9915e545cd..d98b89bad353 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dsc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
@@ -52,8 +52,8 @@ bool dc_dsc_parse_dsc_dpcd(const uint8_t *dpcd_dsc_basic_data,
 bool dc_dsc_compute_bandwidth_range(
 		const struct display_stream_compressor *dsc,
 		const uint32_t dsc_min_slice_height_override,
-		const uint32_t min_kbps,
-		const uint32_t max_kbps,
+		const uint32_t min_bpp,
+		const uint32_t max_bpp,
 		const struct dsc_dec_dpcd_caps *dsc_sink_caps,
 		const struct dc_crtc_timing *timing,
 		struct dc_dsc_bw_range *range);
@@ -65,4 +65,8 @@ bool dc_dsc_compute_config(
 		uint32_t target_bandwidth_kbps,
 		const struct dc_crtc_timing *timing,
 		struct dc_dsc_config *dsc_cfg);
+
+bool dc_dsc_get_bpp_range_for_pixel_encoding(enum dc_pixel_encoding pixel_enc,
+		uint32_t *min_bpp,
+		uint32_t *max_bpp);
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
index ec86ba73a039..febae6cc7295 100644
--- a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+++ b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
@@ -31,16 +31,12 @@ struct dc_dsc_policy {
 	bool use_min_slices_h;
 	int max_slices_h; // Maximum available if 0
 	int min_sice_height; // Must not be less than 8
-	int max_target_bpp;
-	int min_target_bpp; // Minimum target bits per pixel
 };
 
 const struct dc_dsc_policy dsc_policy = {
 	.use_min_slices_h = true, // DSC Policy: Use minimum number of slices that fits the pixel clock
 	.max_slices_h = 0, // DSC Policy: Use max available slices (in our case 4 for or 8, depending on the mode)
 	.min_sice_height = 108, // DSC Policy: Use slice height recommended by VESA DSC Spreadsheet user guide
-	.max_target_bpp = 16,
-	.min_target_bpp = 8,
 };
 
 
@@ -374,7 +370,6 @@ static void get_dsc_bandwidth_range(
  *        or if it couldn't be applied based on DSC policy.
  */
 static bool decide_dsc_target_bpp_x16(
-		const struct dc_dsc_policy *policy,
 		const struct dsc_enc_caps *dsc_common_caps,
 		const int target_bandwidth_kbps,
 		const struct dc_crtc_timing *timing,
@@ -382,10 +377,13 @@ static bool decide_dsc_target_bpp_x16(
 {
 	bool should_use_dsc = false;
 	struct dc_dsc_bw_range range;
+	uint32_t min_target_bpp = 0;
+	uint32_t max_target_bpp = 0;
 
 	memset(&range, 0, sizeof(range));
 
-	get_dsc_bandwidth_range(policy->min_target_bpp, policy->max_target_bpp,
+	dc_dsc_get_bpp_range_for_pixel_encoding(timing->pixel_encoding, &min_target_bpp, &max_target_bpp);
+	get_dsc_bandwidth_range(min_target_bpp, max_target_bpp,
 			dsc_common_caps, timing, &range);
 	if (target_bandwidth_kbps >= range.stream_kbps) {
 		/* enough bandwidth without dsc */
@@ -599,7 +597,7 @@ static bool setup_dsc_config(
 		goto done;
 
 	if (target_bandwidth_kbps > 0) {
-		is_dsc_possible = decide_dsc_target_bpp_x16(&dsc_policy, &dsc_common_caps, target_bandwidth_kbps, timing, &target_bpp);
+		is_dsc_possible = decide_dsc_target_bpp_x16(&dsc_common_caps, target_bandwidth_kbps, timing, &target_bpp);
 		dsc_cfg->bits_per_pixel = target_bpp;
 	}
 	if (!is_dsc_possible)
@@ -906,3 +904,29 @@ bool dc_dsc_compute_config(
 			timing, dsc_min_slice_height_override, dsc_cfg);
 	return is_dsc_possible;
 }
+
+bool dc_dsc_get_bpp_range_for_pixel_encoding(enum dc_pixel_encoding pixel_enc,
+		uint32_t *min_bpp,
+		uint32_t *max_bpp)
+{
+	bool result = true;
+
+	switch (pixel_enc) {
+	case PIXEL_ENCODING_RGB:
+	case PIXEL_ENCODING_YCBCR444:
+	case PIXEL_ENCODING_YCBCR422:
+		*min_bpp = 8;
+		*max_bpp = 16;
+		break;
+	case PIXEL_ENCODING_YCBCR420:
+		*min_bpp = 6;
+		*max_bpp = 16;
+		break;
+	default:
+		*min_bpp = 0;
+		*max_bpp = 0;
+		result = false;
+	}
+
+	return result;
+}
-- 
2.24.0


* [PATCH 38/51] drm/amd/display: remove spam DSC log
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (36 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 37/51] drm/amd/display: add dc dsc functions to return bpp range for pixel encoding sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 39/51] drm/amd/display: add dsc policy getter sunpeng.li
                   ` (12 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Nikola Cornij,
	Wenjing Liu, bhawanpreet.lakha

From: Wenjing Liu <Wenjing.Liu@amd.com>

[why]
add_dsc_to_stream_resource could be called for validation.
Failing validation is completely fine; however, failing it inside
commit streams is bad. This code can be triggered from both contexts,
and the function itself cannot distinguish the caller, which makes it
impossible to output the log only in the meaningful case
(commit streams).

Signed-off-by: Wenjing Liu <Wenjing.Liu@amd.com>
Reviewed-by: Nikola Cornij <Nikola.Cornij@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index 2aa6c0be45b4..f853af413582 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1516,7 +1516,6 @@ static enum dc_status add_dsc_to_stream_resource(struct dc *dc,
 
 		/* The number of DSCs can be less than the number of pipes */
 		if (!pipe_ctx->stream_res.dsc) {
-			dm_output_to_console("No DSCs available\n");
 			result = DC_NO_DSC_RESOURCE;
 		}
 
-- 
2.24.0


* [PATCH 39/51] drm/amd/display: add dsc policy getter
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (37 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 38/51] drm/amd/display: remove spam DSC log sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 40/51] drm/amd/display: Limit NV12 chroma workaround sunpeng.li
                   ` (11 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, Nikola Cornij,
	Wenjing Liu, bhawanpreet.lakha

From: Wenjing Liu <Wenjing.Liu@amd.com>

dc needs to expose its internal dsc policy.
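
A worked, standalone sketch of the bpp portion of the exposed policy
(stand-in types; the limits follow the table in this patch): max target
bpp is 3 x bpc for RGB/4:4:4/4:2:2 and 1.5 x bpc for 4:2:0, clamped to 16,
so a 10-bit RGB timing yields 8..16 while an 8-bit 4:2:0 timing yields 6..12.

#include <stdint.h>
#include <stdio.h>

enum enc { ENC_RGB, ENC_YCBCR444, ENC_YCBCR422, ENC_YCBCR420 };

struct bpp_policy { uint32_t min_target_bpp, max_target_bpp; };

/* mirrors the bpp limits computed by the new policy getter */
struct bpp_policy get_bpp_policy(enum enc encoding, uint32_t bpc)
{
	struct bpp_policy p;

	if (encoding == ENC_YCBCR420) {
		p.min_target_bpp = 6;			/* DP spec limit for 4:2:0 */
		p.max_target_bpp = bpc * 3 / 2;		/* 1.5 x bpc */
	} else {
		p.min_target_bpp = 8;			/* DP spec limit */
		p.max_target_bpp = 3 * bpc;		/* 3 x bpc */
	}

	if (p.max_target_bpp > 16)			/* interop cap */
		p.max_target_bpp = 16;
	return p;
}

int main(void)
{
	struct bpp_policy rgb10 = get_bpp_policy(ENC_RGB, 10);
	struct bpp_policy y420_8 = get_bpp_policy(ENC_YCBCR420, 8);

	printf("RGB 10bpc:  %u..%u bpp\n", rgb10.min_target_bpp, rgb10.max_target_bpp);
	printf("4:2:0 8bpc: %u..%u bpp\n", y420_8.min_target_bpp, y420_8.max_target_bpp);
	return 0;
}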

Signed-off-by: Wenjing Liu <Wenjing.Liu@amd.com>
Reviewed-by: Nikola Cornij <Nikola.Cornij@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc_dsc.h     |  14 ++-
 drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c | 103 ++++++++++++--------
 2 files changed, 75 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc_dsc.h b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
index d98b89bad353..8ec09813ee17 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dsc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_dsc.h
@@ -45,6 +45,14 @@ struct display_stream_compressor {
 	int inst;
 };
 
+struct dc_dsc_policy {
+	bool use_min_slices_h;
+	int max_slices_h; // Maximum available if 0
+	int min_slice_height; // Must not be less than 8
+	uint32_t max_target_bpp;
+	uint32_t min_target_bpp;
+};
+
 bool dc_dsc_parse_dsc_dpcd(const uint8_t *dpcd_dsc_basic_data,
 		const uint8_t *dpcd_dsc_ext_data,
 		struct dsc_dec_dpcd_caps *dsc_sink_caps);
@@ -66,7 +74,7 @@ bool dc_dsc_compute_config(
 		const struct dc_crtc_timing *timing,
 		struct dc_dsc_config *dsc_cfg);
 
-bool dc_dsc_get_bpp_range_for_pixel_encoding(enum dc_pixel_encoding pixel_enc,
-		uint32_t *min_bpp,
-		uint32_t *max_bpp);
+void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing,
+		struct dc_dsc_policy *policy);
+
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
index febae6cc7295..d2423ad1fac2 100644
--- a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+++ b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
@@ -27,19 +27,6 @@
 #include <drm/drm_dp_helper.h>
 #include "dc.h"
 
-struct dc_dsc_policy {
-	bool use_min_slices_h;
-	int max_slices_h; // Maximum available if 0
-	int min_sice_height; // Must not be less than 8
-};
-
-const struct dc_dsc_policy dsc_policy = {
-	.use_min_slices_h = true, // DSC Policy: Use minimum number of slices that fits the pixel clock
-	.max_slices_h = 0, // DSC Policy: Use max available slices (in our case 4 for or 8, depending on the mode)
-	.min_sice_height = 108, // DSC Policy: Use slice height recommended by VESA DSC Spreadsheet user guide
-};
-
-
 /* This module's internal functions */
 
 static uint32_t dc_dsc_bandwidth_in_kbps_from_timing(
@@ -370,6 +357,7 @@ static void get_dsc_bandwidth_range(
  *        or if it couldn't be applied based on DSC policy.
  */
 static bool decide_dsc_target_bpp_x16(
+		const struct dc_dsc_policy *policy,
 		const struct dsc_enc_caps *dsc_common_caps,
 		const int target_bandwidth_kbps,
 		const struct dc_crtc_timing *timing,
@@ -377,13 +365,10 @@ static bool decide_dsc_target_bpp_x16(
 {
 	bool should_use_dsc = false;
 	struct dc_dsc_bw_range range;
-	uint32_t min_target_bpp = 0;
-	uint32_t max_target_bpp = 0;
 
 	memset(&range, 0, sizeof(range));
 
-	dc_dsc_get_bpp_range_for_pixel_encoding(timing->pixel_encoding, &min_target_bpp, &max_target_bpp);
-	get_dsc_bandwidth_range(min_target_bpp, max_target_bpp,
+	get_dsc_bandwidth_range(policy->min_target_bpp, policy->max_target_bpp,
 			dsc_common_caps, timing, &range);
 	if (target_bandwidth_kbps >= range.stream_kbps) {
 		/* enough bandwidth without dsc */
@@ -579,9 +564,11 @@ static bool setup_dsc_config(
 	bool is_dsc_possible = false;
 	int pic_height;
 	int slice_height;
+	struct dc_dsc_policy policy;
 
 	memset(dsc_cfg, 0, sizeof(struct dc_dsc_config));
 
+	dc_dsc_get_policy_for_timing(timing, &policy);
 	pic_width = timing->h_addressable + timing->h_border_left + timing->h_border_right;
 	pic_height = timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
 
@@ -597,7 +584,12 @@ static bool setup_dsc_config(
 		goto done;
 
 	if (target_bandwidth_kbps > 0) {
-		is_dsc_possible = decide_dsc_target_bpp_x16(&dsc_common_caps, target_bandwidth_kbps, timing, &target_bpp);
+		is_dsc_possible = decide_dsc_target_bpp_x16(
+				&policy,
+				&dsc_common_caps,
+				target_bandwidth_kbps,
+				timing,
+				&target_bpp);
 		dsc_cfg->bits_per_pixel = target_bpp;
 	}
 	if (!is_dsc_possible)
@@ -699,20 +691,20 @@ static bool setup_dsc_config(
 	if (!is_dsc_possible)
 		goto done;
 
-	if (dsc_policy.use_min_slices_h) {
+	if (policy.use_min_slices_h) {
 		if (min_slices_h > 0)
 			num_slices_h = min_slices_h;
 		else if (max_slices_h > 0) { // Fall back to max slices if min slices is not working out
-			if (dsc_policy.max_slices_h)
-				num_slices_h = min(dsc_policy.max_slices_h, max_slices_h);
+			if (policy.max_slices_h)
+				num_slices_h = min(policy.max_slices_h, max_slices_h);
 			else
 				num_slices_h = max_slices_h;
 		} else
 			is_dsc_possible = false;
 	} else {
 		if (max_slices_h > 0) {
-			if (dsc_policy.max_slices_h)
-				num_slices_h = min(dsc_policy.max_slices_h, max_slices_h);
+			if (policy.max_slices_h)
+				num_slices_h = min(policy.max_slices_h, max_slices_h);
 			else
 				num_slices_h = max_slices_h;
 		} else if (min_slices_h > 0) // Fall back to min slices if max slices is not possible
@@ -734,7 +726,7 @@ static bool setup_dsc_config(
 	// Slice height (i.e. number of slices per column): start with policy and pick the first one that height is divisible by.
 	// For 4:2:0 make sure the slice height is divisible by 2 as well.
 	if (min_slice_height_override == 0)
-		slice_height = min(dsc_policy.min_sice_height, pic_height);
+		slice_height = min(policy.min_slice_height, pic_height);
 	else
 		slice_height = min(min_slice_height_override, pic_height);
 
@@ -905,28 +897,61 @@ bool dc_dsc_compute_config(
 	return is_dsc_possible;
 }
 
-bool dc_dsc_get_bpp_range_for_pixel_encoding(enum dc_pixel_encoding pixel_enc,
-		uint32_t *min_bpp,
-		uint32_t *max_bpp)
+void dc_dsc_get_policy_for_timing(const struct dc_crtc_timing *timing, struct dc_dsc_policy *policy)
 {
-	bool result = true;
+	uint32_t bpc = 0;
+
+	policy->min_target_bpp = 0;
+	policy->max_target_bpp = 0;
+
+	/* DSC Policy: Use minimum number of slices that fits the pixel clock */
+	policy->use_min_slices_h = true;
 
-	switch (pixel_enc) {
+	/* DSC Policy: Use max available slices
+	 * (in our case 4 for or 8, depending on the mode)
+	 * (in our case 4 or 8, depending on the mode)
+	policy->max_slices_h = 0;
+
+	/* DSC Policy: Use slice height recommended
+	 * by VESA DSC Spreadsheet user guide
+	 */
+	policy->min_slice_height = 108;
+
+	/* DSC Policy: follow DP specs with an internal upper limit to 16 bpp
+	 * for better interoperability
+	 */
+	switch (timing->display_color_depth) {
+	case COLOR_DEPTH_888:
+		bpc = 8;
+		break;
+	case COLOR_DEPTH_101010:
+		bpc = 10;
+		break;
+	case COLOR_DEPTH_121212:
+		bpc = 12;
+		break;
+	default:
+		return;
+	}
+	switch (timing->pixel_encoding) {
 	case PIXEL_ENCODING_RGB:
 	case PIXEL_ENCODING_YCBCR444:
-	case PIXEL_ENCODING_YCBCR422:
-		*min_bpp = 8;
-		*max_bpp = 16;
+	case PIXEL_ENCODING_YCBCR422: /* assume no YCbCr422 native support */
+		/* DP specs limits to 8 */
+		policy->min_target_bpp = 8;
+		/* DP specs limits to 3 x bpc */
+		policy->max_target_bpp = 3 * bpc;
 		break;
 	case PIXEL_ENCODING_YCBCR420:
-		*min_bpp = 6;
-		*max_bpp = 16;
+		/* DP specs limits to 6 */
+		policy->min_target_bpp = 6;
+		/* DP specs limits to 1.5 x bpc assume bpc is an even number */
+		policy->max_target_bpp = bpc * 3 / 2;
 		break;
 	default:
-		*min_bpp = 0;
-		*max_bpp = 0;
-		result = false;
+		return;
 	}
-
-	return result;
+	/* internal upper limit to 16 bpp */
+	if (policy->max_target_bpp > 16)
+		policy->max_target_bpp = 16;
 }
-- 
2.24.0


* [PATCH 40/51] drm/amd/display: Limit NV12 chroma workaround
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (38 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 39/51] drm/amd/display: add dsc policy getter sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 41/51] drm/amd/display: fix cursor positioning for multiplane cases sunpeng.li
                   ` (10 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Anthony Koo

From: Anthony Koo <Anthony.Koo@amd.com>

[Why]
The NV12 chroma workaround is causing a green line at the bottom of
SDR 480p MPO playback.

[How]
Limit the workaround to viewport heights greater than 512.

Signed-off-by: Anthony Koo <Anthony.Koo@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
index 38661b9c61f8..332bf3d3a664 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_hubp.c
@@ -200,7 +200,7 @@ void hubp21_set_viewport(
 	 *	Disable w/a when rotated 180 degrees, causes vertical chroma offset
 	 */
 	patched_viewport_height = viewport_c->height;
-	if (viewport_c->height != 0 && debug->nv12_iflip_vm_wa &&
+	if (debug->nv12_iflip_vm_wa && viewport_c->height > 512 &&
 			rotation != ROTATION_ANGLE_180) {
 		int pte_row_height = 0;
 		int pte_rows = 0;
-- 
2.24.0


* [PATCH 41/51] drm/amd/display: fix cursor positioning for multiplane cases
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (39 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 40/51] drm/amd/display: Limit NV12 chroma workaround sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 42/51] drm/amd/display: Fix screen tearing on vrr tests sunpeng.li
                   ` (9 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Leo Li, harry.wentland, rodrigo.siqueira,
	bhawanpreet.lakha, Nicholas Kazlauskas, Anthony Koo

From: Aric Cyr <aric.cyr@amd.com>

[Why]
Cursor position needs to take into account plane scaling as well.

[How]
Translate cursor coords from stream space to plane space.

Signed-off-by: Aric Cyr <aric.cyr@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
Acked-by: Nicholas Kazlauskas <Nicholas.Kazlauskas@amd.com>
---
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c | 33 ++++++++++++++-----
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 9551fefb9d1d..61d2f1233f8c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2913,15 +2913,30 @@ void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
 		.rotation = pipe_ctx->plane_state->rotation,
 		.mirror = pipe_ctx->plane_state->horizontal_mirror
 	};
-	uint32_t x_plane = pipe_ctx->plane_state->dst_rect.x;
-	uint32_t y_plane = pipe_ctx->plane_state->dst_rect.y;
-	uint32_t x_offset = min(x_plane, pos_cpy.x);
-	uint32_t y_offset = min(y_plane, pos_cpy.y);
-
-	pos_cpy.x -= x_offset;
-	pos_cpy.y -= y_offset;
-	pos_cpy.x_hotspot += (x_plane - x_offset);
-	pos_cpy.y_hotspot += (y_plane - y_offset);
+
+	int x_plane = pipe_ctx->plane_state->dst_rect.x;
+	int y_plane = pipe_ctx->plane_state->dst_rect.y;
+	int x_pos = pos_cpy.x;
+	int y_pos = pos_cpy.y;
+
+	// translate cursor from stream space to plane space
+	x_pos = (x_pos - x_plane) * pipe_ctx->plane_state->src_rect.width /
+			pipe_ctx->plane_state->dst_rect.width;
+	y_pos = (y_pos - y_plane) * pipe_ctx->plane_state->src_rect.height /
+			pipe_ctx->plane_state->dst_rect.height;
+
+	if (x_pos < 0) {
+		pos_cpy.x_hotspot -= x_pos;
+		x_pos = 0;
+	}
+
+	if (y_pos < 0) {
+		pos_cpy.y_hotspot -= y_pos;
+		y_pos = 0;
+	}
+
+	pos_cpy.x = (uint32_t)x_pos;
+	pos_cpy.y = (uint32_t)y_pos;
 
 	if (pipe_ctx->plane_state->address.type
 			== PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
-- 
2.24.0
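
As a rough standalone illustration of the translation above (not DC code; the plane and cursor numbers are made up), the cursor position is shifted into the plane's coordinate system, scaled by the src/dst ratio, and any negative result is folded into the hotspot:

#include <stdio.h>

int main(void)
{
	/* Assumed example: the plane destination starts at x = 100 and a
	 * 1920-wide source is scaled into a 960-wide destination. */
	int x_plane = 100, src_width = 1920, dst_width = 960;
	int x_pos = 70, x_hotspot = 4;	/* cursor position in stream space */

	/* translate cursor from stream space to plane space */
	x_pos = (x_pos - x_plane) * src_width / dst_width;

	/* clamp negative positions and fold the difference into the hotspot */
	if (x_pos < 0) {
		x_hotspot -= x_pos;
		x_pos = 0;
	}

	printf("plane-space x = %d, hotspot = %d\n", x_pos, x_hotspot); /* 0, 64 */
	return 0;
}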


* [PATCH 42/51] drm/amd/display: Fix screen tearing on vrr tests
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (40 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 41/51] drm/amd/display: fix cursor positioning for multiplane cases sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 43/51] drm/amd/display: update dispclk and dppclk vco frequency sunpeng.li
                   ` (8 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Leo Li, harry.wentland, rodrigo.siqueira, Amanda Liu,
	bhawanpreet.lakha

From: Amanda Liu <amanda.liu@amd.com>

[Why]
Screen tearing is present in tests when the frame rate is set to
certain values.

[How]
Revert previous optimizations for low frame rates.

Signed-off-by: Amanda Liu <amanda.liu@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../amd/display/modules/freesync/freesync.c   | 32 ++++++++-----------
 .../amd/display/modules/inc/mod_freesync.h    |  1 -
 2 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
index 16e69bbc69aa..5437b50e9f90 100644
--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
@@ -37,8 +37,8 @@
 #define STATIC_SCREEN_RAMP_DELTA_REFRESH_RATE_PER_FRAME ((1000 / 60) * 65)
 /* Number of elements in the render times cache array */
 #define RENDER_TIMES_MAX_COUNT 10
-/* Threshold to exit/exit BTR (to avoid frequent enter-exits at the lower limit) */
-#define BTR_MAX_MARGIN 2500
+/* Threshold to exit BTR (to avoid frequent enter-exits at the lower limit) */
+#define BTR_EXIT_MARGIN 2000
 /* Threshold to change BTR multiplier (to avoid frequent changes) */
 #define BTR_DRIFT_MARGIN 2000
 /*Threshold to exit fixed refresh rate*/
@@ -254,22 +254,24 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
 	unsigned int delta_from_mid_point_in_us_1 = 0xFFFFFFFF;
 	unsigned int delta_from_mid_point_in_us_2 = 0xFFFFFFFF;
 	unsigned int frames_to_insert = 0;
+	unsigned int min_frame_duration_in_ns = 0;
+	unsigned int max_render_time_in_us = in_out_vrr->max_duration_in_us;
 	unsigned int delta_from_mid_point_delta_in_us;
-	unsigned int max_render_time_in_us =
-			in_out_vrr->max_duration_in_us - in_out_vrr->btr.margin_in_us;
+
+	min_frame_duration_in_ns = ((unsigned int) (div64_u64(
+		(1000000000ULL * 1000000),
+		in_out_vrr->max_refresh_in_uhz)));
 
 	/* Program BTR */
-	if ((last_render_time_in_us + in_out_vrr->btr.margin_in_us / 2) < max_render_time_in_us) {
+	if (last_render_time_in_us + BTR_EXIT_MARGIN < max_render_time_in_us) {
 		/* Exit Below the Range */
 		if (in_out_vrr->btr.btr_active) {
 			in_out_vrr->btr.frame_counter = 0;
 			in_out_vrr->btr.btr_active = false;
 		}
-	} else if (last_render_time_in_us > (max_render_time_in_us + in_out_vrr->btr.margin_in_us / 2)) {
+	} else if (last_render_time_in_us > max_render_time_in_us) {
 		/* Enter Below the Range */
-		if (!in_out_vrr->btr.btr_active) {
-			in_out_vrr->btr.btr_active = true;
-		}
+		in_out_vrr->btr.btr_active = true;
 	}
 
 	/* BTR set to "not active" so disengage */
@@ -325,9 +327,7 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
 		/* Choose number of frames to insert based on how close it
 		 * can get to the mid point of the variable range.
 		 */
-		if ((frame_time_in_us / mid_point_frames_ceil) > in_out_vrr->min_duration_in_us &&
-				(delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2 ||
-						mid_point_frames_floor < 2)) {
+		if (delta_from_mid_point_in_us_1 < delta_from_mid_point_in_us_2) {
 			frames_to_insert = mid_point_frames_ceil;
 			delta_from_mid_point_delta_in_us = delta_from_mid_point_in_us_2 -
 					delta_from_mid_point_in_us_1;
@@ -343,7 +343,7 @@ static void apply_below_the_range(struct core_freesync *core_freesync,
 		if (in_out_vrr->btr.frames_to_insert != 0 &&
 				delta_from_mid_point_delta_in_us < BTR_DRIFT_MARGIN) {
 			if (((last_render_time_in_us / in_out_vrr->btr.frames_to_insert) <
-					max_render_time_in_us) &&
+					in_out_vrr->max_duration_in_us) &&
 				((last_render_time_in_us / in_out_vrr->btr.frames_to_insert) >
 					in_out_vrr->min_duration_in_us))
 				frames_to_insert = in_out_vrr->btr.frames_to_insert;
@@ -796,11 +796,6 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
 		refresh_range = in_out_vrr->max_refresh_in_uhz -
 				in_out_vrr->min_refresh_in_uhz;
 
-		in_out_vrr->btr.margin_in_us = in_out_vrr->max_duration_in_us -
-				2 * in_out_vrr->min_duration_in_us;
-		if (in_out_vrr->btr.margin_in_us > BTR_MAX_MARGIN)
-			in_out_vrr->btr.margin_in_us = BTR_MAX_MARGIN;
-
 		in_out_vrr->supported = true;
 	}
 
@@ -816,7 +811,6 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
 	in_out_vrr->btr.inserted_duration_in_us = 0;
 	in_out_vrr->btr.frames_to_insert = 0;
 	in_out_vrr->btr.frame_counter = 0;
-
 	in_out_vrr->btr.mid_point_in_us =
 				(in_out_vrr->min_duration_in_us +
 				 in_out_vrr->max_duration_in_us) / 2;
diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
index dbe7835aabcf..dc187844d10b 100644
--- a/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
+++ b/drivers/gpu/drm/amd/display/modules/inc/mod_freesync.h
@@ -92,7 +92,6 @@ struct mod_vrr_params_btr {
 	uint32_t inserted_duration_in_us;
 	uint32_t frames_to_insert;
 	uint32_t frame_counter;
-	uint32_t margin_in_us;
 };
 
 struct mod_vrr_params_fixed_refresh {
-- 
2.24.0
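
For context, the "Below the Range" logic above picks how many times to show each rendered frame so that the resulting per-frame duration lands as close as possible to the middle of the VRR range. A standalone sketch (not freesync module code; the durations are illustrative):

#include <stdio.h>

int main(void)
{
	unsigned int min_duration_us = 8333;		/* 120Hz upper bound */
	unsigned int max_duration_us = 25000;		/* 40Hz lower bound  */
	unsigned int mid_point_us = (min_duration_us + max_duration_us) / 2;
	unsigned int last_render_time_us = 40000;	/* 25 fps content, below the range */

	unsigned int frames_ceil = (last_render_time_us + mid_point_us - 1) / mid_point_us;
	unsigned int frames_floor = last_render_time_us / mid_point_us;
	unsigned int per_frame_ceil = last_render_time_us / frames_ceil;
	unsigned int per_frame_floor = last_render_time_us / frames_floor;

	unsigned int delta_ceil = per_frame_ceil > mid_point_us ?
			per_frame_ceil - mid_point_us : mid_point_us - per_frame_ceil;
	unsigned int delta_floor = per_frame_floor > mid_point_us ?
			per_frame_floor - mid_point_us : mid_point_us - per_frame_floor;

	/* choose whichever frame count lands closer to the mid point */
	unsigned int frames_to_insert =
			delta_ceil < delta_floor ? frames_ceil : frames_floor;

	printf("show the frame %u times, ~%u us each\n", frames_to_insert,
	       last_render_time_us / frames_to_insert);	/* 3 times, ~13333 us */
	return 0;
}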


* [PATCH 43/51] drm/amd/display: update dispclk and dppclk vco frequency
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (41 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 42/51] drm/amd/display: Fix screen tearing on vrr tests sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 44/51] drm/amd/display: Implement DePQ for DCN2 sunpeng.li
                   ` (7 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Eric Yang, Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha

From: Eric Yang <Eric.Yang2@amd.com>

The value obtained from DV does not allow an 8k60 CTA mode with DSC to
pass. After checking the real value used in hardware, the correct value
is 3600, which allows that mode.

Signed-off-by: Eric Yang <Eric.Yang2@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
index fef11d57d2b7..8fa63929d3b9 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
@@ -255,7 +255,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn2_1_soc = {
 	.vmm_page_size_bytes = 4096,
 	.dram_clock_change_latency_us = 23.84,
 	.return_bus_width_bytes = 64,
-	.dispclk_dppclk_vco_speed_mhz = 3550,
+	.dispclk_dppclk_vco_speed_mhz = 3600,
 	.xfc_bus_transport_time_us = 4,
 	.xfc_xbuf_latency_tolerance_us = 4,
 	.use_urgent_burst_bw = 1,
-- 
2.24.0


* [PATCH 44/51] drm/amd/display: Implement DePQ for DCN2
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (42 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 43/51] drm/amd/display: update dispclk and dppclk vco frequency sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:33 ` [PATCH 45/51] drm/amd/display: 3.2.62 sunpeng.li
                   ` (6 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: Krunoslav Kovac, Leo Li, harry.wentland, rodrigo.siqueira,
	Reza Amini, bhawanpreet.lakha

From: Reza Amini <Reza.Amini@amd.com>

[Why]
More color management support is needed for 10-bit
surfaces.

[How]
Provide DePQ support for 10-bit surfaces.

Signed-off-by: Reza Amini <Reza.Amini@amd.com>
Reviewed-by: Krunoslav Kovac <Krunoslav.Kovac@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c | 3 +++
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c  | 5 +++++
 2 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c
index 2d112c316424..05a3e7f97ef0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp_cm.c
@@ -149,6 +149,9 @@ void dpp2_set_degamma(
 	case IPP_DEGAMMA_MODE_HW_xvYCC:
 		REG_UPDATE(CM_DGAM_CONTROL, CM_DGAM_LUT_MODE, 2);
 			break;
+	case IPP_DEGAMMA_MODE_USER_PWL:
+		REG_UPDATE(CM_DGAM_CONTROL, CM_DGAM_LUT_MODE, 3);
+		break;
 	default:
 		BREAK_TO_DEBUGGER();
 		break;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 2d093ff0a76c..ec9838d6e0ee 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -878,6 +878,11 @@ bool dcn20_set_input_transfer_func(struct dc *dc,
 					IPP_DEGAMMA_MODE_BYPASS);
 			break;
 		case TRANSFER_FUNCTION_PQ:
+			dpp_base->funcs->dpp_set_degamma(dpp_base, IPP_DEGAMMA_MODE_USER_PWL);
+			cm_helper_translate_curve_to_degamma_hw_format(tf, &dpp_base->degamma_params);
+			dpp_base->funcs->dpp_program_degamma_pwl(dpp_base, &dpp_base->degamma_params);
+			result = true;
+			break;
 		default:
 			result = false;
 			break;
-- 
2.24.0
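
"DePQ" here refers to removing the PQ (SMPTE ST 2084) encoding from the input surface through the degamma block, using a user-programmed PWL instead of a fixed-function curve. Assuming the PWL approximates the standard PQ EOTF, the target transfer function looks like the following standalone sketch (not DC code; build with -lm):

#include <math.h>
#include <stdio.h>

/* SMPTE ST 2084 (PQ) EOTF: input E and output Y are normalized to [0,1],
 * where Y = 1.0 corresponds to 10000 nits. */
static double pq_eotf(double E)
{
	const double m1 = 2610.0 / 16384.0;		/* 0.1593017578125 */
	const double m2 = 2523.0 / 4096.0 * 128.0;	/* 78.84375        */
	const double c1 = 3424.0 / 4096.0;		/* 0.8359375       */
	const double c2 = 2413.0 / 4096.0 * 32.0;	/* 18.8515625      */
	const double c3 = 2392.0 / 4096.0 * 32.0;	/* 18.6875         */
	double p = pow(E, 1.0 / m2);

	return pow(fmax(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1);
}

int main(void)
{
	/* A PQ code value of ~0.508 corresponds to roughly 100 nits. */
	printf("PQ 0.508 -> %.4f of 10000 nits\n", pq_eotf(0.508));
	return 0;
}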


* [PATCH 45/51] drm/amd/display: 3.2.62
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (43 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 44/51] drm/amd/display: Implement DePQ for DCN2 sunpeng.li
@ 2019-12-02 17:33 ` sunpeng.li
  2019-12-02 17:34 ` [PATCH 46/51] drm/amd/display: Change HDR_MULT check sunpeng.li
                   ` (5 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:33 UTC (permalink / raw)
  To: amd-gfx
  Cc: bhawanpreet.lakha, rodrigo.siqueira, Aric Cyr, Leo Li, harry.wentland

From: Aric Cyr <aric.cyr@amd.com>

Signed-off-by: Aric Cyr <aric.cyr@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 4c7a2882a512..c24639080371 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -39,7 +39,7 @@
 #include "inc/hw/dmcu.h"
 #include "dml/display_mode_lib.h"
 
-#define DC_VER "3.2.61"
+#define DC_VER "3.2.62"
 
 #define MAX_SURFACES 3
 #define MAX_PLANES 6
-- 
2.24.0


* [PATCH 46/51] drm/amd/display: Change HDR_MULT check
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (44 preceding siblings ...)
  2019-12-02 17:33 ` [PATCH 45/51] drm/amd/display: 3.2.62 sunpeng.li
@ 2019-12-02 17:34 ` sunpeng.li
  2019-12-02 17:34 ` [PATCH 47/51] drm/amd/display: Increase the number of retries after AUX DEFER sunpeng.li
                   ` (4 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:34 UTC (permalink / raw)
  To: amd-gfx
  Cc: Aric Cyr, Krunoslav Kovac, Leo Li, harry.wentland,
	rodrigo.siqueira, bhawanpreet.lakha

From: Krunoslav Kovac <Krunoslav.Kovac@amd.com>

[Why]
Currently we require HDR_MULT >= 1.0, but there are scenarios
where values < 1.0 are needed.

[How]
Only guard against 0, since a zero multiplier blacks out the image.
It is up to higher-level logic to decide which HDR_MULT
values are allowed in each particular case.

Signed-off-by: Krunoslav Kovac <Krunoslav.Kovac@amd.com>
Reviewed-by: Aric Cyr <Aric.Cyr@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 61d2f1233f8c..3996fef56948 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -2390,17 +2390,13 @@ void dcn10_set_hdr_multiplier(struct pipe_ctx *pipe_ctx)
 	struct fixed31_32 multiplier = pipe_ctx->plane_state->hdr_mult;
 	uint32_t hw_mult = 0x1f000; // 1.0 default multiplier
 	struct custom_float_format fmt;
-	bool mult_negative; // True if fixed31_32 sign bit indicates negative value
-	uint32_t mult_int; // int component of fixed31_32
 
 	fmt.exponenta_bits = 6;
 	fmt.mantissa_bits = 12;
 	fmt.sign = true;
 
-	mult_negative = multiplier.value >> 63 != 0;
-	mult_int = multiplier.value >> 32;
 
-	if (mult_int && !mult_negative) // Check if greater than 1
+	if (!dc_fixpt_eq(multiplier, dc_fixpt_from_int(0))) // check != 0
 		convert_to_custom_float_format(multiplier, &fmt, &hw_mult);
 
 	pipe_ctx->plane_res.dpp->funcs->dpp_set_hdr_multiplier(
-- 
2.24.0
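
A minimal sketch of what the new check amounts to, assuming only the fixed31_32 layout visible above (a signed 64-bit value with 32 fractional bits); the struct and helpers below are illustrative, not DC code:

#include <stdint.h>
#include <stdio.h>

struct fixed31_32 { int64_t value; };	/* signed, 32 fractional bits */

#define FIXPT_ONE (1LL << 32)

/* Old guard: only multipliers with a positive integer part (>= 1.0) pass. */
static int old_check(struct fixed31_32 m)
{
	int negative = (m.value >> 63) != 0;
	uint32_t int_part = (uint64_t)m.value >> 32;

	return int_part && !negative;
}

/* New guard: anything except an exact zero passes. */
static int new_check(struct fixed31_32 m)
{
	return m.value != 0;
}

int main(void)
{
	struct fixed31_32 half = { FIXPT_ONE / 2 };	/* 0.5 */

	printf("0.5 multiplier: old=%d new=%d\n", old_check(half), new_check(half));
	return 0;
}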


* [PATCH 47/51] drm/amd/display: Increase the number of retries after AUX DEFER
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (45 preceding siblings ...)
  2019-12-02 17:34 ` [PATCH 46/51] drm/amd/display: Change HDR_MULT check sunpeng.li
@ 2019-12-02 17:34 ` sunpeng.li
  2019-12-02 17:34 ` [PATCH 48/51] drm/amd/display: Compare clock state member to determine optimization sunpeng.li
                   ` (3 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:34 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, George Shen, rodrigo.siqueira,
	Abdoulaye Berthe, harry.wentland, bhawanpreet.lakha

From: George Shen <george.shen@amd.com>

[Why]
When a timeout occurs after a DEFER, some devices require more retries
than in the case of a regular timeout.

[How]
When a timeout occurs, check whether a DEFER occurred before the
timeout; if so, retry up to MAX_DEFER_RETRIES times instead of
MAX_TIMEOUT_RETRIES.

Signed-off-by: George Shen <george.shen@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Abdoulaye Berthe <Abdoulaye.Berthe@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dce/dce_aux.c | 32 ++++++++++++++------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
index f7626cd70ec8..191b68b8163a 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
@@ -611,6 +611,8 @@ bool dce_aux_transfer_with_retries(struct ddc_service *ddc,
 	uint8_t reply;
 	bool payload_reply = true;
 	enum aux_channel_operation_result operation_result;
+	bool retry_on_defer = false;
+
 	int aux_ack_retries = 0,
 		aux_defer_retries = 0,
 		aux_i2c_defer_retries = 0,
@@ -641,8 +643,9 @@ bool dce_aux_transfer_with_retries(struct ddc_service *ddc,
 			break;
 
 			case AUX_TRANSACTION_REPLY_AUX_DEFER:
-			case AUX_TRANSACTION_REPLY_I2C_OVER_AUX_NACK:
 			case AUX_TRANSACTION_REPLY_I2C_OVER_AUX_DEFER:
+				retry_on_defer = true;
+			case AUX_TRANSACTION_REPLY_I2C_OVER_AUX_NACK:
 				if (++aux_defer_retries >= AUX_MAX_DEFER_RETRIES) {
 					goto fail;
 				} else {
@@ -675,15 +678,24 @@ bool dce_aux_transfer_with_retries(struct ddc_service *ddc,
 			break;
 
 		case AUX_CHANNEL_OPERATION_FAILED_TIMEOUT:
-			if (++aux_timeout_retries >= AUX_MAX_TIMEOUT_RETRIES)
-				goto fail;
-			else {
-				/*
-				 * DP 1.4, 2.8.2:  AUX Transaction Response/Reply Timeouts
-				 * According to the DP spec there should be 3 retries total
-				 * with a 400us wait inbetween each. Hardware already waits
-				 * for 550us therefore no wait is required here.
-				 */
+			// Check whether a DEFER had occurred before the timeout.
+			// If so, treat timeout as a DEFER.
+			if (retry_on_defer) {
+				if (++aux_defer_retries >= AUX_MAX_DEFER_RETRIES)
+					goto fail;
+				else if (payload->defer_delay > 0)
+					msleep(payload->defer_delay);
+			} else {
+				if (++aux_timeout_retries >= AUX_MAX_TIMEOUT_RETRIES)
+					goto fail;
+				else {
+					/*
+					 * DP 1.4, 2.8.2:  AUX Transaction Response/Reply Timeouts
+					 * According to the DP spec there should be 3 retries total
+					 * with a 400us wait inbetween each. Hardware already waits
+					 * for 550us therefore no wait is required here.
+					 */
+				}
 			}
 			break;
 
-- 
2.24.0
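
A condensed, hypothetical sketch of the retry accounting introduced above; the constants and names are illustrative, not the driver's actual definitions. The point is that a timeout following an earlier DEFER is charged against the larger defer-retry budget:

#include <stdbool.h>
#include <stdio.h>

#define MAX_TIMEOUT_RETRIES 3
#define MAX_DEFER_RETRIES   7

/* Returns true if the transaction should be retried after a timeout. */
static bool retry_after_timeout(bool retry_on_defer,
				int *defer_retries, int *timeout_retries)
{
	if (retry_on_defer)	/* a DEFER was seen earlier: larger budget */
		return ++(*defer_retries) < MAX_DEFER_RETRIES;

	return ++(*timeout_retries) < MAX_TIMEOUT_RETRIES;
}

int main(void)
{
	int defers = 0, timeouts = 0;

	/* With a prior DEFER, six more attempts are allowed here instead of
	 * the two a plain timeout would get. */
	while (retry_after_timeout(true, &defers, &timeouts))
		;

	printf("gave up when the defer retry counter reached %d\n", defers);
	return 0;
}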


* [PATCH 48/51] drm/amd/display: Compare clock state member to determine optimization.
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (46 preceding siblings ...)
  2019-12-02 17:34 ` [PATCH 47/51] drm/amd/display: Increase the number of retries after AUX DEFER sunpeng.li
@ 2019-12-02 17:34 ` sunpeng.li
  2019-12-02 17:34 ` [PATCH 49/51] drm/amd/display: update dml related structs sunpeng.li
                   ` (2 subsequent siblings)
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:34 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, Yongqiang Sun,
	harry.wentland, bhawanpreet.lakha

From: Yongqiang Sun <yongqiang.sun@amd.com>

[Why]
Passive flips are always requested on RN because the clock state
comparison used to decide whether optimization is possible is incorrect.

[How]
Instead of calling memcmp, compare the individual clock state members
to determine the condition.

Signed-off-by: Yongqiang Sun <yongqiang.sun@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 .../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c  | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 37230d3d94a0..de51ef12e33a 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -471,12 +471,28 @@ static void rn_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
 
 }
 
+static bool rn_are_clock_states_equal(struct dc_clocks *a,
+		struct dc_clocks *b)
+{
+	if (a->dispclk_khz != b->dispclk_khz)
+		return false;
+	else if (a->dppclk_khz != b->dppclk_khz)
+		return false;
+	else if (a->dcfclk_khz != b->dcfclk_khz)
+		return false;
+	else if (a->dcfclk_deep_sleep_khz != b->dcfclk_deep_sleep_khz)
+		return false;
+
+	return true;
+}
+
+
 static struct clk_mgr_funcs dcn21_funcs = {
 	.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
 	.update_clocks = rn_update_clocks,
 	.init_clocks = rn_init_clocks,
 	.enable_pme_wa = rn_enable_pme_wa,
-	/* .dump_clk_registers = rn_dump_clk_registers, */
+	.are_clock_states_equal = rn_are_clock_states_equal,
 	.notify_wm_ranges = rn_notify_wm_ranges
 };
 
-- 
2.24.0
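
A hypothetical illustration (not the driver's struct) of why the member-by-member comparison is preferred over memcmp: the full clock state can contain fields, or padding, that differ even when none of the clocks relevant for optimization changed, so a whole-struct memcmp reports a change on every flip:

#include <stdbool.h>
#include <string.h>
#include <stdio.h>

struct fake_clocks {
	int dispclk_khz;
	int dppclk_khz;
	int dcfclk_khz;
	int dcfclk_deep_sleep_khz;
	int unrelated_counter;		/* changes every flip in this example */
};

static bool clocks_equal(const struct fake_clocks *a, const struct fake_clocks *b)
{
	return a->dispclk_khz == b->dispclk_khz &&
	       a->dppclk_khz == b->dppclk_khz &&
	       a->dcfclk_khz == b->dcfclk_khz &&
	       a->dcfclk_deep_sleep_khz == b->dcfclk_deep_sleep_khz;
}

int main(void)
{
	struct fake_clocks cur = { 600000, 600000, 500000, 400000, 1 };
	struct fake_clocks next = cur;

	next.unrelated_counter = 2;	/* no clock actually changed */

	printf("memcmp equal: %d, member compare equal: %d\n",
	       memcmp(&cur, &next, sizeof(cur)) == 0, clocks_equal(&cur, &next));
	return 0;
}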


* [PATCH 49/51] drm/amd/display: update dml related structs
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (47 preceding siblings ...)
  2019-12-02 17:34 ` [PATCH 48/51] drm/amd/display: Compare clock state member to determine optimization sunpeng.li
@ 2019-12-02 17:34 ` sunpeng.li
  2019-12-02 17:34 ` [PATCH 50/51] drm/amd/display: correct log message for lttpr sunpeng.li
  2019-12-02 17:34 ` [PATCH 51/51] drm/amd/display: Extend DMCUB offload testing into dcn20/21 sunpeng.li
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:34 UTC (permalink / raw)
  To: amd-gfx
  Cc: Chris Park, Leo Li, harry.wentland, rodrigo.siqueira,
	Dmytro Laktyushkin, bhawanpreet.lakha

From: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>

In preparation for further changes.

Signed-off-by: Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
Reviewed-by: Chris Park <Chris.Park@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c     | 2 ++
 drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h | 3 +++
 drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c     | 2 +-
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
index f853af413582..5e0f0e679899 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
@@ -1967,6 +1967,7 @@ int dcn20_populate_dml_pipes_from_context(
 			pipes[pipe_cnt].pipe.src.viewport_height = timing->v_addressable;
 			if (pipes[pipe_cnt].pipe.src.viewport_height > 1080)
 				pipes[pipe_cnt].pipe.src.viewport_height = 1080;
+			pipes[pipe_cnt].pipe.src.surface_height_y = pipes[pipe_cnt].pipe.src.viewport_height;
 			pipes[pipe_cnt].pipe.src.data_pitch = ((pipes[pipe_cnt].pipe.src.viewport_width + 63) / 64) * 64; /* linear sw only */
 			pipes[pipe_cnt].pipe.src.source_format = dm_444_32;
 			pipes[pipe_cnt].pipe.dest.recout_width = pipes[pipe_cnt].pipe.src.viewport_width; /*vp_width/hratio*/
@@ -2000,6 +2001,7 @@ int dcn20_populate_dml_pipes_from_context(
 			pipes[pipe_cnt].pipe.src.viewport_width_c = scl->viewport_c.width;
 			pipes[pipe_cnt].pipe.src.viewport_height = scl->viewport.height;
 			pipes[pipe_cnt].pipe.src.viewport_height_c = scl->viewport_c.height;
+			pipes[pipe_cnt].pipe.src.surface_height_y = pln->plane_size.surface_size.height;
 			if (pln->format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) {
 				pipes[pipe_cnt].pipe.src.data_pitch = pln->plane_size.surface_pitch;
 				pipes[pipe_cnt].pipe.src.data_pitch_c = pln->plane_size.chroma_pitch;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
index 516396d53d01..220d5e610f1f 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
@@ -99,6 +99,7 @@ struct _vcs_dpi_soc_bounding_box_st {
 	unsigned int num_chans;
 	unsigned int vmm_page_size_bytes;
 	unsigned int hostvm_min_page_size_bytes;
+	unsigned int gpuvm_min_page_size_bytes;
 	double dram_clock_change_latency_us;
 	double dummy_pstate_latency_us;
 	double writeback_dram_clock_change_latency_us;
@@ -224,6 +225,7 @@ struct _vcs_dpi_display_pipe_source_params_st {
 	int source_scan;
 	int sw_mode;
 	int macro_tile_size;
+	unsigned int surface_height_y;
 	unsigned int viewport_width;
 	unsigned int viewport_height;
 	unsigned int viewport_y_y;
@@ -400,6 +402,7 @@ struct _vcs_dpi_display_rq_misc_params_st {
 struct _vcs_dpi_display_rq_params_st {
 	unsigned char yuv420;
 	unsigned char yuv420_10bpc;
+	unsigned char rgbe_alpha;
 	display_rq_misc_params_st misc;
 	display_rq_sizing_params_st sizing;
 	display_rq_dlg_params_st dlg;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
index b1c2b79e42b6..15b72a8b5174 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
@@ -231,7 +231,7 @@ static void fetch_socbb_params(struct display_mode_lib *mode_lib)
 	mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading = soc->dcn_downspread_percent;   // new
 	mode_lib->vba.DISPCLKDPPCLKVCOSpeed = soc->dispclk_dppclk_vco_speed_mhz;   // new
 	mode_lib->vba.VMMPageSize = soc->vmm_page_size_bytes;
-	mode_lib->vba.GPUVMMinPageSize = soc->vmm_page_size_bytes / 1024;
+	mode_lib->vba.GPUVMMinPageSize = soc->gpuvm_min_page_size_bytes / 1024;
 	mode_lib->vba.HostVMMinPageSize = soc->hostvm_min_page_size_bytes / 1024;
 	// Set the voltage scaling clocks as the defaults. Most of these will
 	// be set to different values by the test
-- 
2.24.0


* [PATCH 50/51] drm/amd/display: correct log message for lttpr
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (48 preceding siblings ...)
  2019-12-02 17:34 ` [PATCH 49/51] drm/amd/display: update dml related structs sunpeng.li
@ 2019-12-02 17:34 ` sunpeng.li
  2019-12-02 17:34 ` [PATCH 51/51] drm/amd/display: Extend DMCUB offload testing into dcn20/21 sunpeng.li
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:34 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, harry.wentland, rodrigo.siqueira, abdoulaye berthe,
	George Shen, bhawanpreet.lakha

From: abdoulaye berthe <abdoulaye.berthe@amd.com>

[Why]
When setting the LTTPR mode, the mode being set is not logged properly.

[How]
Update the log message to show the right mode.

Signed-off-by: abdoulaye berthe <abdoulaye.berthe@amd.com>
Reviewed-by: George Shen <George.Shen@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index dfcd6421ee01..42aa889fd0f5 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -1219,7 +1219,7 @@ static void configure_lttpr_mode(struct dc_link *link)
 	uint8_t repeater_id;
 	uint8_t repeater_mode = DP_PHY_REPEATER_MODE_TRANSPARENT;
 
-	DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Non Transparent Mode\n", __func__);
+	DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Transparent Mode\n", __func__);
 	core_link_write_dpcd(link,
 			DP_PHY_REPEATER_MODE,
 			(uint8_t *)&repeater_mode,
@@ -1227,7 +1227,7 @@ static void configure_lttpr_mode(struct dc_link *link)
 
 	if (!link->is_lttpr_mode_transparent) {
 
-		DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Transparent Mode\n", __func__);
+		DC_LOG_HW_LINK_TRAINING("%s\n Set LTTPR to Non Transparent Mode\n", __func__);
 
 		repeater_mode = DP_PHY_REPEATER_MODE_NON_TRANSPARENT;
 		core_link_write_dpcd(link,
-- 
2.24.0


* [PATCH 51/51] drm/amd/display: Extend DMCUB offload testing into dcn20/21
  2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
                   ` (49 preceding siblings ...)
  2019-12-02 17:34 ` [PATCH 50/51] drm/amd/display: correct log message for lttpr sunpeng.li
@ 2019-12-02 17:34 ` sunpeng.li
  50 siblings, 0 replies; 52+ messages in thread
From: sunpeng.li @ 2019-12-02 17:34 UTC (permalink / raw)
  To: amd-gfx
  Cc: Leo Li, Tony Cheng, rodrigo.siqueira, harry.wentland,
	bhawanpreet.lakha, Nicholas Kazlauskas

From: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>

[Why]
To quickly validate whether DMCUB is running and accepting commands for
offload testing, we want to intercept a common sequence as part of
modeset programming.

[How]
OTG enable causes the most impact in terms of golden register changes,
and it is a single register write.

This approach was previously implemented in the dcn10 code when it was
shared with dcn20, but it wasn't ported over to the dcn20 code.

Port the start, execute, and wait sequence over into dcn20_optc.

Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng@amd.com>
Acked-by: Leo Li <sunpeng.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
index f5854a5d2b76..673c83e2afd4 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
@@ -59,11 +59,16 @@ bool optc2_enable_crtc(struct timing_generator *optc)
 	REG_UPDATE(CONTROL,
 			VTG0_ENABLE, 1);
 
+	REG_SEQ_START();
+
 	/* Enable CRTC */
 	REG_UPDATE_2(OTG_CONTROL,
 			OTG_DISABLE_POINT_CNTL, 3,
 			OTG_MASTER_EN, 1);
 
+	REG_SEQ_SUBMIT();
+	REG_SEQ_WAIT_DONE();
+
 	return true;
 }
 
-- 
2.24.0
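
The general pattern being exercised is: collect one or more register writes into a firmware command buffer, submit the buffer to the firmware, then wait for it to finish executing. The sketch below only illustrates that flow; every name in it is a hypothetical stand-in, not DMUB or DC code:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

struct fake_cmd_buf {
	uint32_t reg[8];
	uint32_t val[8];
	int count;
	bool busy;
};

static void seq_start(struct fake_cmd_buf *b)	{ b->count = 0; }

static void seq_queue_write(struct fake_cmd_buf *b, uint32_t reg, uint32_t val)
{
	b->reg[b->count] = reg;		/* capture instead of writing directly */
	b->val[b->count] = val;
	b->count++;
}

static void seq_submit(struct fake_cmd_buf *b)	{ b->busy = true; }

static void seq_wait_done(struct fake_cmd_buf *b)
{
	/* A real implementation would poll a firmware status register;
	 * this stand-in "firmware" completes immediately. */
	while (b->busy)
		b->busy = false;
}

int main(void)
{
	struct fake_cmd_buf buf = { { 0 } };

	seq_start(&buf);
	seq_queue_write(&buf, 0x1a00 /* made-up register offset */, 0x1);
	seq_submit(&buf);
	seq_wait_done(&buf);

	printf("executed %d offloaded register write(s)\n", buf.count);
	return 0;
}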


end of thread, other threads:[~2019-12-02 17:36 UTC | newest]

Thread overview: 52+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-02 17:33 [PATCH 00/51] DC Patches - 2 Dec 2019 sunpeng.li
2019-12-02 17:33 ` [PATCH 01/51] drm/amd/display: update sr and pstate latencies for Renoir sunpeng.li
2019-12-02 17:33 ` [PATCH 02/51] drm/amd/display: rename core_dc to dc sunpeng.li
2019-12-02 17:33 ` [PATCH 03/51] drm/amd/display: add separate of private hwss functions sunpeng.li
2019-12-02 17:33 ` [PATCH 04/51] drm/amd/display: Fix Dali clk mgr construct sunpeng.li
2019-12-02 17:33 ` [PATCH 05/51] drm/amd/display: Map DSC resources 1-to-1 if numbers of OPPs and DSCs are equal sunpeng.li
2019-12-02 17:33 ` [PATCH 06/51] drm/amd/display: fix DalDramClockChangeLatencyNs override sunpeng.li
2019-12-02 17:33 ` [PATCH 07/51] drm/amd/display: Wrong ifdef guards were used around DML validation sunpeng.li
2019-12-02 17:33 ` [PATCH 08/51] drm/amd/display: Reset PHY in link re-training sunpeng.li
2019-12-02 17:33 ` [PATCH 09/51] drm/amd/display: Disable link before reenable sunpeng.li
2019-12-02 17:33 ` [PATCH 10/51] drm/amd/display: Add DMCUB__PG_DONE trace code enum sunpeng.li
2019-12-02 17:33 ` [PATCH 11/51] drm/amd/display: Only wait for DMUB phy init on dcn21 sunpeng.li
2019-12-02 17:33 ` [PATCH 12/51] drm/amd/display: Return DMUB_STATUS_OK when autoload unsupported sunpeng.li
2019-12-02 17:33 ` [PATCH 13/51] drm/amd/display: Program CW5 for tracebuffer for dcn20 sunpeng.li
2019-12-02 17:33 ` [PATCH 14/51] drm/amd/display: populate bios integrated info for renoir sunpeng.li
2019-12-02 17:33 ` [PATCH 15/51] drm/amd/display: Fixed kernel panic when booting with DP-to-HDMI dongle sunpeng.li
2019-12-02 17:33 ` [PATCH 16/51] drm/amd/display: have two different sr and pstate latency tables for renoir sunpeng.li
2019-12-02 17:33 ` [PATCH 17/51] drm/amd/display: fix dprefclk and ss percentage reading on RN sunpeng.li
2019-12-02 17:33 ` [PATCH 18/51] drm/amd/display: 3.2.61 sunpeng.li
2019-12-02 17:33 ` [PATCH 19/51] drm/amd/display: Change the delay time before enabling FEC sunpeng.li
2019-12-02 17:33 ` [PATCH 20/51] drm/amd/display: fixed that I2C over AUX didn't read data issue sunpeng.li
2019-12-02 17:33 ` [PATCH 21/51] drm/amd/display: add log for lttpr sunpeng.li
2019-12-02 17:33 ` [PATCH 22/51] drm/amd/display: Disable chroma viewport w/a when rotated 180 degrees sunpeng.li
2019-12-02 17:33 ` [PATCH 23/51] drm/amd/display: fix dml20 min_dst_y_next_start calculation sunpeng.li
2019-12-02 17:33 ` [PATCH 24/51] drm/amd/display: Reset steer fifo before unblanking the stream sunpeng.li
2019-12-02 17:33 ` [PATCH 25/51] drm/amd/display: Implement DePQ for DCN1 sunpeng.li
2019-12-02 17:33 ` [PATCH 26/51] drm/amd/display: update p-state latency for renoir when using lpddr4 sunpeng.li
2019-12-02 17:33 ` [PATCH 27/51] drm/amd/display: add DP protocol version sunpeng.li
2019-12-02 17:33 ` [PATCH 28/51] drm/amd/display: Save/restore link setting for disable phy when link retraining sunpeng.li
2019-12-02 17:33 ` [PATCH 29/51] drm/amd/display: Return a correct error value sunpeng.li
2019-12-02 17:33 ` [PATCH 30/51] drm/amd/display: Split DMUB cmd type into type/subtype sunpeng.li
2019-12-02 17:33 ` [PATCH 31/51] drm/amd/display: Add shared DMCUB/driver firmware state cache window sunpeng.li
2019-12-02 17:33 ` [PATCH 32/51] drm/amd/display: update sr latency for renoir when using lpddr4 sunpeng.li
2019-12-02 17:33 ` [PATCH 33/51] drm/amd/display: Remove flag check in mpcc update sunpeng.li
2019-12-02 17:33 ` [PATCH 34/51] drm/amd/display: check for repeater when setting aux_rd_interval sunpeng.li
2019-12-02 17:33 ` [PATCH 35/51] drm/amd/display: Modify logic for when to wait for mpcc idle sunpeng.li
2019-12-02 17:33 ` [PATCH 36/51] drm/amd/display: Remove redundant call sunpeng.li
2019-12-02 17:33 ` [PATCH 37/51] drm/amd/display: add dc dsc functions to return bpp range for pixel encoding sunpeng.li
2019-12-02 17:33 ` [PATCH 38/51] drm/amd/display: remove spam DSC log sunpeng.li
2019-12-02 17:33 ` [PATCH 39/51] drm/amd/display: add dsc policy getter sunpeng.li
2019-12-02 17:33 ` [PATCH 40/51] drm/amd/display: Limit NV12 chroma workaround sunpeng.li
2019-12-02 17:33 ` [PATCH 41/51] drm/amd/display: fix cursor positioning for multiplane cases sunpeng.li
2019-12-02 17:33 ` [PATCH 42/51] drm/amd/display: Fix screen tearing on vrr tests sunpeng.li
2019-12-02 17:33 ` [PATCH 43/51] drm/amd/display: update dispclk and dppclk vco frequency sunpeng.li
2019-12-02 17:33 ` [PATCH 44/51] drm/amd/display: Implement DePQ for DCN2 sunpeng.li
2019-12-02 17:33 ` [PATCH 45/51] drm/amd/display: 3.2.62 sunpeng.li
2019-12-02 17:34 ` [PATCH 46/51] drm/amd/display: Change HDR_MULT check sunpeng.li
2019-12-02 17:34 ` [PATCH 47/51] drm/amd/display: Increase the number of retries after AUX DEFER sunpeng.li
2019-12-02 17:34 ` [PATCH 48/51] drm/amd/display: Compare clock state member to determine optimization sunpeng.li
2019-12-02 17:34 ` [PATCH 49/51] drm/amd/display: update dml related structs sunpeng.li
2019-12-02 17:34 ` [PATCH 50/51] drm/amd/display: correct log message for lttpr sunpeng.li
2019-12-02 17:34 ` [PATCH 51/51] drm/amd/display: Extend DMCUB offload testing into dcn20/21 sunpeng.li
