* [PATCH 00/43] Add support for DCN 3.2
@ 2022-05-25 16:18 Alex Deucher
  2022-05-25 16:19 ` [PATCH 01/43] drm/amd/display: remove stale config guards Alex Deucher
                   ` (41 more replies)
  0 siblings, 42 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:18 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher

These patches add support for DCN (Display Core Next) version
3.2.x.  Patch 4 adds the new register headers; it is too large for
the mailing list and so is not included in this posting.

Alvin Lee (4):
  drm/amd/display: Add missing instance for clock source register
  drm/amd/display: Implement WM table transfer for DCN32/DCN321
  drm/amd/display: Remove W/A for ODM memory pins
  drm/amd/display: Implement DTBCLK ref switching on dcn32

Aurabindo Pillai (17):
  drm/amd/display: remove stale config guards
  drm/amd: Add atomfirmware.h definitions needed for DCN32/321
  drm/amd/display: Add DCN32/321 version identifiers
  drm/amd: add register headers for DCN32/321
  drm/amd/display: Add DMCUB source files and changes for DCN32/321
  drm/amd/display: add dcn32 IRQ changes
  drm/amd/display: add GPIO changes for DCN32/321
  drm/amd/display: DML changes for DCN32/321
  drm/amd/display: add CLKMGR changes for DCN32/321
  drm/amd/display: add DCN32/321 specific files for Display Core
  drm/amd/display: Add dependant changes for DCN32/321
  drm/amd/display: Add DM support for DCN32/DCN321
  drm/amd/display: add DCN32 to IP discovery table
  drm/amd: Add GFX11 modifiers support to AMDGPU
  drm/amd/display: add missing interrupt handlers for DCN32/DCN321
  drm/amd/display: disable idle optimizations
  drm/amd/display: update disp pattern generator routine for DCN30

Chaitanya Dhere (1):
  drm/amd/display: FCLK P-state support updates

Charlene Liu (1):
  drm/amd/display: use updated clock source init routine

Dillon Varone (9):
  drm/amd/display: Fix USBC link creation
  drm/amd/display: Add guard for FCLK pstate message to PMFW for DCN321
  drm/amd/display: Various DML fixes to enable higher timings
  drm/amd/display: Select correct DTO source
  drm/amd/display: Ensure that DMCUB fw in use is loaded by DC and not
    VBIOS
  drm/amd/display: Add additional guard for FCLK pstate message for
    DCN321
  drm/amd/display: set dram speed for all states
  drm/amd/display: Disable DTB Ref Clock Switching in dcn32
  drm/amd/display: change dsc image width cap for dcn32 and dcn321

Duncan Ma (1):
  drm/amd/display: Add ODM seamless boot support

Eric Bernstein (1):
  drm/amd/display: Use DTBCLK for valid pixel clock

Fangzhi Zuo (1):
  drm/amd/display: Halve DTB Clock Value for DCN32

Jingwen Zhu (1):
  drm/amd/display: set link fec status during init for DCN32

Jun Lei (2):
  drm/amd/display: add new pixel rate programming
  drm/amd/display: Introduce new update_clocks logic

Martin Leung (1):
  drm/amd/display: cleaning up smu_if to add future flexibility

Rodrigo Siqueira (1):
  drm/amd/display: Drop unnecessary guard from DC resource

Samson Tam (3):
  drm/amd/display: do not override CURSOR_REQ_MODE when SubVP is not
    enabled
  drm/amd/display: Match dprefclk with clk registers
  drm/amd/display: Updates for OTG and DCCG clocks

 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c |      2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |     40 +-
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |     97 +-
 drivers/gpu/drm/amd/display/dc/Makefile       |      2 +
 .../drm/amd/display/dc/bios/bios_parser2.c    |    950 +-
 .../dc/bios/bios_parser_types_internal2.h     |      1 +
 .../drm/amd/display/dc/bios/command_table.c   |      4 +-
 .../display/dc/bios/command_table_helper2.c   |      2 +
 .../gpu/drm/amd/display/dc/clk_mgr/Makefile   |     35 +
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |     17 +-
 .../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c  |     15 +-
 .../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.h  |     60 +
 .../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c  |      8 +-
 .../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h  |      2 +
 .../dc/clk_mgr/dcn315/dcn315_clk_mgr.c        |      7 +-
 .../dc/clk_mgr/dcn316/dcn316_clk_mgr.c        |      3 +-
 .../drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h |     65 +
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c  |    725 +
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h  |     39 +
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c  |    134 +
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h  |     46 +
 .../dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h  |     63 +
 .../dc/clk_mgr/dcn32/smu13_driver_if.h        |    108 +
 drivers/gpu/drm/amd/display/dc/core/dc.c      |     14 +-
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |      3 +
 .../gpu/drm/amd/display/dc/core/dc_resource.c |     27 +-
 drivers/gpu/drm/amd/display/dc/dc.h           |     24 +
 .../gpu/drm/amd/display/dc/dc_bios_types.h    |      5 +
 drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c  |     31 +
 drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h  |      2 +
 drivers/gpu/drm/amd/display/dc/dc_hw_types.h  |      1 +
 drivers/gpu/drm/amd/display/dc/dc_stream.h    |     21 +
 drivers/gpu/drm/amd/display/dc/dce/dce_abm.h  |     45 +
 .../drm/amd/display/dc/dce/dce_clock_source.c |     27 +
 .../drm/amd/display/dc/dce/dce_clock_source.h |     15 +
 .../display/dc/dce110/dce110_hw_sequencer.c   |      6 +
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.h   |     33 +
 .../amd/display/dc/dcn10/dcn10_link_encoder.h |      6 +
 .../gpu/drm/amd/display/dc/dcn10/dcn10_optc.h |      5 +
 .../display/dc/dcn10/dcn10_stream_encoder.h   |     32 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h |     34 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h |     25 +-
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    |     46 +-
 .../dc/dcn30/dcn30_dio_stream_encoder.c       |     16 +-
 .../dc/dcn30/dcn30_dio_stream_encoder.h       |     35 +
 .../gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c  |      8 +-
 .../gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h  |     16 +
 .../drm/amd/display/dc/dcn30/dcn30_hwseq.c    |     33 +-
 .../gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c  |     14 +-
 .../gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h  |    147 +
 .../gpu/drm/amd/display/dc/dcn30/dcn30_optc.h |      9 +
 .../gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c |    106 +-
 .../gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h |     16 +-
 .../gpu/drm/amd/display/dc/dcn31/dcn31_optc.c |      1 +
 .../gpu/drm/amd/display/dc/dcn31/dcn31_optc.h |      6 +-
 drivers/gpu/drm/amd/display/dc/dcn32/Makefile |     37 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c |    303 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h |    159 +
 .../display/dc/dcn32/dcn32_dio_link_encoder.c |    294 +
 .../display/dc/dcn32/dcn32_dio_link_encoder.h |     60 +
 .../dc/dcn32/dcn32_dio_stream_encoder.c       |    461 +
 .../dc/dcn32/dcn32_dio_stream_encoder.h       |    266 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c  |    164 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h  |     38 +
 .../dc/dcn32/dcn32_hpo_dp_link_encoder.c      |     90 +
 .../dc/dcn32/dcn32_hpo_dp_link_encoder.h      |     63 +
 .../drm/amd/display/dc/dcn32/dcn32_hubbub.c   |    964 +
 .../drm/amd/display/dc/dcn32/dcn32_hubbub.h   |    172 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c |    148 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h |     69 +
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.c    |    958 +
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.h    |     66 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_init.c |    156 +
 .../dcn32_init.h}                             |     20 +-
 .../drm/amd/display/dc/dcn32/dcn32_mmhubbub.c |    239 +
 .../drm/amd/display/dc/dcn32/dcn32_mmhubbub.h |    225 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c  |    810 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h  |    213 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_optc.c |    268 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_optc.h |    254 +
 .../drm/amd/display/dc/dcn32/dcn32_resource.c |   4002 +
 .../drm/amd/display/dc/dcn32/dcn32_resource.h |     88 +
 .../gpu/drm/amd/display/dc/dcn321/Makefile    |     34 +
 .../dc/dcn321/dcn321_dio_link_encoder.c       |    199 +
 .../dc/dcn321/dcn321_dio_link_encoder.h       |     42 +
 .../amd/display/dc/dcn321/dcn321_resource.c   |   2335 +
 .../dcn321_resource.h}                        |     46 +-
 drivers/gpu/drm/amd/display/dc/dml/Makefile   |      7 +
 .../drm/amd/display/dc/dml/dcn20/dcn20_fpu.c  |     36 +-
 .../dc/dml/dcn30/display_mode_vba_30.c        |      8 +-
 .../dc/dml/dcn31/display_mode_vba_31.c        |      2 +-
 .../dc/dml/dcn32/display_mode_vba_32.c        |   3835 +
 .../dc/dml/dcn32/display_mode_vba_32.h        |     57 +
 .../dc/dml/dcn32/display_mode_vba_util_32.c   |   6254 +
 .../dc/dml/dcn32/display_mode_vba_util_32.h   |   1175 +
 .../dc/dml/dcn32/display_rq_dlg_calc_32.c     |    616 +
 .../dc/dml/dcn32/display_rq_dlg_calc_32.h     |     70 +
 .../amd/display/dc/dml/display_mode_enums.h   |     88 +-
 .../drm/amd/display/dc/dml/display_mode_lib.c |     12 +
 .../drm/amd/display/dc/dml/display_mode_lib.h |     15 +
 .../amd/display/dc/dml/display_mode_structs.h |    132 +
 .../drm/amd/display/dc/dml/display_mode_vba.c |    171 +
 .../drm/amd/display/dc/dml/display_mode_vba.h |    242 +-
 .../gpu/drm/amd/display/dc/dml/dml_wrapper.c  |     73 +-
 drivers/gpu/drm/amd/display/dc/gpio/Makefile  |      8 +-
 .../display/dc/gpio/dcn32/hw_factory_dcn32.c  |    255 +
 .../hw_factory_dcn32.h}                       |     13 +-
 .../dc/gpio/dcn32/hw_translate_dcn32.c        |    349 +
 .../hw_translate_dcn32.h}                     |     11 +-
 .../gpu/drm/amd/display/dc/gpio/hw_factory.c  |     16 +-
 .../drm/amd/display/dc/gpio/hw_translate.c    |     13 +-
 .../gpu/drm/amd/display/dc/inc/core_types.h   |     10 +
 .../gpu/drm/amd/display/dc/inc/hw/clk_mgr.h   |      3 +
 .../amd/display/dc/inc/hw/clk_mgr_internal.h  |     45 +-
 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h  |     44 +-
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |      3 +
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  |      7 +
 .../drm/amd/display/dc/inc/hw/link_encoder.h  |      2 +
 .../gpu/drm/amd/display/dc/inc/hw/mem_input.h |      2 +
 .../amd/display/dc/inc/hw/stream_encoder.h    |      7 +
 .../amd/display/dc/inc/hw/timing_generator.h  |      8 +
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |      2 +
 .../amd/display/dc/inc/hw_sequencer_private.h |      8 +
 drivers/gpu/drm/amd/display/dc/inc/resource.h |      7 +
 drivers/gpu/drm/amd/display/dc/irq/Makefile   |     10 +-
 .../display/dc/irq/dcn32/irq_service_dcn32.c  |    432 +
 .../display/dc/irq/dcn32/irq_service_dcn32.h  |     35 +
 .../amd/display/dc/link/link_hwss_hpo_dp.c    |     19 +-
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h   |      8 +
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |     26 +-
 drivers/gpu/drm/amd/display/dmub/src/Makefile |      1 +
 .../gpu/drm/amd/display/dmub/src/dmub_dcn32.c |    493 +
 .../gpu/drm/amd/display/dmub/src/dmub_dcn32.h |    256 +
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |     51 +-
 .../amd/display/include/bios_parser_types.h   |     11 +
 .../gpu/drm/amd/display/include/dal_asic_id.h |      8 +
 .../gpu/drm/amd/display/include/dal_types.h   |      2 +
 .../include/asic_reg/dcn/dcn_3_2_0_offset.h   |  14675 +
 .../include/asic_reg/dcn/dcn_3_2_0_sh_mask.h  | 222890 +++++++++++++++
 .../include/asic_reg/dcn/dcn_3_2_1_offset.h   |  14559 +
 .../include/asic_reg/dcn/dcn_3_2_1_sh_mask.h  |  56578 ++++
 drivers/gpu/drm/amd/include/atomfirmware.h    |    209 +-
 include/uapi/drm/drm_fourcc.h                 |      2 +
 143 files changed, 339836 insertions(+), 512 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/smu13_driver_if.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/Makefile
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
 rename drivers/gpu/drm/amd/display/dc/{gpio/diagnostics/hw_translate_diag.c => dcn32/dcn32_init.h} (74%)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/Makefile
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
 rename drivers/gpu/drm/amd/display/dc/{gpio/diagnostics/hw_factory_diag.c => dcn321/dcn321_resource.h} (53%)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
 rename drivers/gpu/drm/amd/display/dc/gpio/{diagnostics/hw_factory_diag.h => dcn32/hw_factory_dcn32.h} (81%)
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.c
 rename drivers/gpu/drm/amd/display/dc/gpio/{diagnostics/hw_translate_diag.h => dcn32/hw_translate_dcn32.h} (82%)
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.h
 create mode 100644 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
 create mode 100644 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_3_2_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_3_2_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_3_2_1_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_3_2_1_sh_mask.h

-- 
2.35.3

* [PATCH 01/43] drm/amd/display: remove stale config guards
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 02/43] drm/amd: Add atomfirmware.h definitions needed for DCN32/321 Alex Deucher
                   ` (40 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c | 2 --
 drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c               | 2 --
 2 files changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
index a2ade6e93f5e..ff8f60f38b21 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
@@ -41,9 +41,7 @@
 
 #include "dc_dmub_srv.h"
 
-#if defined (CONFIG_DRM_AMD_DC_DP2_0)
 #include "dc_link_dp.h"
-#endif
 
 #define TO_CLK_MGR_DCN315(clk_mgr)\
 	container_of(clk_mgr, struct clk_mgr_dcn315, base)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
index 789f7562cdc7..d2273674e872 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
@@ -1284,10 +1284,8 @@ static bool is_dtbclk_required(struct dc *dc, struct dc_state *context)
 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
 		if (!context->res_ctx.pipe_ctx[i].stream)
 			continue;
-#if defined (CONFIG_DRM_AMD_DC_DP2_0)
 		if (is_dp_128b_132b_signal(&context->res_ctx.pipe_ctx[i]))
 			return true;
-#endif
 	}
 	return false;
 }
-- 
2.35.3

* [PATCH 02/43] drm/amd: Add atomfirmware.h definitions needed for DCN32/321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
  2022-05-25 16:19 ` [PATCH 01/43] drm/amd/display: remove stale config guards Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 03/43] drm/amd/display: Add DCN32/321 version identifiers Alex Deucher
                   ` (39 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add new structures for DCN 3.2.x.
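
As a rough illustration (not part of this patch), a consumer of the new
v1_5 object table would typically walk display_path[] using the
number_of_path count, along these lines (handle_path() is a made-up
callback used only for the sketch):

/* Hypothetical sketch of parsing display_object_info_table_v1_5. */
static void walk_display_paths_v1_5(struct display_object_info_table_v1_5 *tbl)
{
	uint8_t i;

	for (i = 0; i < tbl->number_of_path; i++) {
		struct atom_display_object_path_v3 *path = &tbl->display_path[i];

		/* display_objid identifies the connector, encoderobjid the
		 * encoder closest to it; device_tag carries the supported
		 * device vector for this path.
		 */
		handle_path(path->display_objid, path->encoderobjid,
			    path->device_tag);
	}
}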

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/include/atomfirmware.h | 209 ++++++++++++++++++---
 1 file changed, 187 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h
index ae8f6d299ed9..ff855cb21d3f 100644
--- a/drivers/gpu/drm/amd/include/atomfirmware.h
+++ b/drivers/gpu/drm/amd/include/atomfirmware.h
@@ -726,18 +726,20 @@ struct vram_usagebyfirmware_v2_1
   ***************************************************************************
 */
 
-enum atom_object_record_type_id 
-{
-  ATOM_I2C_RECORD_TYPE =1,
-  ATOM_HPD_INT_RECORD_TYPE =2,
-  ATOM_OBJECT_GPIO_CNTL_RECORD_TYPE =9,
-  ATOM_CONNECTOR_HPDPIN_LUT_RECORD_TYPE =16,
-  ATOM_CONNECTOR_AUXDDC_LUT_RECORD_TYPE =17,
-  ATOM_ENCODER_CAP_RECORD_TYPE=20,
-  ATOM_BRACKET_LAYOUT_RECORD_TYPE=21,
-  ATOM_CONNECTOR_FORCED_TMDS_CAP_RECORD_TYPE=22,
-  ATOM_DISP_CONNECTOR_CAPS_RECORD_TYPE=23,
-  ATOM_RECORD_END_TYPE  =0xFF,
+enum atom_object_record_type_id {
+	ATOM_I2C_RECORD_TYPE = 1,
+	ATOM_HPD_INT_RECORD_TYPE = 2,
+	ATOM_CONNECTOR_CAP_RECORD_TYPE = 3,
+	ATOM_CONNECTOR_SPEED_UPTO = 4,
+	ATOM_OBJECT_GPIO_CNTL_RECORD_TYPE = 9,
+	ATOM_CONNECTOR_HPDPIN_LUT_RECORD_TYPE = 16,
+	ATOM_CONNECTOR_AUXDDC_LUT_RECORD_TYPE = 17,
+	ATOM_ENCODER_CAP_RECORD_TYPE = 20,
+	ATOM_BRACKET_LAYOUT_RECORD_TYPE = 21,
+	ATOM_CONNECTOR_FORCED_TMDS_CAP_RECORD_TYPE = 22,
+	ATOM_DISP_CONNECTOR_CAPS_RECORD_TYPE = 23,
+	ATOM_BRACKET_LAYOUT_V2_RECORD_TYPE = 25,
+	ATOM_RECORD_END_TYPE = 0xFF,
 };
 
 struct atom_common_record_header
@@ -760,6 +762,19 @@ struct atom_hpd_int_record
   uint8_t  plugin_pin_state;
 };
 
+struct atom_connector_caps_record {
+	struct atom_common_record_header
+		record_header; //record_type = ATOM_CONNECTOR_CAP_RECORD_TYPE
+	uint16_t connector_caps; //01b if internal display is checked; 10b if internal BL is checked; 0 if not
+};
+
+struct atom_connector_speed_record {
+	struct atom_common_record_header
+		record_header; //record_type = ATOM_CONNECTOR_SPEED_UPTO
+	uint32_t connector_max_speed; // connector max speed attribute; e.g. 8100 (in MHz) for a DP connector running at 8.1 Gbps
+	uint16_t reserved;
+};
+
 // Bit maps for ATOM_ENCODER_CAP_RECORD.usEncoderCap
 enum atom_encoder_caps_def
 {
@@ -885,6 +900,21 @@ struct  atom_bracket_layout_record
   uint8_t reserved;
   struct atom_connector_layout_info  conn_info[1];
 };
+struct atom_bracket_layout_record_v2 {
+	struct atom_common_record_header
+		record_header; //record_type = ATOM_BRACKET_LAYOUT_V2_RECORD_TYPE
+	uint8_t bracketlen; //Bracket Length in mm
+	uint8_t bracketwidth; //Bracket Width in mm
+	uint8_t conn_num; //Connector numbering
+	uint8_t mini_type; //Mini Type (0 = Normal; 1 = Mini)
+	uint8_t reserved1;
+	uint8_t reserved2;
+};
+
+enum atom_connector_layout_info_mini_type_def {
+	MINI_TYPE_NORMAL = 0,
+	MINI_TYPE_MINI = 1,
+};
 
 enum atom_display_device_tag_def{
   ATOM_DISPLAY_LCD1_SUPPORT            = 0x0002, //an embedded display is either an LVDS or eDP signal type of display
@@ -911,6 +941,19 @@ struct atom_display_object_path_v2
   uint8_t  reserved;
 };
 
+struct atom_display_object_path_v3 {
+	uint16_t display_objid; //Connector Object ID or Misc Object ID
+	uint16_t disp_recordoffset;
+	uint16_t encoderobjid; //first encoder closest to the connector, could be either an external or internal encoder
+	uint16_t reserved1; //only used in the USB-C case, otherwise always = 0
+	uint16_t reserved2; //reserved and always = 0
+	uint16_t reserved3; //reserved and always = 0
+	//supported device vector; each display path starts with this. The paths are enumerated
+	//in priority order, with the highest-priority path appearing first
+	uint16_t device_tag;
+	uint16_t reserved4; //reserved and always = 0
+};
+
 struct display_object_info_table_v1_4
 {
   struct    atom_common_table_header  table_header;
@@ -920,6 +963,15 @@ struct display_object_info_table_v1_4
   struct    atom_display_object_path_v2 display_path[8];   //the real number of this included in the structure is calculated by using the (whole structure size - the header size- number_of_path)/size of atom_display_object_path
 };
 
+struct display_object_info_table_v1_5 {
+	struct atom_common_table_header table_header;
+	uint16_t supporteddevices;
+	uint8_t number_of_path;
+	uint8_t reserved;
+	// the real number of this included in the structure is calculated by using the
+	// (whole structure size - the header size- number_of_path)/size of atom_display_object_path
+	struct atom_display_object_path_v3 display_path[8];
+};
 
 /* 
   ***************************************************************************
@@ -1080,17 +1132,73 @@ struct atom_dc_golden_table_v1
 	uint32_t reserved[23];
 };
 
-enum dce_info_caps_def
+enum dce_info_caps_def {
+	// only for VBIOS
+	DCE_INFO_CAPS_FORCE_DISPDEV_CONNECTED = 0x02,
+	// only for VBIOS
+	DCE_INFO_CAPS_DISABLE_DFP_DP_HBR2 = 0x04,
+	// only for VBIOS
+	DCE_INFO_CAPS_ENABLE_INTERLAC_TIMING = 0x08,
+	// only for VBIOS
+	DCE_INFO_CAPS_LTTPR_SUPPORT_ENABLE = 0x20,
+	DCE_INFO_CAPS_VBIOS_LTTPR_TRANSPARENT_ENABLE = 0x40,
+};
+
+struct atom_display_controller_info_v4_5
 {
-  // only for VBIOS
-  DCE_INFO_CAPS_FORCE_DISPDEV_CONNECTED  =0x02,      
-  // only for VBIOS
-  DCE_INFO_CAPS_DISABLE_DFP_DP_HBR2      =0x04,
-  // only for VBIOS
-  DCE_INFO_CAPS_ENABLE_INTERLAC_TIMING   =0x08,
-  // only for VBIOS
-  DCE_INFO_CAPS_LTTPR_SUPPORT_ENABLE	 =0x20,
-  DCE_INFO_CAPS_VBIOS_LTTPR_TRANSPARENT_ENABLE = 0x40,
+  struct  atom_common_table_header  table_header;
+  uint32_t display_caps;
+  uint32_t bootup_dispclk_10khz;
+  uint16_t dce_refclk_10khz;
+  uint16_t i2c_engine_refclk_10khz;
+  uint16_t dvi_ss_percentage;       // in unit of 0.001%
+  uint16_t dvi_ss_rate_10hz;
+  uint16_t hdmi_ss_percentage;      // in unit of 0.001%
+  uint16_t hdmi_ss_rate_10hz;
+  uint16_t dp_ss_percentage;        // in unit of 0.001%
+  uint16_t dp_ss_rate_10hz;
+  uint8_t  dvi_ss_mode;             // enum of atom_spread_spectrum_mode
+  uint8_t  hdmi_ss_mode;            // enum of atom_spread_spectrum_mode
+  uint8_t  dp_ss_mode;              // enum of atom_spread_spectrum_mode
+  uint8_t  ss_reserved;
+  // DFP hardcode mode number defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  dfp_hardcode_mode_num;
+  // DFP hardcode mode refreshrate defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  dfp_hardcode_refreshrate;
+  // VGA hardcode mode number defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  vga_hardcode_mode_num;
+  // VGA hardcode mode refresh rate defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  vga_hardcode_refreshrate;
+  uint16_t dpphy_refclk_10khz;
+  uint16_t hw_chip_id;
+  uint8_t  dcnip_min_ver;
+  uint8_t  dcnip_max_ver;
+  uint8_t  max_disp_pipe_num;
+  uint8_t  max_vbios_active_disp_pipe_num;
+  uint8_t  max_ppll_num;
+  uint8_t  max_disp_phy_num;
+  uint8_t  max_aux_pairs;
+  uint8_t  remotedisplayconfig;
+  uint32_t dispclk_pll_vco_freq;
+  uint32_t dp_ref_clk_freq;
+  // Worst case blackout duration for a memory clock frequency (p-state) change, units of 100s of ns (0.1 us)
+  uint32_t max_mclk_chg_lat;
+  // Worst case memory self refresh exit time, units of 100s of ns (0.1 us)
+  uint32_t max_sr_exit_lat;
+  // Worst case memory self refresh entry followed by immediate exit time, units of 100s of ns (0.1 us)
+  uint32_t max_sr_enter_exit_lat;
+  uint16_t dc_golden_table_offset;  // offset to struct atom_dc_golden_table_vxx
+  uint16_t dc_golden_table_ver;
+  uint32_t aux_dphy_rx_control0_val;
+  uint32_t aux_dphy_tx_control_val;
+  uint32_t aux_dphy_rx_control1_val;
+  uint32_t dc_gpio_aux_ctrl_0_val;
+  uint32_t dc_gpio_aux_ctrl_1_val;
+  uint32_t dc_gpio_aux_ctrl_2_val;
+  uint32_t dc_gpio_aux_ctrl_3_val;
+  uint32_t dc_gpio_aux_ctrl_4_val;
+  uint32_t dc_gpio_aux_ctrl_5_val;
+  uint32_t reserved[26];
 };
 
 /* 
@@ -1806,6 +1914,63 @@ struct atom_smu_info_v3_3 {
   uint32_t reserved;
 };
 
+struct atom_smu_info_v3_5
+{
+  struct   atom_common_table_header  table_header;
+  uint8_t  smuip_min_ver;
+  uint8_t  smuip_max_ver;
+  uint8_t  waflclk_ss_mode;
+  uint8_t  gpuclk_ss_mode;
+  uint16_t sclk_ss_percentage;
+  uint16_t sclk_ss_rate_10hz;
+  uint16_t gpuclk_ss_percentage;    // in unit of 0.001%
+  uint16_t gpuclk_ss_rate_10hz;
+  uint32_t core_refclk_10khz;
+  uint32_t syspll0_1_vco_freq_10khz;
+  uint32_t syspll0_2_vco_freq_10khz;
+  uint8_t  pcc_gpio_bit;            // GPIO bit shift in SMU_GPIOPAD_A configured for PCC, =0xff means invalid
+  uint8_t  pcc_gpio_polarity;       // GPIO polarity for CTF
+  uint16_t smugoldenoffset;
+  uint32_t syspll0_0_vco_freq_10khz;
+  uint32_t bootup_smnclk_10khz;
+  uint32_t bootup_socclk_10khz;
+  uint32_t bootup_mp0clk_10khz;
+  uint32_t bootup_mp1clk_10khz;
+  uint32_t bootup_lclk_10khz;
+  uint32_t bootup_dcefclk_10khz;
+  uint32_t ctf_threshold_override_value;
+  uint32_t syspll3_0_vco_freq_10khz;
+  uint32_t syspll3_1_vco_freq_10khz;
+  uint32_t bootup_fclk_10khz;
+  uint32_t bootup_waflclk_10khz;
+  uint32_t smu_info_caps;
+  uint16_t waflclk_ss_percentage;    // in unit of 0.001%
+  uint16_t smuinitoffset;
+  uint32_t bootup_dprefclk_10khz;
+  uint32_t bootup_usbclk_10khz;
+  uint32_t smb_slave_address;
+  uint32_t cg_fdo_ctrl0_val;
+  uint32_t cg_fdo_ctrl1_val;
+  uint32_t cg_fdo_ctrl2_val;
+  uint32_t gdfll_as_wait_ctrl_val;
+  uint32_t gdfll_as_step_ctrl_val;
+  uint32_t bootup_dtbclk_10khz;
+  uint32_t fclk_syspll_refclk_10khz;
+  uint32_t smusvi_svc0_val;
+  uint32_t smusvi_svc1_val;
+  uint32_t smusvi_svd0_val;
+  uint32_t smusvi_svd1_val;
+  uint32_t smusvi_svt0_val;
+  uint32_t smusvi_svt1_val;
+  uint32_t cg_tach_ctrl_val;
+  uint32_t cg_pump_ctrl1_val;
+  uint32_t cg_pump_tach_ctrl_val;
+  uint32_t thm_ctf_delay_val;
+  uint32_t thm_thermal_int_ctrl_val;
+  uint32_t thm_tmon_config_val;
+  uint32_t reserved[16];
+};
+
 struct atom_smu_info_v3_6
 {
 	struct   atom_common_table_header  table_header;
-- 
2.35.3

* [PATCH 03/43] drm/amd/display: Add DCN32/321 version identifiers
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
  2022-05-25 16:19 ` [PATCH 01/43] drm/amd/display: remove stale config guards Alex Deucher
  2022-05-25 16:19 ` [PATCH 02/43] drm/amd: Add atomfirmware.h definitions needed for DCN32/321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 05/43] drm/amd/display: Add DMCUB source files and changes for DCN32/321 Alex Deucher
                   ` (38 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add DCN3.2 asic identifiers.
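
For illustration only (the wiring into DC itself is presumably done by the
later DM/resource patches in this series), the new family/revision macros
and DCN version enums are meant to be consumed roughly like this
(DCE_VERSION_UNKNOWN is assumed to already exist in dal_types.h):

/* Hypothetical sketch of resolving a dce_version for GC 11.0.x parts. */
static enum dce_version resolve_dcn32_dce_version(uint32_t family, uint32_t rev)
{
	if (family == AMDGPU_FAMILY_GC_11_0_0) {
		if (ASICREV_IS_GC_11_0_0(rev))
			return DCN_VERSION_3_2;
		if (ASICREV_IS_GC_11_0_2(rev))
			return DCN_VERSION_3_21;
	}
	return DCE_VERSION_UNKNOWN;
}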

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h       | 2 ++
 drivers/gpu/drm/amd/display/include/dal_asic_id.h | 8 ++++++++
 drivers/gpu/drm/amd/display/include/dal_types.h   | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
index f5cb8932bd5c..7dbc9fb55459 100644
--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
@@ -100,6 +100,8 @@ enum dmub_asic {
 	DMUB_ASIC_DCN31B,
 	DMUB_ASIC_DCN315,
 	DMUB_ASIC_DCN316,
+	DMUB_ASIC_DCN32,
+	DMUB_ASIC_DCN321,
 	DMUB_ASIC_MAX,
 };
 
diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
index 310f8779db67..11391eead954 100644
--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
@@ -247,6 +247,14 @@ enum {
 
 #define ASICREV_IS_GC_10_3_7(eChipRev) ((eChipRev >= GC_10_3_7_A0) && (eChipRev < GC_10_3_7_UNKNOWN))
 
+#define AMDGPU_FAMILY_GC_11_0_0 145
+#define GC_11_0_0_A0 0x1
+#define GC_11_0_2_A0 0x10
+#define GC_11_UNKNOWN 0xFF
+
+#define ASICREV_IS_GC_11_0_0(eChipRev) (eChipRev < GC_11_0_2_A0)
+#define ASICREV_IS_GC_11_0_2(eChipRev) (eChipRev >= GC_11_0_2_A0 && eChipRev < GC_11_UNKNOWN)
+
 /*
  * ASIC chip ID
  */
diff --git a/drivers/gpu/drm/amd/display/include/dal_types.h b/drivers/gpu/drm/amd/display/include/dal_types.h
index bf9085fc5105..775c640fc820 100644
--- a/drivers/gpu/drm/amd/display/include/dal_types.h
+++ b/drivers/gpu/drm/amd/display/include/dal_types.h
@@ -59,6 +59,8 @@ enum dce_version {
 	DCN_VERSION_3_1,
 	DCN_VERSION_3_15,
 	DCN_VERSION_3_16,
+	DCN_VERSION_3_2,
+	DCN_VERSION_3_21,
 	DCN_VERSION_MAX
 };
 
-- 
2.35.3

* [PATCH 05/43] drm/amd/display: Add DMCUB source files and changes for DCN32/321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (2 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 03/43] drm/amd/display: Add DCN32/321 version identifiers Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 06/43] drm/amd/display: add dcn32 IRQ changes Alex Deucher
                   ` (37 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

DMCUB is the display engine microcontroller, which aids in mode setting
and other display-related features.
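
As a hedged usage sketch (hypothetical call site, not part of this patch),
the new feature-caps query added to dc_dmub_srv.c below would be issued
once the DMCUB service is brought up, for example:

/* Illustrative only: dc->ctx->dmub_srv is assumed to hold the DC-side
 * dmub service wrapper with a valid dmub_srv pointer.
 */
static void example_query_dmub_caps(struct dc *dc)
{
	struct dc_dmub_srv *dc_dmub = dc->ctx->dmub_srv;

	if (!dc_dmub || !dc_dmub->dmub)
		return;

	/* Fills dmub->feature_caps on success (see dc_dmub_srv_query_caps_cmd
	 * in the hunk below).
	 */
	dc_dmub_srv_query_caps_cmd(dc_dmub->dmub);
}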

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c  |  31 ++
 drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h  |   2 +
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h   |   6 +
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  26 +-
 drivers/gpu/drm/amd/display/dmub/src/Makefile |   1 +
 .../gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 488 ++++++++++++++++++
 .../gpu/drm/amd/display/dmub/src/dmub_dcn32.h | 256 +++++++++
 .../gpu/drm/amd/display/dmub/src/dmub_srv.c   |  51 +-
 8 files changed, 859 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
 create mode 100644 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.h

diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
index 541376fabbef..11597bca966a 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
@@ -250,6 +250,37 @@ void dc_dmub_trace_event_control(struct dc *dc, bool enable)
 	dm_helpers_dmub_outbox_interrupt_control(dc->ctx, enable);
 }
 
+void dc_dmub_srv_query_caps_cmd(struct dmub_srv *dmub)
+{
+	union dmub_rb_cmd cmd = { 0 };
+	enum dmub_status status;
+
+	if (!dmub) {
+		return;
+	}
+
+	memset(&cmd, 0, sizeof(cmd));
+
+	/* Prepare fw command */
+	cmd.query_feature_caps.header.type = DMUB_CMD__QUERY_FEATURE_CAPS;
+	cmd.query_feature_caps.header.sub_type = 0;
+	cmd.query_feature_caps.header.ret_status = 1;
+	cmd.query_feature_caps.header.payload_bytes = sizeof(struct dmub_cmd_query_feature_caps_data);
+
+	/* Send command to fw */
+	status = dmub_srv_cmd_with_reply_data(dmub, &cmd);
+
+	ASSERT(status == DMUB_STATUS_OK);
+
+	/* If command was processed, copy feature caps to dmub srv */
+	if (status == DMUB_STATUS_OK &&
+	    cmd.query_feature_caps.header.ret_status == 0) {
+		memcpy(&dmub->feature_caps,
+		       &cmd.query_feature_caps.query_feature_caps_data,
+		       sizeof(struct dmub_feature_caps));
+	}
+}
+
 bool dc_dmub_srv_get_diagnostic_data(struct dc_dmub_srv *dc_dmub_srv, struct dmub_diagnostic_data *diag_data)
 {
 	if (!dc_dmub_srv || !dc_dmub_srv->dmub || !diag_data)
diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
index 7e4e2ec5915d..50e44b53f14c 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.h
@@ -68,6 +68,8 @@ bool dc_dmub_srv_get_dmub_outbox0_msg(const struct dc *dc, struct dmcub_trace_bu
 
 void dc_dmub_trace_event_control(struct dc *dc, bool enable);
 
+void dc_dmub_srv_query_caps_cmd(struct dmub_srv *dmub);
+
 void dc_dmub_srv_clear_inbox0_ack(struct dc_dmub_srv *dmub_srv);
 void dc_dmub_srv_wait_for_inbox0_ack(struct dc_dmub_srv *dmub_srv);
 void dc_dmub_srv_send_inbox0_cmd(struct dc_dmub_srv *dmub_srv, union dmub_inbox0_data_register data);
diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
index 7dbc9fb55459..04049dc5d9df 100644
--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
@@ -246,6 +246,7 @@ struct dmub_srv_hw_params {
 	bool dpia_supported;
 	bool disable_dpia;
 	bool usb4_cm_version;
+	bool fw_in_system_memory;
 };
 
 /**
@@ -310,6 +311,9 @@ struct dmub_srv_hw_funcs {
 			      const struct dmub_window *cw0,
 			      const struct dmub_window *cw1);
 
+	void (*backdoor_load_zfb_mode)(struct dmub_srv *dmub,
+			      const struct dmub_window *cw0,
+			      const struct dmub_window *cw1);
 	void (*setup_windows)(struct dmub_srv *dmub,
 			      const struct dmub_window *cw2,
 			      const struct dmub_window *cw3,
@@ -365,6 +369,7 @@ struct dmub_srv_hw_funcs {
 
 	uint32_t (*get_gpint_dataout)(struct dmub_srv *dmub);
 
+	void (*configure_dmub_in_system_memory)(struct dmub_srv *dmub);
 	void (*clear_inbox0_ack_register)(struct dmub_srv *dmub);
 	uint32_t (*read_inbox0_ack_register)(struct dmub_srv *dmub);
 	void (*send_inbox0_cmd)(struct dmub_srv *dmub, union dmub_inbox0_data_register data);
@@ -412,6 +417,7 @@ struct dmub_srv {
 	/* private: internal use only */
 	const struct dmub_srv_common_regs *regs;
 	const struct dmub_srv_dcn31_regs *regs_dcn31;
+	const struct dmub_srv_dcn32_regs *regs_dcn32;
 
 	struct dmub_srv_base_funcs funcs;
 	struct dmub_srv_hw_funcs hw_funcs;
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
index 385c28238beb..8fc1846f0708 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
@@ -641,7 +641,10 @@ enum dmub_cmd_type {
 	 * Command type used for all panel control commands.
 	 */
 	DMUB_CMD__PANEL_CNTL = 74,
-
+	/**
+	 * Command type used for <TODO:description>
+	 */
+	DMUB_CMD__CAB_FOR_SS = 75,
 	/**
 	 * Command type used for interfacing with DPIA.
 	 */
@@ -878,6 +881,23 @@ struct dmub_rb_cmd_mall {
 	uint8_t reserved2; /**< Reserved bits */
 };
 
+/**
+ * enum dmub_cmd_cab_type - TODO:
+ */
+enum dmub_cmd_cab_type {
+	DMUB_CMD__CAB_NO_IDLE_OPTIMIZATION = 0,
+	DMUB_CMD__CAB_NO_DCN_REQ = 1,
+	DMUB_CMD__CAB_DCN_SS_FIT_IN_CAB = 2,
+};
+
+/**
+ * struct dmub_rb_cmd_cab_for_ss - TODO:
+ */
+struct dmub_rb_cmd_cab_for_ss {
+	struct dmub_cmd_header header;
+	uint8_t cab_alloc_ways; /* total number of ways */
+	uint8_t debug_bits;     /* debug bits */
+};
 /**
  * enum dmub_cmd_idle_opt_type - Idle optimization command type.
  */
@@ -2693,6 +2713,10 @@ union dmub_rb_cmd {
 	 * Definition of a DMUB_CMD__MALL command.
 	 */
 	struct dmub_rb_cmd_mall mall;
+	/**
+	 * Definition of a DMUB_CMD__CAB command.
+	 */
+	struct dmub_rb_cmd_cab_for_ss cab;
 	/**
 	 * Definition of a DMUB_CMD__IDLE_OPT_DCN_RESTORE command.
 	 */
diff --git a/drivers/gpu/drm/amd/display/dmub/src/Makefile b/drivers/gpu/drm/amd/display/dmub/src/Makefile
index 856c7f48de7a..0589ad4778ee 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/Makefile
+++ b/drivers/gpu/drm/amd/display/dmub/src/Makefile
@@ -23,6 +23,7 @@
 DMUB = dmub_srv.o dmub_srv_stat.o dmub_reg.o dmub_dcn20.o dmub_dcn21.o
 DMUB += dmub_dcn30.o dmub_dcn301.o dmub_dcn302.o dmub_dcn303.o
 DMUB += dmub_dcn31.o dmub_dcn315.o dmub_dcn316.o
+DMUB += dmub_dcn32.o
 
 AMD_DAL_DMUB = $(addprefix $(AMDDALPATH)/dmub/src/,$(DMUB))
 
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
new file mode 100644
index 000000000000..0a498082ccc6
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
@@ -0,0 +1,488 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "../dmub_srv.h"
+#include "dmub_reg.h"
+#include "dmub_dcn32.h"
+
+#include "dcn/dcn_3_2_0_offset.h"
+#include "dcn/dcn_3_2_0_sh_mask.h"
+
+#define DCN_BASE__INST0_SEG2                       0x000034C0
+
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG##seg
+#define CTX dmub
+#define REGS dmub->regs_dcn32
+#define REG_OFFSET_EXP(reg_name) (BASE(reg##reg_name##_BASE_IDX) + reg##reg_name)
+
+const struct dmub_srv_dcn32_regs dmub_srv_dcn32_regs = {
+#define DMUB_SR(reg) REG_OFFSET_EXP(reg),
+		{ DMUB_DCN32_REGS() },
+#undef DMUB_SR
+
+#define DMUB_SF(reg, field) FD_MASK(reg, field),
+		{ DMUB_DCN32_FIELDS() },
+#undef DMUB_SF
+
+#define DMUB_SF(reg, field) FD_SHIFT(reg, field),
+		{ DMUB_DCN32_FIELDS() },
+#undef DMUB_SF
+};
+
+static void dmub_dcn32_get_fb_base_offset(struct dmub_srv *dmub,
+		uint64_t *fb_base,
+		uint64_t *fb_offset)
+{
+	uint32_t tmp;
+
+	if (dmub->fb_base || dmub->fb_offset) {
+		*fb_base = dmub->fb_base;
+		*fb_offset = dmub->fb_offset;
+		return;
+	}
+
+	REG_GET(DCN_VM_FB_LOCATION_BASE, FB_BASE, &tmp);
+	*fb_base = (uint64_t)tmp << 24;
+
+	REG_GET(DCN_VM_FB_OFFSET, FB_OFFSET, &tmp);
+	*fb_offset = (uint64_t)tmp << 24;
+}
+
+static inline void dmub_dcn32_translate_addr(const union dmub_addr *addr_in,
+		uint64_t fb_base,
+		uint64_t fb_offset,
+		union dmub_addr *addr_out)
+{
+	addr_out->quad_part = addr_in->quad_part - fb_base + fb_offset;
+}
+
+void dmub_dcn32_reset(struct dmub_srv *dmub)
+{
+	union dmub_gpint_data_register cmd;
+	const uint32_t timeout = 30;
+	uint32_t in_reset, scratch, i;
+
+	REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &in_reset);
+
+	if (in_reset == 0) {
+		cmd.bits.status = 1;
+		cmd.bits.command_code = DMUB_GPINT__STOP_FW;
+		cmd.bits.param = 0;
+
+		dmub->hw_funcs.set_gpint(dmub, cmd);
+
+		/**
+		 * Timeout covers both the ACK and the wait
+		 * for remaining work to finish.
+		 *
+		 * This is mostly bound by the PHY disable sequence.
+		 * Each register check will be greater than 1us, so
+		 * don't bother using udelay.
+		 */
+
+		for (i = 0; i < timeout; ++i) {
+			if (dmub->hw_funcs.is_gpint_acked(dmub, cmd))
+				break;
+		}
+
+		for (i = 0; i < timeout; ++i) {
+			scratch = dmub->hw_funcs.get_gpint_response(dmub);
+			if (scratch == DMUB_GPINT__STOP_FW_RESPONSE)
+				break;
+		}
+
+		/* Clear the GPINT command manually so we don't reset again. */
+		cmd.all = 0;
+		dmub->hw_funcs.set_gpint(dmub, cmd);
+
+		/* Force reset in case we timed out, DMCUB is likely hung. */
+	}
+
+	REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 1);
+	REG_UPDATE(DMCUB_CNTL, DMCUB_ENABLE, 0);
+	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 1);
+	REG_WRITE(DMCUB_INBOX1_RPTR, 0);
+	REG_WRITE(DMCUB_INBOX1_WPTR, 0);
+	REG_WRITE(DMCUB_OUTBOX1_RPTR, 0);
+	REG_WRITE(DMCUB_OUTBOX1_WPTR, 0);
+	REG_WRITE(DMCUB_SCRATCH0, 0);
+}
+
+void dmub_dcn32_reset_release(struct dmub_srv *dmub)
+{
+	REG_WRITE(DMCUB_GPINT_DATAIN1, 0);
+	REG_UPDATE(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET, 0);
+	REG_WRITE(DMCUB_SCRATCH15, dmub->psp_version & 0x001100FF);
+	REG_UPDATE_2(DMCUB_CNTL, DMCUB_ENABLE, 1, DMCUB_TRACEPORT_EN, 1);
+	REG_UPDATE(DMCUB_CNTL2, DMCUB_SOFT_RESET, 0);
+}
+
+void dmub_dcn32_backdoor_load(struct dmub_srv *dmub,
+		const struct dmub_window *cw0,
+		const struct dmub_window *cw1)
+{
+	union dmub_addr offset;
+	uint64_t fb_base, fb_offset;
+
+	dmub_dcn32_get_fb_base_offset(dmub, &fb_base, &fb_offset);
+
+	REG_UPDATE(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 1);
+
+	dmub_dcn32_translate_addr(&cw0->offset, fb_base, fb_offset, &offset);
+
+	REG_WRITE(DMCUB_REGION3_CW0_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW0_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW0_BASE_ADDRESS, cw0->region.base);
+	REG_SET_2(DMCUB_REGION3_CW0_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW0_TOP_ADDRESS, cw0->region.top,
+			DMCUB_REGION3_CW0_ENABLE, 1);
+
+	dmub_dcn32_translate_addr(&cw1->offset, fb_base, fb_offset, &offset);
+
+	REG_WRITE(DMCUB_REGION3_CW1_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW1_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW1_BASE_ADDRESS, cw1->region.base);
+	REG_SET_2(DMCUB_REGION3_CW1_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW1_TOP_ADDRESS, cw1->region.top,
+			DMCUB_REGION3_CW1_ENABLE, 1);
+
+	REG_UPDATE_2(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 0, DMCUB_MEM_UNIT_ID,
+			0x20);
+}
+
+void dmub_dcn32_backdoor_load_zfb_mode(struct dmub_srv *dmub,
+		      const struct dmub_window *cw0,
+		      const struct dmub_window *cw1)
+{
+	union dmub_addr offset;
+
+	REG_UPDATE(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 1);
+
+	offset = cw0->offset;
+
+	REG_WRITE(DMCUB_REGION3_CW0_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW0_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW0_BASE_ADDRESS, cw0->region.base);
+	REG_SET_2(DMCUB_REGION3_CW0_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW0_TOP_ADDRESS, cw0->region.top,
+			DMCUB_REGION3_CW0_ENABLE, 1);
+
+	offset = cw1->offset;
+
+	REG_WRITE(DMCUB_REGION3_CW1_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW1_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW1_BASE_ADDRESS, cw1->region.base);
+	REG_SET_2(DMCUB_REGION3_CW1_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW1_TOP_ADDRESS, cw1->region.top,
+			DMCUB_REGION3_CW1_ENABLE, 1);
+
+	REG_UPDATE_2(DMCUB_SEC_CNTL, DMCUB_SEC_RESET, 0, DMCUB_MEM_UNIT_ID,
+			0x20);
+}
+
+void dmub_dcn32_setup_windows(struct dmub_srv *dmub,
+		const struct dmub_window *cw2,
+		const struct dmub_window *cw3,
+		const struct dmub_window *cw4,
+		const struct dmub_window *cw5,
+		const struct dmub_window *cw6)
+{
+	union dmub_addr offset;
+
+	offset = cw3->offset;
+
+	REG_WRITE(DMCUB_REGION3_CW3_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW3_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW3_BASE_ADDRESS, cw3->region.base);
+	REG_SET_2(DMCUB_REGION3_CW3_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW3_TOP_ADDRESS, cw3->region.top,
+			DMCUB_REGION3_CW3_ENABLE, 1);
+
+	offset = cw4->offset;
+
+	REG_WRITE(DMCUB_REGION3_CW4_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW4_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW4_BASE_ADDRESS, cw4->region.base);
+	REG_SET_2(DMCUB_REGION3_CW4_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW4_TOP_ADDRESS, cw4->region.top,
+			DMCUB_REGION3_CW4_ENABLE, 1);
+
+	offset = cw5->offset;
+
+	REG_WRITE(DMCUB_REGION3_CW5_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW5_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW5_BASE_ADDRESS, cw5->region.base);
+	REG_SET_2(DMCUB_REGION3_CW5_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW5_TOP_ADDRESS, cw5->region.top,
+			DMCUB_REGION3_CW5_ENABLE, 1);
+
+	REG_WRITE(DMCUB_REGION5_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION5_OFFSET_HIGH, offset.u.high_part);
+	REG_SET_2(DMCUB_REGION5_TOP_ADDRESS, 0,
+			DMCUB_REGION5_TOP_ADDRESS,
+			cw5->region.top - cw5->region.base - 1,
+			DMCUB_REGION5_ENABLE, 1);
+
+	offset = cw6->offset;
+
+	REG_WRITE(DMCUB_REGION3_CW6_OFFSET, offset.u.low_part);
+	REG_WRITE(DMCUB_REGION3_CW6_OFFSET_HIGH, offset.u.high_part);
+	REG_WRITE(DMCUB_REGION3_CW6_BASE_ADDRESS, cw6->region.base);
+	REG_SET_2(DMCUB_REGION3_CW6_TOP_ADDRESS, 0,
+			DMCUB_REGION3_CW6_TOP_ADDRESS, cw6->region.top,
+			DMCUB_REGION3_CW6_ENABLE, 1);
+}
+
+void dmub_dcn32_setup_mailbox(struct dmub_srv *dmub,
+		const struct dmub_region *inbox1)
+{
+	REG_WRITE(DMCUB_INBOX1_BASE_ADDRESS, inbox1->base);
+	REG_WRITE(DMCUB_INBOX1_SIZE, inbox1->top - inbox1->base);
+}
+
+uint32_t dmub_dcn32_get_inbox1_rptr(struct dmub_srv *dmub)
+{
+	return REG_READ(DMCUB_INBOX1_RPTR);
+}
+
+void dmub_dcn32_set_inbox1_wptr(struct dmub_srv *dmub, uint32_t wptr_offset)
+{
+	REG_WRITE(DMCUB_INBOX1_WPTR, wptr_offset);
+}
+
+void dmub_dcn32_setup_out_mailbox(struct dmub_srv *dmub,
+		const struct dmub_region *outbox1)
+{
+	REG_WRITE(DMCUB_OUTBOX1_BASE_ADDRESS, outbox1->base);
+	REG_WRITE(DMCUB_OUTBOX1_SIZE, outbox1->top - outbox1->base);
+}
+
+uint32_t dmub_dcn32_get_outbox1_wptr(struct dmub_srv *dmub)
+{
+	/**
+	 * outbox1 wptr register is accessed without locks (dal & dc)
+	 * and to be called only by dmub_srv_stat_get_notification()
+	 */
+	return REG_READ(DMCUB_OUTBOX1_WPTR);
+}
+
+void dmub_dcn32_set_outbox1_rptr(struct dmub_srv *dmub, uint32_t rptr_offset)
+{
+	/**
+	 * outbox1 rptr register is accessed without locks (dal & dc)
+	 * and to be called only by dmub_srv_stat_get_notification()
+	 */
+	REG_WRITE(DMCUB_OUTBOX1_RPTR, rptr_offset);
+}
+
+bool dmub_dcn32_is_hw_init(struct dmub_srv *dmub)
+{
+	uint32_t is_hw_init;
+
+	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_hw_init);
+
+	return is_hw_init != 0;
+}
+
+bool dmub_dcn32_is_supported(struct dmub_srv *dmub)
+{
+	uint32_t supported = 0;
+
+	REG_GET(CC_DC_PIPE_DIS, DC_DMCUB_ENABLE, &supported);
+
+	return supported;
+}
+
+void dmub_dcn32_set_gpint(struct dmub_srv *dmub,
+		union dmub_gpint_data_register reg)
+{
+	REG_WRITE(DMCUB_GPINT_DATAIN1, reg.all);
+}
+
+bool dmub_dcn32_is_gpint_acked(struct dmub_srv *dmub,
+		union dmub_gpint_data_register reg)
+{
+	union dmub_gpint_data_register test;
+
+	reg.bits.status = 0;
+	test.all = REG_READ(DMCUB_GPINT_DATAIN1);
+
+	return test.all == reg.all;
+}
+
+uint32_t dmub_dcn32_get_gpint_response(struct dmub_srv *dmub)
+{
+	return REG_READ(DMCUB_SCRATCH7);
+}
+
+uint32_t dmub_dcn32_get_gpint_dataout(struct dmub_srv *dmub)
+{
+	uint32_t dataout = REG_READ(DMCUB_GPINT_DATAOUT);
+
+	REG_UPDATE(DMCUB_INTERRUPT_ENABLE, DMCUB_GPINT_IH_INT_EN, 0);
+
+	REG_WRITE(DMCUB_GPINT_DATAOUT, 0);
+	REG_UPDATE(DMCUB_INTERRUPT_ACK, DMCUB_GPINT_IH_INT_ACK, 1);
+	REG_UPDATE(DMCUB_INTERRUPT_ACK, DMCUB_GPINT_IH_INT_ACK, 0);
+
+	REG_UPDATE(DMCUB_INTERRUPT_ENABLE, DMCUB_GPINT_IH_INT_EN, 1);
+
+	return dataout;
+}
+
+union dmub_fw_boot_status dmub_dcn32_get_fw_boot_status(struct dmub_srv *dmub)
+{
+	union dmub_fw_boot_status status;
+
+	status.all = REG_READ(DMCUB_SCRATCH0);
+	return status;
+}
+
+void dmub_dcn32_enable_dmub_boot_options(struct dmub_srv *dmub, const struct dmub_srv_hw_params *params)
+{
+	union dmub_fw_boot_options boot_options = {0};
+
+	boot_options.bits.z10_disable = params->disable_z10;
+
+	REG_WRITE(DMCUB_SCRATCH14, boot_options.all);
+}
+
+void dmub_dcn32_skip_dmub_panel_power_sequence(struct dmub_srv *dmub, bool skip)
+{
+	union dmub_fw_boot_options boot_options;
+	boot_options.all = REG_READ(DMCUB_SCRATCH14);
+	boot_options.bits.skip_phy_init_panel_sequence = skip;
+	REG_WRITE(DMCUB_SCRATCH14, boot_options.all);
+}
+
+void dmub_dcn32_setup_outbox0(struct dmub_srv *dmub,
+		const struct dmub_region *outbox0)
+{
+	REG_WRITE(DMCUB_OUTBOX0_BASE_ADDRESS, outbox0->base);
+
+	REG_WRITE(DMCUB_OUTBOX0_SIZE, outbox0->top - outbox0->base);
+}
+
+uint32_t dmub_dcn32_get_outbox0_wptr(struct dmub_srv *dmub)
+{
+	return REG_READ(DMCUB_OUTBOX0_WPTR);
+}
+
+void dmub_dcn32_set_outbox0_rptr(struct dmub_srv *dmub, uint32_t rptr_offset)
+{
+	REG_WRITE(DMCUB_OUTBOX0_RPTR, rptr_offset);
+}
+
+uint32_t dmub_dcn32_get_current_time(struct dmub_srv *dmub)
+{
+	return REG_READ(DMCUB_TIMER_CURRENT);
+}
+
+void dmub_dcn32_get_diagnostic_data(struct dmub_srv *dmub, struct dmub_diagnostic_data *diag_data)
+{
+	uint32_t is_dmub_enabled, is_soft_reset, is_sec_reset;
+	uint32_t is_traceport_enabled, is_cw0_enabled, is_cw6_enabled;
+
+	if (!dmub || !diag_data)
+		return;
+
+	memset(diag_data, 0, sizeof(*diag_data));
+
+	diag_data->dmcub_version = dmub->fw_version;
+
+	diag_data->scratch[0] = REG_READ(DMCUB_SCRATCH0);
+	diag_data->scratch[1] = REG_READ(DMCUB_SCRATCH1);
+	diag_data->scratch[2] = REG_READ(DMCUB_SCRATCH2);
+	diag_data->scratch[3] = REG_READ(DMCUB_SCRATCH3);
+	diag_data->scratch[4] = REG_READ(DMCUB_SCRATCH4);
+	diag_data->scratch[5] = REG_READ(DMCUB_SCRATCH5);
+	diag_data->scratch[6] = REG_READ(DMCUB_SCRATCH6);
+	diag_data->scratch[7] = REG_READ(DMCUB_SCRATCH7);
+	diag_data->scratch[8] = REG_READ(DMCUB_SCRATCH8);
+	diag_data->scratch[9] = REG_READ(DMCUB_SCRATCH9);
+	diag_data->scratch[10] = REG_READ(DMCUB_SCRATCH10);
+	diag_data->scratch[11] = REG_READ(DMCUB_SCRATCH11);
+	diag_data->scratch[12] = REG_READ(DMCUB_SCRATCH12);
+	diag_data->scratch[13] = REG_READ(DMCUB_SCRATCH13);
+	diag_data->scratch[14] = REG_READ(DMCUB_SCRATCH14);
+	diag_data->scratch[15] = REG_READ(DMCUB_SCRATCH15);
+
+	diag_data->undefined_address_fault_addr = REG_READ(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR);
+	diag_data->inst_fetch_fault_addr = REG_READ(DMCUB_INST_FETCH_FAULT_ADDR);
+	diag_data->data_write_fault_addr = REG_READ(DMCUB_DATA_WRITE_FAULT_ADDR);
+
+	diag_data->inbox1_rptr = REG_READ(DMCUB_INBOX1_RPTR);
+	diag_data->inbox1_wptr = REG_READ(DMCUB_INBOX1_WPTR);
+	diag_data->inbox1_size = REG_READ(DMCUB_INBOX1_SIZE);
+
+	diag_data->inbox0_rptr = REG_READ(DMCUB_INBOX0_RPTR);
+	diag_data->inbox0_wptr = REG_READ(DMCUB_INBOX0_WPTR);
+	diag_data->inbox0_size = REG_READ(DMCUB_INBOX0_SIZE);
+
+	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_dmub_enabled);
+	diag_data->is_dmcub_enabled = is_dmub_enabled;
+
+	REG_GET(DMCUB_CNTL2, DMCUB_SOFT_RESET, &is_soft_reset);
+	diag_data->is_dmcub_soft_reset = is_soft_reset;
+
+	REG_GET(DMCUB_SEC_CNTL, DMCUB_SEC_RESET_STATUS, &is_sec_reset);
+	diag_data->is_dmcub_secure_reset = is_sec_reset;
+
+	REG_GET(DMCUB_CNTL, DMCUB_TRACEPORT_EN, &is_traceport_enabled);
+	diag_data->is_traceport_en  = is_traceport_enabled;
+
+	REG_GET(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE, &is_cw0_enabled);
+	diag_data->is_cw0_enabled = is_cw0_enabled;
+
+	REG_GET(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE, &is_cw6_enabled);
+	diag_data->is_cw6_enabled = is_cw6_enabled;
+}
+void dmub_dcn32_configure_dmub_in_system_memory(struct dmub_srv *dmub)
+{
+	/* DMCUB_REGION3_TMR_AXI_SPACE values:
+	 * 0b011 (0x3) - FB physical address
+	 * 0b100 (0x4) - GPU virtual address
+	 *
+	 * Default value is 0x3 (FB Physical address for TMR). When programming
+	 * DMUB to be in system memory, change to 0x4. The system memory allocated
+	 * is accessible by both GPU and CPU, so we use GPU virtual address.
+	 */
+	REG_WRITE(DMCUB_REGION3_TMR_AXI_SPACE, 0x4);
+}
+
+void dmub_dcn32_send_inbox0_cmd(struct dmub_srv *dmub, union dmub_inbox0_data_register data)
+{
+	REG_WRITE(DMCUB_INBOX0_WPTR, data.inbox0_cmd_common.all);
+}
+
+void dmub_dcn32_clear_inbox0_ack_register(struct dmub_srv *dmub)
+{
+	REG_WRITE(DMCUB_SCRATCH17, 0);
+}
+
+uint32_t dmub_dcn32_read_inbox0_ack_register(struct dmub_srv *dmub)
+{
+	return REG_READ(DMCUB_SCRATCH17);
+}
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.h b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.h
new file mode 100644
index 000000000000..7d1a6eb4d665
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.h
@@ -0,0 +1,256 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef _DMUB_DCN32_H_
+#define _DMUB_DCN32_H_
+
+#include "dmub_dcn31.h"
+
+struct dmub_srv;
+
+/* DCN32 register definitions. */
+
+#define DMUB_DCN32_REGS() \
+	DMUB_SR(DMCUB_CNTL) \
+	DMUB_SR(DMCUB_CNTL2) \
+	DMUB_SR(DMCUB_SEC_CNTL) \
+	DMUB_SR(DMCUB_INBOX0_SIZE) \
+	DMUB_SR(DMCUB_INBOX0_RPTR) \
+	DMUB_SR(DMCUB_INBOX0_WPTR) \
+	DMUB_SR(DMCUB_INBOX1_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_INBOX1_SIZE) \
+	DMUB_SR(DMCUB_INBOX1_RPTR) \
+	DMUB_SR(DMCUB_INBOX1_WPTR) \
+	DMUB_SR(DMCUB_OUTBOX0_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_OUTBOX0_SIZE) \
+	DMUB_SR(DMCUB_OUTBOX0_RPTR) \
+	DMUB_SR(DMCUB_OUTBOX0_WPTR) \
+	DMUB_SR(DMCUB_OUTBOX1_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_OUTBOX1_SIZE) \
+	DMUB_SR(DMCUB_OUTBOX1_RPTR) \
+	DMUB_SR(DMCUB_OUTBOX1_WPTR) \
+	DMUB_SR(DMCUB_REGION3_CW0_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW1_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW2_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW3_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW4_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW5_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW6_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW7_OFFSET) \
+	DMUB_SR(DMCUB_REGION3_CW0_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW1_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW2_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW3_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW4_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW5_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW6_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW7_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION3_CW0_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW1_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW2_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW3_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW4_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW5_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW6_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW7_BASE_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW0_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW1_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW2_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW3_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW4_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW5_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW6_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION3_CW7_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION4_OFFSET) \
+	DMUB_SR(DMCUB_REGION4_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION4_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_REGION5_OFFSET) \
+	DMUB_SR(DMCUB_REGION5_OFFSET_HIGH) \
+	DMUB_SR(DMCUB_REGION5_TOP_ADDRESS) \
+	DMUB_SR(DMCUB_SCRATCH0) \
+	DMUB_SR(DMCUB_SCRATCH1) \
+	DMUB_SR(DMCUB_SCRATCH2) \
+	DMUB_SR(DMCUB_SCRATCH3) \
+	DMUB_SR(DMCUB_SCRATCH4) \
+	DMUB_SR(DMCUB_SCRATCH5) \
+	DMUB_SR(DMCUB_SCRATCH6) \
+	DMUB_SR(DMCUB_SCRATCH7) \
+	DMUB_SR(DMCUB_SCRATCH8) \
+	DMUB_SR(DMCUB_SCRATCH9) \
+	DMUB_SR(DMCUB_SCRATCH10) \
+	DMUB_SR(DMCUB_SCRATCH11) \
+	DMUB_SR(DMCUB_SCRATCH12) \
+	DMUB_SR(DMCUB_SCRATCH13) \
+	DMUB_SR(DMCUB_SCRATCH14) \
+	DMUB_SR(DMCUB_SCRATCH15) \
+	DMUB_SR(DMCUB_SCRATCH16) \
+	DMUB_SR(DMCUB_SCRATCH17) \
+	DMUB_SR(DMCUB_GPINT_DATAIN1) \
+	DMUB_SR(DMCUB_GPINT_DATAOUT) \
+	DMUB_SR(CC_DC_PIPE_DIS) \
+	DMUB_SR(MMHUBBUB_SOFT_RESET) \
+	DMUB_SR(DCN_VM_FB_LOCATION_BASE) \
+	DMUB_SR(DCN_VM_FB_OFFSET) \
+	DMUB_SR(DMCUB_TIMER_CURRENT) \
+	DMUB_SR(DMCUB_INST_FETCH_FAULT_ADDR) \
+	DMUB_SR(DMCUB_UNDEFINED_ADDRESS_FAULT_ADDR) \
+	DMUB_SR(DMCUB_DATA_WRITE_FAULT_ADDR) \
+	DMUB_SR(DMCUB_REGION3_TMR_AXI_SPACE) \
+	DMUB_SR(DMCUB_INTERRUPT_ENABLE) \
+	DMUB_SR(DMCUB_INTERRUPT_ACK)
+
+#define DMUB_DCN32_FIELDS() \
+	DMUB_SF(DMCUB_CNTL, DMCUB_ENABLE) \
+	DMUB_SF(DMCUB_CNTL, DMCUB_TRACEPORT_EN) \
+	DMUB_SF(DMCUB_CNTL2, DMCUB_SOFT_RESET) \
+	DMUB_SF(DMCUB_SEC_CNTL, DMCUB_SEC_RESET) \
+	DMUB_SF(DMCUB_SEC_CNTL, DMCUB_MEM_UNIT_ID) \
+	DMUB_SF(DMCUB_SEC_CNTL, DMCUB_SEC_RESET_STATUS) \
+	DMUB_SF(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW0_TOP_ADDRESS, DMCUB_REGION3_CW0_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW1_TOP_ADDRESS, DMCUB_REGION3_CW1_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW1_TOP_ADDRESS, DMCUB_REGION3_CW1_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW2_TOP_ADDRESS, DMCUB_REGION3_CW2_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW2_TOP_ADDRESS, DMCUB_REGION3_CW2_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW3_TOP_ADDRESS, DMCUB_REGION3_CW3_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW3_TOP_ADDRESS, DMCUB_REGION3_CW3_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW4_TOP_ADDRESS, DMCUB_REGION3_CW4_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW4_TOP_ADDRESS, DMCUB_REGION3_CW4_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW5_TOP_ADDRESS, DMCUB_REGION3_CW5_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW5_TOP_ADDRESS, DMCUB_REGION3_CW5_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW6_TOP_ADDRESS, DMCUB_REGION3_CW6_ENABLE) \
+	DMUB_SF(DMCUB_REGION3_CW7_TOP_ADDRESS, DMCUB_REGION3_CW7_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION3_CW7_TOP_ADDRESS, DMCUB_REGION3_CW7_ENABLE) \
+	DMUB_SF(DMCUB_REGION4_TOP_ADDRESS, DMCUB_REGION4_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION4_TOP_ADDRESS, DMCUB_REGION4_ENABLE) \
+	DMUB_SF(DMCUB_REGION5_TOP_ADDRESS, DMCUB_REGION5_TOP_ADDRESS) \
+	DMUB_SF(DMCUB_REGION5_TOP_ADDRESS, DMCUB_REGION5_ENABLE) \
+	DMUB_SF(CC_DC_PIPE_DIS, DC_DMCUB_ENABLE) \
+	DMUB_SF(MMHUBBUB_SOFT_RESET, DMUIF_SOFT_RESET) \
+	DMUB_SF(DCN_VM_FB_LOCATION_BASE, FB_BASE) \
+	DMUB_SF(DCN_VM_FB_OFFSET, FB_OFFSET) \
+	DMUB_SF(DMCUB_INBOX0_WPTR, DMCUB_INBOX0_WPTR) \
+	DMUB_SF(DMCUB_REGION3_TMR_AXI_SPACE, DMCUB_REGION3_TMR_AXI_SPACE) \
+	DMUB_SF(DMCUB_INTERRUPT_ENABLE, DMCUB_GPINT_IH_INT_EN) \
+	DMUB_SF(DMCUB_INTERRUPT_ACK, DMCUB_GPINT_IH_INT_ACK)
+
+struct dmub_srv_dcn32_reg_offset {
+#define DMUB_SR(reg) uint32_t reg;
+	DMUB_DCN32_REGS()
+	DMCUB_INTERNAL_REGS()
+#undef DMUB_SR
+};
+
+struct dmub_srv_dcn32_reg_shift {
+#define DMUB_SF(reg, field) uint8_t reg##__##field;
+	DMUB_DCN32_FIELDS()
+#undef DMUB_SF
+};
+
+struct dmub_srv_dcn32_reg_mask {
+#define DMUB_SF(reg, field) uint32_t reg##__##field;
+	DMUB_DCN32_FIELDS()
+#undef DMUB_SF
+};
+
+struct dmub_srv_dcn32_regs {
+	const struct dmub_srv_dcn32_reg_offset offset;
+	const struct dmub_srv_dcn32_reg_mask mask;
+	const struct dmub_srv_dcn32_reg_shift shift;
+};
+
+extern const struct dmub_srv_dcn32_regs dmub_srv_dcn32_regs;
+
+void dmub_dcn32_reset(struct dmub_srv *dmub);
+
+void dmub_dcn32_reset_release(struct dmub_srv *dmub);
+
+void dmub_dcn32_backdoor_load(struct dmub_srv *dmub,
+			      const struct dmub_window *cw0,
+			      const struct dmub_window *cw1);
+
+void dmub_dcn32_backdoor_load_zfb_mode(struct dmub_srv *dmub,
+		      const struct dmub_window *cw0,
+		      const struct dmub_window *cw1);
+
+void dmub_dcn32_setup_windows(struct dmub_srv *dmub,
+			      const struct dmub_window *cw2,
+			      const struct dmub_window *cw3,
+			      const struct dmub_window *cw4,
+			      const struct dmub_window *cw5,
+			      const struct dmub_window *cw6);
+
+void dmub_dcn32_setup_mailbox(struct dmub_srv *dmub,
+			      const struct dmub_region *inbox1);
+
+uint32_t dmub_dcn32_get_inbox1_rptr(struct dmub_srv *dmub);
+
+void dmub_dcn32_set_inbox1_wptr(struct dmub_srv *dmub, uint32_t wptr_offset);
+
+void dmub_dcn32_setup_out_mailbox(struct dmub_srv *dmub,
+			      const struct dmub_region *outbox1);
+
+uint32_t dmub_dcn32_get_outbox1_wptr(struct dmub_srv *dmub);
+
+void dmub_dcn32_set_outbox1_rptr(struct dmub_srv *dmub, uint32_t rptr_offset);
+
+bool dmub_dcn32_is_hw_init(struct dmub_srv *dmub);
+
+bool dmub_dcn32_is_supported(struct dmub_srv *dmub);
+
+void dmub_dcn32_set_gpint(struct dmub_srv *dmub,
+			  union dmub_gpint_data_register reg);
+
+bool dmub_dcn32_is_gpint_acked(struct dmub_srv *dmub,
+			       union dmub_gpint_data_register reg);
+
+uint32_t dmub_dcn32_get_gpint_response(struct dmub_srv *dmub);
+
+uint32_t dmub_dcn32_get_gpint_dataout(struct dmub_srv *dmub);
+
+void dmub_dcn32_enable_dmub_boot_options(struct dmub_srv *dmub, const struct dmub_srv_hw_params *params);
+
+void dmub_dcn32_skip_dmub_panel_power_sequence(struct dmub_srv *dmub, bool skip);
+
+union dmub_fw_boot_status dmub_dcn32_get_fw_boot_status(struct dmub_srv *dmub);
+
+void dmub_dcn32_setup_outbox0(struct dmub_srv *dmub,
+			      const struct dmub_region *outbox0);
+
+uint32_t dmub_dcn32_get_outbox0_wptr(struct dmub_srv *dmub);
+
+void dmub_dcn32_set_outbox0_rptr(struct dmub_srv *dmub, uint32_t rptr_offset);
+
+uint32_t dmub_dcn32_get_current_time(struct dmub_srv *dmub);
+
+void dmub_dcn32_get_diagnostic_data(struct dmub_srv *dmub, struct dmub_diagnostic_data *diag_data);
+
+void dmub_dcn32_configure_dmub_in_system_memory(struct dmub_srv *dmub);
+void dmub_dcn32_send_inbox0_cmd(struct dmub_srv *dmub, union dmub_inbox0_data_register data);
+void dmub_dcn32_clear_inbox0_ack_register(struct dmub_srv *dmub);
+uint32_t dmub_dcn32_read_inbox0_ack_register(struct dmub_srv *dmub);
+
+#endif /* _DMUB_DCN32_H_ */
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
index 66db5e538c7f..4c6a624f04a7 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
@@ -34,6 +34,7 @@
 #include "dmub_dcn31.h"
 #include "dmub_dcn315.h"
 #include "dmub_dcn316.h"
+#include "dmub_dcn32.h"
 #include "os_types.h"
 /*
  * Note: the DMUB service is standalone. No additional headers should be
@@ -260,6 +261,43 @@ static bool dmub_srv_hw_setup(struct dmub_srv *dmub, enum dmub_asic asic)
 
 		break;
 
+	case DMUB_ASIC_DCN32:
+	case DMUB_ASIC_DCN321:
+		dmub->regs_dcn32 = &dmub_srv_dcn32_regs;
+		funcs->configure_dmub_in_system_memory = dmub_dcn32_configure_dmub_in_system_memory;
+		funcs->send_inbox0_cmd = dmub_dcn32_send_inbox0_cmd;
+		funcs->clear_inbox0_ack_register = dmub_dcn32_clear_inbox0_ack_register;
+		funcs->read_inbox0_ack_register = dmub_dcn32_read_inbox0_ack_register;
+		funcs->reset = dmub_dcn32_reset;
+		funcs->reset_release = dmub_dcn32_reset_release;
+		funcs->backdoor_load = dmub_dcn32_backdoor_load;
+		funcs->backdoor_load_zfb_mode = dmub_dcn32_backdoor_load_zfb_mode;
+		funcs->setup_windows = dmub_dcn32_setup_windows;
+		funcs->setup_mailbox = dmub_dcn32_setup_mailbox;
+		funcs->get_inbox1_rptr = dmub_dcn32_get_inbox1_rptr;
+		funcs->set_inbox1_wptr = dmub_dcn32_set_inbox1_wptr;
+		funcs->setup_out_mailbox = dmub_dcn32_setup_out_mailbox;
+		funcs->get_outbox1_wptr = dmub_dcn32_get_outbox1_wptr;
+		funcs->set_outbox1_rptr = dmub_dcn32_set_outbox1_rptr;
+		funcs->is_supported = dmub_dcn32_is_supported;
+		funcs->is_hw_init = dmub_dcn32_is_hw_init;
+		funcs->set_gpint = dmub_dcn32_set_gpint;
+		funcs->is_gpint_acked = dmub_dcn32_is_gpint_acked;
+		funcs->get_gpint_response = dmub_dcn32_get_gpint_response;
+		funcs->get_gpint_dataout = dmub_dcn32_get_gpint_dataout;
+		funcs->get_fw_status = dmub_dcn32_get_fw_boot_status;
+		funcs->enable_dmub_boot_options = dmub_dcn32_enable_dmub_boot_options;
+		funcs->skip_dmub_panel_power_sequence = dmub_dcn32_skip_dmub_panel_power_sequence;
+
+		/* outbox0 call stacks */
+		funcs->setup_outbox0 = dmub_dcn32_setup_outbox0;
+		funcs->get_outbox0_wptr = dmub_dcn32_get_outbox0_wptr;
+		funcs->set_outbox0_rptr = dmub_dcn32_set_outbox0_rptr;
+		funcs->get_current_time = dmub_dcn32_get_current_time;
+		funcs->get_diagnostic_data = dmub_dcn32_get_diagnostic_data;
+
+		break;
+
 	default:
 		return false;
 	}
@@ -501,6 +539,9 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 	cw1.region.base = DMUB_CW1_BASE;
 	cw1.region.top = cw1.region.base + stack_fb->size - 1;
 
+	if (params->fw_in_system_memory && dmub->hw_funcs.configure_dmub_in_system_memory)
+		dmub->hw_funcs.configure_dmub_in_system_memory(dmub);
+
 	if (params->load_inst_const && dmub->hw_funcs.backdoor_load) {
 		/**
 		 * Read back all the instruction memory so we don't hang the
@@ -508,7 +549,11 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 		 * flushed yet. This only occurs in backdoor loading.
 		 */
 		dmub_flush_buffer_mem(inst_fb);
-		dmub->hw_funcs.backdoor_load(dmub, &cw0, &cw1);
+
+		if (params->fw_in_system_memory && dmub->hw_funcs.backdoor_load_zfb_mode)
+			dmub->hw_funcs.backdoor_load_zfb_mode(dmub, &cw0, &cw1);
+		else
+			dmub->hw_funcs.backdoor_load(dmub, &cw0, &cw1);
 	}
 
 	cw2.offset.quad_part = data_fb->gpu_addr;
@@ -583,6 +628,10 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
 	if (dmub->hw_funcs.enable_dmub_boot_options)
 		dmub->hw_funcs.enable_dmub_boot_options(dmub, params);
 
+	if (dmub->hw_funcs.skip_dmub_panel_power_sequence)
+		dmub->hw_funcs.skip_dmub_panel_power_sequence(dmub,
+			params->skip_panel_power_sequence);
+
 	if (dmub->hw_funcs.reset_release)
 		dmub->hw_funcs.reset_release(dmub);
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 06/43] drm/amd/display: add dcn32 IRQ changes
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (3 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 05/43] drm/amd/display: Add DMCUB source files and changes for DCN32/321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 07/43] drm/amd/display: add GPIO changes for DCN32/321 Alex Deucher
                   ` (36 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add DCN3.2.x interrupt support.
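
For orientation, a minimal usage sketch of the service this patch adds (the
surrounding DM plumbing is omitted; the init_data setup shown here is
illustrative only): the per-ASIC service is created once, then used to
translate the raw interrupt-handler source id into a DAL irq source.

	/* Sketch only: assumes a valid dc_context is already at hand. */
	struct irq_service_init_data init_data = { .ctx = dc_ctx };
	struct irq_service *irq_srv = dal_irq_service_dcn32_create(&init_data);

	if (irq_srv) {
		/* A HUBP0 page-flip source maps to DC_IRQ_SOURCE_PFLIP1
		 * per the table introduced below.
		 */
		enum dc_irq_source src = to_dal_irq_source_dcn32(irq_srv,
				DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT, 0);
	}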

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/irq/Makefile   |  10 +-
 .../display/dc/irq/dcn32/irq_service_dcn32.c  | 367 ++++++++++++++++++
 .../display/dc/irq/dcn32/irq_service_dcn32.h  |  35 ++
 3 files changed, 411 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.h

diff --git a/drivers/gpu/drm/amd/display/dc/irq/Makefile b/drivers/gpu/drm/amd/display/dc/irq/Makefile
index 5f49048dde47..41da81c85fdc 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/irq/Makefile
@@ -135,7 +135,6 @@ IRQ_DCN31 = irq_service_dcn31.o
 AMD_DAL_IRQ_DCN31= $(addprefix $(AMDDALPATH)/dc/irq/dcn31/,$(IRQ_DCN31))
 
 AMD_DISPLAY_FILES += $(AMD_DAL_IRQ_DCN31)
-
 ###############################################################################
 # DCN 315
 ###############################################################################
@@ -144,3 +143,12 @@ IRQ_DCN315 = irq_service_dcn315.o
 AMD_DAL_IRQ_DCN315= $(addprefix $(AMDDALPATH)/dc/irq/dcn315/,$(IRQ_DCN315))
 
 AMD_DISPLAY_FILES += $(AMD_DAL_IRQ_DCN315)
+
+###############################################################################
+# DCN 32
+###############################################################################
+IRQ_DCN32 = irq_service_dcn32.o
+
+AMD_DAL_IRQ_DCN32= $(addprefix $(AMDDALPATH)/dc/irq/dcn32/,$(IRQ_DCN32))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_IRQ_DCN32)
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
new file mode 100644
index 000000000000..5c87057a6abc
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
@@ -0,0 +1,367 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "include/logger_interface.h"
+#include "../dce110/irq_service_dce110.h"
+
+#include "dcn/dcn_3_2_0_offset.h"
+#include "dcn/dcn_3_2_0_sh_mask.h"
+
+#include "irq_service_dcn32.h"
+
+#include "ivsrcid/dcn/irqsrcs_dcn_1_0.h"
+
+enum dc_irq_source to_dal_irq_source_dcn32(
+		struct irq_service *irq_service,
+		uint32_t src_id,
+		uint32_t ext_id)
+{
+	switch (src_id) {
+	case DCN_1_0__SRCID__DC_D1_OTG_VSTARTUP:
+		return DC_IRQ_SOURCE_VBLANK1;
+	case DCN_1_0__SRCID__DC_D2_OTG_VSTARTUP:
+		return DC_IRQ_SOURCE_VBLANK2;
+	case DCN_1_0__SRCID__DC_D3_OTG_VSTARTUP:
+		return DC_IRQ_SOURCE_VBLANK3;
+	case DCN_1_0__SRCID__DC_D4_OTG_VSTARTUP:
+		return DC_IRQ_SOURCE_VBLANK4;
+	case DCN_1_0__SRCID__DC_D5_OTG_VSTARTUP:
+		return DC_IRQ_SOURCE_VBLANK5;
+	case DCN_1_0__SRCID__DC_D6_OTG_VSTARTUP:
+		return DC_IRQ_SOURCE_VBLANK6;
+	case DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT:
+		return DC_IRQ_SOURCE_PFLIP1;
+	case DCN_1_0__SRCID__HUBP1_FLIP_INTERRUPT:
+		return DC_IRQ_SOURCE_PFLIP2;
+	case DCN_1_0__SRCID__HUBP2_FLIP_INTERRUPT:
+		return DC_IRQ_SOURCE_PFLIP3;
+	case DCN_1_0__SRCID__HUBP3_FLIP_INTERRUPT:
+		return DC_IRQ_SOURCE_PFLIP4;
+	case DCN_1_0__SRCID__HUBP4_FLIP_INTERRUPT:
+		return DC_IRQ_SOURCE_PFLIP5;
+	case DCN_1_0__SRCID__HUBP5_FLIP_INTERRUPT:
+		return DC_IRQ_SOURCE_PFLIP6;
+	case DCN_1_0__SRCID__OTG0_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
+		return DC_IRQ_SOURCE_VUPDATE1;
+	case DCN_1_0__SRCID__OTG1_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
+		return DC_IRQ_SOURCE_VUPDATE2;
+	case DCN_1_0__SRCID__OTG2_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
+		return DC_IRQ_SOURCE_VUPDATE3;
+	case DCN_1_0__SRCID__OTG3_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
+		return DC_IRQ_SOURCE_VUPDATE4;
+	case DCN_1_0__SRCID__OTG4_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
+		return DC_IRQ_SOURCE_VUPDATE5;
+	case DCN_1_0__SRCID__OTG5_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
+		return DC_IRQ_SOURCE_VUPDATE6;
+
+	case DCN_1_0__SRCID__DC_HPD1_INT:
+		/* generic src_id for all HPD and HPDRX interrupts */
+		switch (ext_id) {
+		case DCN_1_0__CTXID__DC_HPD1_INT:
+			return DC_IRQ_SOURCE_HPD1;
+		case DCN_1_0__CTXID__DC_HPD2_INT:
+			return DC_IRQ_SOURCE_HPD2;
+		case DCN_1_0__CTXID__DC_HPD3_INT:
+			return DC_IRQ_SOURCE_HPD3;
+		case DCN_1_0__CTXID__DC_HPD4_INT:
+			return DC_IRQ_SOURCE_HPD4;
+		case DCN_1_0__CTXID__DC_HPD5_INT:
+			return DC_IRQ_SOURCE_HPD5;
+		case DCN_1_0__CTXID__DC_HPD6_INT:
+			return DC_IRQ_SOURCE_HPD6;
+		case DCN_1_0__CTXID__DC_HPD1_RX_INT:
+			return DC_IRQ_SOURCE_HPD1RX;
+		case DCN_1_0__CTXID__DC_HPD2_RX_INT:
+			return DC_IRQ_SOURCE_HPD2RX;
+		case DCN_1_0__CTXID__DC_HPD3_RX_INT:
+			return DC_IRQ_SOURCE_HPD3RX;
+		case DCN_1_0__CTXID__DC_HPD4_RX_INT:
+			return DC_IRQ_SOURCE_HPD4RX;
+		case DCN_1_0__CTXID__DC_HPD5_RX_INT:
+			return DC_IRQ_SOURCE_HPD5RX;
+		case DCN_1_0__CTXID__DC_HPD6_RX_INT:
+			return DC_IRQ_SOURCE_HPD6RX;
+		default:
+			return DC_IRQ_SOURCE_INVALID;
+		}
+		break;
+
+	default:
+		return DC_IRQ_SOURCE_INVALID;
+	}
+}
+
+static bool hpd_ack(
+	struct irq_service *irq_service,
+	const struct irq_source_info *info)
+{
+	uint32_t addr = info->status_reg;
+	uint32_t value = dm_read_reg(irq_service->ctx, addr);
+	uint32_t current_status =
+		get_reg_field_value(
+			value,
+			HPD0_DC_HPD_INT_STATUS,
+			DC_HPD_SENSE_DELAYED);
+
+	dal_irq_service_ack_generic(irq_service, info);
+
+	value = dm_read_reg(irq_service->ctx, info->enable_reg);
+
+	set_reg_field_value(
+		value,
+		current_status ? 0 : 1,
+		HPD0_DC_HPD_INT_CONTROL,
+		DC_HPD_INT_POLARITY);
+
+	dm_write_reg(irq_service->ctx, info->enable_reg, value);
+
+	return true;
+}
+
+static const struct irq_source_info_funcs hpd_irq_info_funcs = {
+	.set = NULL,
+	.ack = hpd_ack
+};
+
+static const struct irq_source_info_funcs hpd_rx_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+static const struct irq_source_info_funcs pflip_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+static const struct irq_source_info_funcs vupdate_no_lock_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+static const struct irq_source_info_funcs vblank_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+#undef BASE_INNER
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+/* compile time expand base address. */
+#define BASE(seg) \
+	BASE_INNER(seg)
+
+#define SRI(reg_name, block, id)\
+	BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+			reg ## block ## id ## _ ## reg_name
+
+#define IRQ_REG_ENTRY(block, reg_num, reg1, mask1, reg2, mask2)\
+	.enable_reg = SRI(reg1, block, reg_num),\
+	.enable_mask = \
+		block ## reg_num ## _ ## reg1 ## __ ## mask1 ## _MASK,\
+	.enable_value = {\
+		block ## reg_num ## _ ## reg1 ## __ ## mask1 ## _MASK,\
+		~block ## reg_num ## _ ## reg1 ## __ ## mask1 ## _MASK \
+	},\
+	.ack_reg = SRI(reg2, block, reg_num),\
+	.ack_mask = \
+		block ## reg_num ## _ ## reg2 ## __ ## mask2 ## _MASK,\
+	.ack_value = \
+		block ## reg_num ## _ ## reg2 ## __ ## mask2 ## _MASK \
+
+#define hpd_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_HPD1 + reg_num] = {\
+		IRQ_REG_ENTRY(HPD, reg_num,\
+			DC_HPD_INT_CONTROL, DC_HPD_INT_EN,\
+			DC_HPD_INT_CONTROL, DC_HPD_INT_ACK),\
+		.status_reg = SRI(DC_HPD_INT_STATUS, HPD, reg_num),\
+		.funcs = &hpd_irq_info_funcs\
+	}
+
+#define hpd_rx_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_HPD1RX + reg_num] = {\
+		IRQ_REG_ENTRY(HPD, reg_num,\
+			DC_HPD_INT_CONTROL, DC_HPD_RX_INT_EN,\
+			DC_HPD_INT_CONTROL, DC_HPD_RX_INT_ACK),\
+		.status_reg = SRI(DC_HPD_INT_STATUS, HPD, reg_num),\
+		.funcs = &hpd_rx_irq_info_funcs\
+	}
+#define pflip_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_PFLIP1 + reg_num] = {\
+		IRQ_REG_ENTRY(HUBPREQ, reg_num,\
+			DCSURF_SURFACE_FLIP_INTERRUPT, SURFACE_FLIP_INT_MASK,\
+			DCSURF_SURFACE_FLIP_INTERRUPT, SURFACE_FLIP_CLEAR),\
+		.funcs = &pflip_irq_info_funcs\
+	}
+
+/* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match the semantics
+ * of DCE's DC_IRQ_SOURCE_VUPDATEx.
+ */
+#define vupdate_no_lock_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
+		IRQ_REG_ENTRY(OTG, reg_num,\
+			OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_INT_EN,\
+			OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_EVENT_CLEAR),\
+		.funcs = &vupdate_no_lock_irq_info_funcs\
+	}
+
+#define vblank_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
+		IRQ_REG_ENTRY(OTG, reg_num,\
+			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_INT_EN,\
+			OTG_GLOBAL_SYNC_STATUS, VSTARTUP_EVENT_CLEAR),\
+		.funcs = &vblank_irq_info_funcs\
+}
+
+#define dummy_irq_entry() \
+	{\
+		.funcs = &dummy_irq_info_funcs\
+	}
+
+#define i2c_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_I2C_DDC ## reg_num] = dummy_irq_entry()
+
+#define dp_sink_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_DPSINK ## reg_num] = dummy_irq_entry()
+
+#define gpio_pad_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_GPIOPAD ## reg_num] = dummy_irq_entry()
+
+#define dc_underflow_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_DC ## reg_num ## UNDERFLOW] = dummy_irq_entry()
+
+static const struct irq_source_info_funcs dummy_irq_info_funcs = {
+	.set = dal_irq_service_dummy_set,
+	.ack = dal_irq_service_dummy_ack
+};
+
+static const struct irq_source_info
+irq_source_info_dcn32[DAL_IRQ_SOURCES_NUMBER] = {
+	[DC_IRQ_SOURCE_INVALID] = dummy_irq_entry(),
+	hpd_int_entry(0),
+	hpd_int_entry(1),
+	hpd_int_entry(2),
+	hpd_int_entry(3),
+	hpd_int_entry(4),
+	hpd_rx_int_entry(0),
+	hpd_rx_int_entry(1),
+	hpd_rx_int_entry(2),
+	hpd_rx_int_entry(3),
+	hpd_rx_int_entry(4),
+	i2c_int_entry(1),
+	i2c_int_entry(2),
+	i2c_int_entry(3),
+	i2c_int_entry(4),
+	i2c_int_entry(5),
+	i2c_int_entry(6),
+	dp_sink_int_entry(1),
+	dp_sink_int_entry(2),
+	dp_sink_int_entry(3),
+	dp_sink_int_entry(4),
+	dp_sink_int_entry(5),
+	dp_sink_int_entry(6),
+	[DC_IRQ_SOURCE_TIMER] = dummy_irq_entry(),
+	pflip_int_entry(0),
+	pflip_int_entry(1),
+	pflip_int_entry(2),
+	pflip_int_entry(3),
+	[DC_IRQ_SOURCE_PFLIP5] = dummy_irq_entry(),
+	[DC_IRQ_SOURCE_PFLIP6] = dummy_irq_entry(),
+	[DC_IRQ_SOURCE_PFLIP_UNDERLAY0] = dummy_irq_entry(),
+	gpio_pad_int_entry(0),
+	gpio_pad_int_entry(1),
+	gpio_pad_int_entry(2),
+	gpio_pad_int_entry(3),
+	gpio_pad_int_entry(4),
+	gpio_pad_int_entry(5),
+	gpio_pad_int_entry(6),
+	gpio_pad_int_entry(7),
+	gpio_pad_int_entry(8),
+	gpio_pad_int_entry(9),
+	gpio_pad_int_entry(10),
+	gpio_pad_int_entry(11),
+	gpio_pad_int_entry(12),
+	gpio_pad_int_entry(13),
+	gpio_pad_int_entry(14),
+	gpio_pad_int_entry(15),
+	gpio_pad_int_entry(16),
+	gpio_pad_int_entry(17),
+	gpio_pad_int_entry(18),
+	gpio_pad_int_entry(19),
+	gpio_pad_int_entry(20),
+	gpio_pad_int_entry(21),
+	gpio_pad_int_entry(22),
+	gpio_pad_int_entry(23),
+	gpio_pad_int_entry(24),
+	gpio_pad_int_entry(25),
+	gpio_pad_int_entry(26),
+	gpio_pad_int_entry(27),
+	gpio_pad_int_entry(28),
+	gpio_pad_int_entry(29),
+	gpio_pad_int_entry(30),
+	dc_underflow_int_entry(1),
+	dc_underflow_int_entry(2),
+	dc_underflow_int_entry(3),
+	dc_underflow_int_entry(4),
+	dc_underflow_int_entry(5),
+	dc_underflow_int_entry(6),
+	[DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+	[DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+	vupdate_no_lock_int_entry(0),
+	vupdate_no_lock_int_entry(1),
+	vupdate_no_lock_int_entry(2),
+	vupdate_no_lock_int_entry(3),
+	vblank_int_entry(0),
+	vblank_int_entry(1),
+	vblank_int_entry(2),
+	vblank_int_entry(3),
+};
+
+static const struct irq_service_funcs irq_service_funcs_dcn32 = {
+		.to_dal_irq_source = to_dal_irq_source_dcn32
+};
+
+static void dcn32_irq_construct(
+	struct irq_service *irq_service,
+	struct irq_service_init_data *init_data)
+{
+	dal_irq_service_construct(irq_service, init_data);
+
+	irq_service->info = irq_source_info_dcn32;
+	irq_service->funcs = &irq_service_funcs_dcn32;
+}
+
+struct irq_service *dal_irq_service_dcn32_create(
+	struct irq_service_init_data *init_data)
+{
+	struct irq_service *irq_service = kzalloc(sizeof(*irq_service),
+						  GFP_KERNEL);
+
+	if (!irq_service)
+		return NULL;
+
+	dcn32_irq_construct(irq_service, init_data);
+	return irq_service;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.h b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.h
new file mode 100644
index 000000000000..a0d9c9e4e17f
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#ifndef __DAL_IRQ_SERVICE_DCN32_H__
+#define __DAL_IRQ_SERVICE_DCN32_H__
+
+#include "../irq_service.h"
+
+struct irq_service *dal_irq_service_dcn32_create(
+	struct irq_service_init_data *init_data);
+
+#endif /* __DAL_IRQ_SERVICE_DCN32_H__ */
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 07/43] drm/amd/display: add GPIO changes for DCN32/321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (4 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 06/43] drm/amd/display: add dcn32 IRQ changes Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 08/43] drm/amd/display: DML " Alex Deucher
                   ` (35 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add GPIO support for DCN3.2.x.

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/gpio/Makefile  |   8 +-
 .../display/dc/gpio/dcn32/hw_factory_dcn32.c  | 255 +++++++++++++
 .../hw_factory_dcn32.h}                       |  13 +-
 .../dc/gpio/dcn32/hw_translate_dcn32.c        | 349 ++++++++++++++++++
 .../hw_translate_dcn32.h}                     |  11 +-
 .../dc/gpio/diagnostics/hw_factory_diag.c     |  62 ----
 .../dc/gpio/diagnostics/hw_translate_diag.c   |  41 --
 .../gpu/drm/amd/display/dc/gpio/hw_factory.c  |  16 +-
 .../drm/amd/display/dc/gpio/hw_translate.c    |  13 +-
 .../display/dc/irq/dcn32/irq_service_dcn32.c  |   2 +
 10 files changed, 630 insertions(+), 140 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
 rename drivers/gpu/drm/amd/display/dc/gpio/{diagnostics/hw_factory_diag.h => dcn32/hw_factory_dcn32.h} (81%)
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.c
 rename drivers/gpu/drm/amd/display/dc/gpio/{diagnostics/hw_translate_diag.h => dcn32/hw_translate_dcn32.h} (82%)
 delete mode 100644 drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.c
 delete mode 100644 drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.c

diff --git a/drivers/gpu/drm/amd/display/dc/gpio/Makefile b/drivers/gpu/drm/amd/display/dc/gpio/Makefile
index 0f4a22be8c40..bc47481a158e 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/gpio/Makefile
@@ -115,10 +115,10 @@ AMD_DAL_GPIO_DCN315 = $(addprefix $(AMDDALPATH)/dc/gpio/dcn315/,$(GPIO_DCN315))
 AMD_DISPLAY_FILES += $(AMD_DAL_GPIO_DCN315)
 
 ###############################################################################
-# Diagnostics on FPGA
+# DCN 3.2
 ###############################################################################
-GPIO_DIAG_FPGA = hw_translate_diag.o hw_factory_diag.o
+GPIO_DCN32 = hw_translate_dcn32.o hw_factory_dcn32.o
 
-AMD_DAL_GPIO_DIAG_FPGA = $(addprefix $(AMDDALPATH)/dc/gpio/diagnostics/,$(GPIO_DIAG_FPGA))
+AMD_DAL_GPIO_DCN32 = $(addprefix $(AMDDALPATH)/dc/gpio/dcn32/,$(GPIO_DCN32))
 
-AMD_DISPLAY_FILES += $(AMD_DAL_GPIO_DIAG_FPGA)
+AMD_DISPLAY_FILES += $(AMD_DAL_GPIO_DCN32)
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
new file mode 100644
index 000000000000..d635b73af46f
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.c
@@ -0,0 +1,255 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#include "dm_services.h"
+#include "include/gpio_types.h"
+#include "../hw_factory.h"
+
+
+#include "../hw_gpio.h"
+#include "../hw_ddc.h"
+#include "../hw_hpd.h"
+#include "../hw_generic.h"
+
+#include "hw_factory_dcn32.h"
+
+#include "dcn/dcn_3_2_0_offset.h"
+#include "dcn/dcn_3_2_0_sh_mask.h"
+
+#include "reg_helper.h"
+#include "../hpd_regs.h"
+
+#define DCN_BASE__INST0_SEG2                       0x000034C0
+
+/* begin *********************
+ * macros to expand the register list macros defined in the HW object header file */
+
+/* DCN */
+#define block HPD
+#define reg_num 0
+
+#undef BASE_INNER
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+#define BASE(seg) BASE_INNER(seg)
+
+
+
+#define REG(reg_name)\
+		BASE(reg ## reg_name ## _BASE_IDX) + reg ## reg_name
+
+#define SF_HPD(reg_name, field_name, post_fix)\
+	.field_name = HPD0_ ## reg_name ## __ ## field_name ## post_fix
+
+#define REGI(reg_name, block, id)\
+	BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+				reg ## block ## id ## _ ## reg_name
+
+#define SF(reg_name, field_name, post_fix)\
+	.field_name = reg_name ## __ ## field_name ## post_fix
+
+/* macros to expand the register list macros defined in the HW object header file
+ * end *********************/
+
+
+
+#define hpd_regs(id) \
+{\
+	HPD_REG_LIST(id)\
+}
+
+static const struct hpd_registers hpd_regs[] = {
+	hpd_regs(0),
+	hpd_regs(1),
+	hpd_regs(2),
+	hpd_regs(3),
+	hpd_regs(4),
+};
+
+static const struct hpd_sh_mask hpd_shift = {
+		HPD_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct hpd_sh_mask hpd_mask = {
+		HPD_MASK_SH_LIST(_MASK)
+};
+
+#include "../ddc_regs.h"
+
+ /* set field name */
+#define SF_DDC(reg_name, field_name, post_fix)\
+	.field_name = reg_name ## __ ## field_name ## post_fix
+
+static const struct ddc_registers ddc_data_regs_dcn[] = {
+	ddc_data_regs_dcn2(1),
+	ddc_data_regs_dcn2(2),
+	ddc_data_regs_dcn2(3),
+	ddc_data_regs_dcn2(4),
+	ddc_data_regs_dcn2(5),
+	{
+			DDC_GPIO_VGA_REG_LIST(DATA),
+			.ddc_setup = 0,
+			.phy_aux_cntl = 0,
+			.dc_gpio_aux_ctrl_5 = 0
+	}
+};
+
+static const struct ddc_registers ddc_clk_regs_dcn[] = {
+	ddc_clk_regs_dcn2(1),
+	ddc_clk_regs_dcn2(2),
+	ddc_clk_regs_dcn2(3),
+	ddc_clk_regs_dcn2(4),
+	ddc_clk_regs_dcn2(5),
+	{
+			DDC_GPIO_VGA_REG_LIST(CLK),
+			.ddc_setup = 0,
+			.phy_aux_cntl = 0,
+			.dc_gpio_aux_ctrl_5 = 0
+	}
+};
+
+static const struct ddc_sh_mask ddc_shift[] = {
+	DDC_MASK_SH_LIST_DCN2(__SHIFT, 1),
+	DDC_MASK_SH_LIST_DCN2(__SHIFT, 2),
+	DDC_MASK_SH_LIST_DCN2(__SHIFT, 3),
+	DDC_MASK_SH_LIST_DCN2(__SHIFT, 4),
+	DDC_MASK_SH_LIST_DCN2(__SHIFT, 5),
+	DDC_MASK_SH_LIST_DCN2(__SHIFT, 6)
+};
+
+static const struct ddc_sh_mask ddc_mask[] = {
+	DDC_MASK_SH_LIST_DCN2(_MASK, 1),
+	DDC_MASK_SH_LIST_DCN2(_MASK, 2),
+	DDC_MASK_SH_LIST_DCN2(_MASK, 3),
+	DDC_MASK_SH_LIST_DCN2(_MASK, 4),
+	DDC_MASK_SH_LIST_DCN2(_MASK, 5),
+	DDC_MASK_SH_LIST_DCN2(_MASK, 6)
+};
+
+#include "../generic_regs.h"
+
+/* set field name */
+#define SF_GENERIC(reg_name, field_name, post_fix)\
+	.field_name = reg_name ## __ ## field_name ## post_fix
+
+#define generic_regs(id) \
+{\
+	GENERIC_REG_LIST(id)\
+}
+
+static const struct generic_registers generic_regs[] = {
+	generic_regs(A),
+	generic_regs(B),
+};
+
+static const struct generic_sh_mask generic_shift[] = {
+	GENERIC_MASK_SH_LIST(__SHIFT, A),
+	GENERIC_MASK_SH_LIST(__SHIFT, B),
+};
+
+static const struct generic_sh_mask generic_mask[] = {
+	GENERIC_MASK_SH_LIST(_MASK, A),
+	GENERIC_MASK_SH_LIST(_MASK, B),
+};
+
+static void define_generic_registers(struct hw_gpio_pin *pin, uint32_t en)
+{
+	struct hw_generic *generic = HW_GENERIC_FROM_BASE(pin);
+
+	generic->regs = &generic_regs[en];
+	generic->shifts = &generic_shift[en];
+	generic->masks = &generic_mask[en];
+	generic->base.regs = &generic_regs[en].gpio;
+}
+
+static void define_ddc_registers(
+		struct hw_gpio_pin *pin,
+		uint32_t en)
+{
+	struct hw_ddc *ddc = HW_DDC_FROM_BASE(pin);
+
+	switch (pin->id) {
+	case GPIO_ID_DDC_DATA:
+		ddc->regs = &ddc_data_regs_dcn[en];
+		ddc->base.regs = &ddc_data_regs_dcn[en].gpio;
+		break;
+	case GPIO_ID_DDC_CLOCK:
+		ddc->regs = &ddc_clk_regs_dcn[en];
+		ddc->base.regs = &ddc_clk_regs_dcn[en].gpio;
+		break;
+	default:
+		ASSERT_CRITICAL(false);
+		return;
+	}
+
+	ddc->shifts = &ddc_shift[en];
+	ddc->masks = &ddc_mask[en];
+
+}
+
+static void define_hpd_registers(struct hw_gpio_pin *pin, uint32_t en)
+{
+	struct hw_hpd *hpd = HW_HPD_FROM_BASE(pin);
+
+	hpd->regs = &hpd_regs[en];
+	hpd->shifts = &hpd_shift;
+	hpd->masks = &hpd_mask;
+	hpd->base.regs = &hpd_regs[en].gpio;
+}
+
+
+/* function table */
+static const struct hw_factory_funcs funcs = {
+	.init_ddc_data = dal_hw_ddc_init,
+	.init_generic = dal_hw_generic_init,
+	.init_hpd = dal_hw_hpd_init,
+	.get_ddc_pin = dal_hw_ddc_get_pin,
+	.get_hpd_pin = dal_hw_hpd_get_pin,
+	.get_generic_pin = dal_hw_generic_get_pin,
+	.define_hpd_registers = define_hpd_registers,
+	.define_ddc_registers = define_ddc_registers,
+	.define_generic_registers = define_generic_registers
+};
+/*
+ * dal_hw_factory_dcn32_init
+ *
+ * @brief
+ * Initialize HW factory function pointers and pin info
+ *
+ * @param
+ * struct hw_factory *factory - [out] struct of function pointers
+ */
+void dal_hw_factory_dcn32_init(struct hw_factory *factory)
+{
+	factory->number_of_pins[GPIO_ID_DDC_DATA] = 6;
+	factory->number_of_pins[GPIO_ID_DDC_CLOCK] = 6;
+	factory->number_of_pins[GPIO_ID_GENERIC] = 4;
+	factory->number_of_pins[GPIO_ID_HPD] = 5;
+	factory->number_of_pins[GPIO_ID_GPIO_PAD] = 28;
+	factory->number_of_pins[GPIO_ID_VIP_PAD] = 0;
+	factory->number_of_pins[GPIO_ID_SYNC] = 0;
+	factory->number_of_pins[GPIO_ID_GSL] = 0;/*add this*/
+
+	factory->funcs = &funcs;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.h b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.h
similarity index 81%
rename from drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.h
rename to drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.h
index bf68eb1d9a1d..71138d0e192b 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.h
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_factory_dcn32.h
@@ -1,5 +1,5 @@
 /*
- * Copyright 2013-16 Advanced Micro Devices, Inc.
+ * Copyright 2022 Advanced Micro Devices, Inc.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -22,13 +22,10 @@
  * Authors: AMD
  *
  */
-
-#ifndef __DAL_HW_FACTORY_DIAG_FPGA_H__
-#define __DAL_HW_FACTORY_DIAG_FPGA_H__
-
-struct hw_factory;
+#ifndef __DAL_HW_FACTORY_DCN32_H__
+#define __DAL_HW_FACTORY_DCN32_H__
 
 /* Initialize HW factory function pointers and pin info */
-void dal_hw_factory_diag_fpga_init(struct hw_factory *factory);
+void dal_hw_factory_dcn32_init(struct hw_factory *factory);
 
-#endif /* __DAL_HW_FACTORY_DIAG_FPGA_H__ */
+#endif /* __DAL_HW_FACTORY_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.c b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.c
new file mode 100644
index 000000000000..8493b9981f9e
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.c
@@ -0,0 +1,349 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+/*
+ * Pre-requisites: headers required by header of this unit
+ */
+#include "hw_translate_dcn32.h"
+
+#include "dm_services.h"
+#include "include/gpio_types.h"
+#include "../hw_translate.h"
+
+#include "dcn/dcn_3_2_0_offset.h"
+#include "dcn/dcn_3_2_0_sh_mask.h"
+
+#define DCN_BASE__INST0_SEG2                       0x000034C0
+
+/* begin *********************
+ * macros to expand the register list macros defined in the HW object header file */
+
+/* DCN */
+#define block HPD
+#define reg_num 0
+
+#undef BASE_INNER
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+#define BASE(seg) BASE_INNER(seg)
+
+#undef REG
+#define REG(reg_name)\
+		BASE(reg ## reg_name ## _BASE_IDX) + reg ## reg_name
+#define SF_HPD(reg_name, field_name, post_fix)\
+	.field_name = reg_name ## __ ## field_name ## post_fix
+
+
+/* macros to expand the register list macros defined in the HW object header file
+ * end *********************/
+
+
+static bool offset_to_id(
+	uint32_t offset,
+	uint32_t mask,
+	enum gpio_id *id,
+	uint32_t *en)
+{
+	switch (offset) {
+	/* GENERIC */
+	case REG(DC_GPIO_GENERIC_A):
+		*id = GPIO_ID_GENERIC;
+		switch (mask) {
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICA_A_MASK:
+			*en = GPIO_GENERIC_A;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICB_A_MASK:
+			*en = GPIO_GENERIC_B;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICC_A_MASK:
+			*en = GPIO_GENERIC_C;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICD_A_MASK:
+			*en = GPIO_GENERIC_D;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICE_A_MASK:
+			*en = GPIO_GENERIC_E;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICF_A_MASK:
+			*en = GPIO_GENERIC_F;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* HPD */
+	case REG(DC_GPIO_HPD_A):
+		*id = GPIO_ID_HPD;
+		switch (mask) {
+		case DC_GPIO_HPD_A__DC_GPIO_HPD1_A_MASK:
+			*en = GPIO_HPD_1;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD2_A_MASK:
+			*en = GPIO_HPD_2;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD3_A_MASK:
+			*en = GPIO_HPD_3;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD4_A_MASK:
+			*en = GPIO_HPD_4;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD5_A_MASK:
+			*en = GPIO_HPD_5;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* REG(DC_GPIO_GENLK_MASK */
+	case REG(DC_GPIO_GENLK_A):
+		*id = GPIO_ID_GSL;
+		switch (mask) {
+		case DC_GPIO_GENLK_A__DC_GPIO_GENLK_CLK_A_MASK:
+			*en = GPIO_GSL_GENLOCK_CLOCK;
+			return true;
+		case DC_GPIO_GENLK_A__DC_GPIO_GENLK_VSYNC_A_MASK:
+			*en = GPIO_GSL_GENLOCK_VSYNC;
+			return true;
+		case DC_GPIO_GENLK_A__DC_GPIO_SWAPLOCK_A_A_MASK:
+			*en = GPIO_GSL_SWAPLOCK_A;
+			return true;
+		case DC_GPIO_GENLK_A__DC_GPIO_SWAPLOCK_B_A_MASK:
+			*en = GPIO_GSL_SWAPLOCK_B;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* DDC */
+	/* we don't care about the GPIO_ID for DDC
+	 * in DdcHandle it will use GPIO_ID_DDC_DATA/GPIO_ID_DDC_CLOCK
+	 * directly in the create method */
+	case REG(DC_GPIO_DDC1_A):
+		*en = GPIO_DDC_LINE_DDC1;
+		return true;
+	case REG(DC_GPIO_DDC2_A):
+		*en = GPIO_DDC_LINE_DDC2;
+		return true;
+	case REG(DC_GPIO_DDC3_A):
+		*en = GPIO_DDC_LINE_DDC3;
+		return true;
+	case REG(DC_GPIO_DDC4_A):
+		*en = GPIO_DDC_LINE_DDC4;
+		return true;
+	case REG(DC_GPIO_DDC5_A):
+		*en = GPIO_DDC_LINE_DDC5;
+		return true;
+	case REG(DC_GPIO_DDCVGA_A):
+		*en = GPIO_DDC_LINE_DDC_VGA;
+		return true;
+	default:
+		ASSERT_CRITICAL(false);
+		return false;
+	}
+}
+
+static bool id_to_offset(
+	enum gpio_id id,
+	uint32_t en,
+	struct gpio_pin_info *info)
+{
+	bool result = true;
+
+	switch (id) {
+	case GPIO_ID_DDC_DATA:
+		info->mask = DC_GPIO_DDC1_A__DC_GPIO_DDC1DATA_A_MASK;
+		switch (en) {
+		case GPIO_DDC_LINE_DDC1:
+			info->offset = REG(DC_GPIO_DDC1_A);
+		break;
+		case GPIO_DDC_LINE_DDC2:
+			info->offset = REG(DC_GPIO_DDC2_A);
+		break;
+		case GPIO_DDC_LINE_DDC3:
+			info->offset = REG(DC_GPIO_DDC3_A);
+		break;
+		case GPIO_DDC_LINE_DDC4:
+			info->offset = REG(DC_GPIO_DDC4_A);
+		break;
+		case GPIO_DDC_LINE_DDC5:
+			info->offset = REG(DC_GPIO_DDC5_A);
+		break;
+		case GPIO_DDC_LINE_DDC_VGA:
+			info->offset = REG(DC_GPIO_DDCVGA_A);
+		break;
+		case GPIO_DDC_LINE_I2C_PAD:
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_DDC_CLOCK:
+		info->mask = DC_GPIO_DDC1_A__DC_GPIO_DDC1CLK_A_MASK;
+		switch (en) {
+		case GPIO_DDC_LINE_DDC1:
+			info->offset = REG(DC_GPIO_DDC1_A);
+		break;
+		case GPIO_DDC_LINE_DDC2:
+			info->offset = REG(DC_GPIO_DDC2_A);
+		break;
+		case GPIO_DDC_LINE_DDC3:
+			info->offset = REG(DC_GPIO_DDC3_A);
+		break;
+		case GPIO_DDC_LINE_DDC4:
+			info->offset = REG(DC_GPIO_DDC4_A);
+		break;
+		case GPIO_DDC_LINE_DDC5:
+			info->offset = REG(DC_GPIO_DDC5_A);
+		break;
+		case GPIO_DDC_LINE_DDC_VGA:
+			info->offset = REG(DC_GPIO_DDCVGA_A);
+		break;
+		case GPIO_DDC_LINE_I2C_PAD:
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_GENERIC:
+		info->offset = REG(DC_GPIO_GENERIC_A);
+		switch (en) {
+		case GPIO_GENERIC_A:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICA_A_MASK;
+		break;
+		case GPIO_GENERIC_B:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICB_A_MASK;
+		break;
+		case GPIO_GENERIC_C:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICC_A_MASK;
+		break;
+		case GPIO_GENERIC_D:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICD_A_MASK;
+		break;
+		case GPIO_GENERIC_E:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICE_A_MASK;
+		break;
+		case GPIO_GENERIC_F:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICF_A_MASK;
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_HPD:
+		info->offset = REG(DC_GPIO_HPD_A);
+		switch (en) {
+		case GPIO_HPD_1:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD1_A_MASK;
+		break;
+		case GPIO_HPD_2:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD2_A_MASK;
+		break;
+		case GPIO_HPD_3:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD3_A_MASK;
+		break;
+		case GPIO_HPD_4:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD4_A_MASK;
+		break;
+		case GPIO_HPD_5:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD5_A_MASK;
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_GSL:
+		switch (en) {
+		case GPIO_GSL_GENLOCK_CLOCK:
+			/* not implemented */
+			ASSERT_CRITICAL(false);
+			result = false;
+		break;
+		case GPIO_GSL_GENLOCK_VSYNC:
+			/* not implemented */
+			ASSERT_CRITICAL(false);
+			result = false;
+		break;
+		case GPIO_GSL_SWAPLOCK_A:
+			/* not implemented */
+			ASSERT_CRITICAL(false);
+			result = false;
+		break;
+		case GPIO_GSL_SWAPLOCK_B:
+			/* not implemented */
+			ASSERT_CRITICAL(false);
+			result = false;
+
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_SYNC:
+	case GPIO_ID_VIP_PAD:
+	default:
+		ASSERT_CRITICAL(false);
+		result = false;
+	}
+
+	if (result) {
+		info->offset_y = info->offset + 2;
+		info->offset_en = info->offset + 1;
+		info->offset_mask = info->offset - 1;
+
+		info->mask_y = info->mask;
+		info->mask_en = info->mask;
+		info->mask_mask = info->mask;
+	}
+
+	return result;
+}
+
+/* function table */
+static const struct hw_translate_funcs funcs = {
+	.offset_to_id = offset_to_id,
+	.id_to_offset = id_to_offset,
+};
+
+/*
+ * dal_hw_translate_dcn32_init
+ *
+ * @brief
+ * Initialize Hw translate function pointers.
+ *
+ * @param
+ * struct hw_translate *tr - [out] struct of function pointers
+ *
+ */
+void dal_hw_translate_dcn32_init(struct hw_translate *tr)
+{
+	tr->funcs = &funcs;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.h b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.h
similarity index 82%
rename from drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.h
rename to drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.h
index 4f053241fe96..af64e104d84c 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.h
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dcn32/hw_translate_dcn32.h
@@ -1,5 +1,5 @@
 /*
- * Copyright 2013-16 Advanced Micro Devices, Inc.
+ * Copyright 2022 Advanced Micro Devices, Inc.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -22,13 +22,12 @@
  * Authors: AMD
  *
  */
-
-#ifndef __DAL_HW_TRANSLATE_DIAG_FPGA_H__
-#define __DAL_HW_TRANSLATE_DIAG_FPGA_H__
+#ifndef __DAL_HW_TRANSLATE_DCN32_H__
+#define __DAL_HW_TRANSLATE_DCN32_H__
 
 struct hw_translate;
 
 /* Initialize Hw translate function pointers */
-void dal_hw_translate_diag_fpga_init(struct hw_translate *tr);
+void dal_hw_translate_dcn32_init(struct hw_translate *tr);
 
-#endif /* __DAL_HW_TRANSLATE_DIAG_FPGA_H__ */
+#endif /* __DAL_HW_TRANSLATE_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.c b/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.c
deleted file mode 100644
index c6e28f6bf1a2..000000000000
--- a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_factory_diag.c
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Copyright 2013-16 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
- */
-
-/*
- * Pre-requisites: headers required by header of this unit
- */
-
-#include "dm_services.h"
-#include "hw_factory_diag.h"
-#include "include/gpio_types.h"
-#include "../hw_factory.h"
-
-/*
- * Header of this unit
- */
-
-#include "../hw_gpio.h"
-#include "../hw_ddc.h"
-#include "../hw_hpd.h"
-#include "../hw_generic.h"
-
-/* function table */
-static const struct hw_factory_funcs funcs = {
-	.init_ddc_data = NULL,
-	.init_generic = NULL,
-	.init_hpd = NULL,
-};
-
-void dal_hw_factory_diag_fpga_init(struct hw_factory *factory)
-{
-	factory->number_of_pins[GPIO_ID_DDC_DATA] = 8;
-	factory->number_of_pins[GPIO_ID_DDC_CLOCK] = 8;
-	factory->number_of_pins[GPIO_ID_GENERIC] = 7;
-	factory->number_of_pins[GPIO_ID_HPD] = 6;
-	factory->number_of_pins[GPIO_ID_GPIO_PAD] = 31;
-	factory->number_of_pins[GPIO_ID_VIP_PAD] = 0;
-	factory->number_of_pins[GPIO_ID_SYNC] = 2;
-	factory->number_of_pins[GPIO_ID_GSL] = 4;
-	factory->funcs = &funcs;
-}
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.c b/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.c
deleted file mode 100644
index e5138a5a8eb5..000000000000
--- a/drivers/gpu/drm/amd/display/dc/gpio/diagnostics/hw_translate_diag.c
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Copyright 2013-16 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
- */
-
-#include "dm_services.h"
-#include "hw_translate_diag.h"
-#include "include/gpio_types.h"
-
-#include "../hw_translate.h"
-
-/* function table */
-static const struct hw_translate_funcs funcs = {
-	.offset_to_id = NULL,
-	.id_to_offset = NULL,
-};
-
-void dal_hw_translate_diag_fpga_init(struct hw_translate *tr)
-{
-	tr->funcs = &funcs;
-}
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c
index ef4f69612097..9756640411b9 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c
@@ -53,23 +53,13 @@
 #include "dcn21/hw_factory_dcn21.h"
 #include "dcn30/hw_factory_dcn30.h"
 #include "dcn315/hw_factory_dcn315.h"
-
-#include "diagnostics/hw_factory_diag.h"
-
-/*
- * This unit
- */
+#include "dcn32/hw_factory_dcn32.h"
 
 bool dal_hw_factory_init(
 	struct hw_factory *factory,
 	enum dce_version dce_version,
 	enum dce_environment dce_environment)
 {
-	if (IS_FPGA_MAXIMUS_DC(dce_environment)) {
-		dal_hw_factory_diag_fpga_init(factory);
-		return true;
-	}
-
 	switch (dce_version) {
 #if defined(CONFIG_DRM_AMD_DC_SI)
 	case DCE_VERSION_6_0:
@@ -118,6 +108,10 @@ bool dal_hw_factory_init(
 	case DCN_VERSION_3_15:
 		dal_hw_factory_dcn315_init(factory);
 		return true;
+	case DCN_VERSION_3_2:
+	case DCN_VERSION_3_21:
+		dal_hw_factory_dcn32_init(factory);
+		return true;
 	default:
 		ASSERT_CRITICAL(false);
 		return false;
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c
index 1db4f1414d7e..82aad7bc0300 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c
@@ -51,8 +51,7 @@
 #include "dcn21/hw_translate_dcn21.h"
 #include "dcn30/hw_translate_dcn30.h"
 #include "dcn315/hw_translate_dcn315.h"
-
-#include "diagnostics/hw_translate_diag.h"
+#include "dcn32/hw_translate_dcn32.h"
 
 /*
  * This unit
@@ -63,11 +62,6 @@ bool dal_hw_translate_init(
 	enum dce_version dce_version,
 	enum dce_environment dce_environment)
 {
-	if (IS_FPGA_MAXIMUS_DC(dce_environment)) {
-		dal_hw_translate_diag_fpga_init(translate);
-		return true;
-	}
-
 	switch (dce_version) {
 #if defined(CONFIG_DRM_AMD_DC_SI)
 	case DCE_VERSION_6_0:
@@ -113,7 +107,10 @@ bool dal_hw_translate_init(
 	case DCN_VERSION_3_15:
 		dal_hw_translate_dcn315_init(translate);
 		return true;
-
+	case DCN_VERSION_3_2:
+	case DCN_VERSION_3_21:
+		dal_hw_translate_dcn32_init(translate);
+		return true;
 	default:
 		BREAK_TO_DEBUGGER();
 		return false;
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
index 5c87057a6abc..3f9d6531c563 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
@@ -34,6 +34,8 @@
 
 #include "ivsrcid/dcn/irqsrcs_dcn_1_0.h"
 
+#define DCN_BASE__INST0_SEG2                       0x000034C0
+
 enum dc_irq_source to_dal_irq_source_dcn32(
 		struct irq_service *irq_service,
 		uint32_t src_id,
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 08/43] drm/amd/display: DML changes for DCN32/321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (5 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 07/43] drm/amd/display: add GPIO changes for DCN32/321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 09/43] drm/amd/display: add CLKMGR " Alex Deucher
                   ` (34 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

DML (Display Mode Library) is required for display configuration
modelling, covering tasks such as bandwidth management and display mode
validation.

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dml/Makefile   |    7 +
 .../dc/dml/dcn30/display_mode_vba_30.c        |    8 +-
 .../dc/dml/dcn31/display_mode_vba_31.c        |    2 +-
 .../dc/dml/dcn32/display_mode_vba_32.c        | 3835 ++++++++++
 .../dc/dml/dcn32/display_mode_vba_32.h        |   57 +
 .../dc/dml/dcn32/display_mode_vba_util_32.c   | 6254 +++++++++++++++++
 .../dc/dml/dcn32/display_mode_vba_util_32.h   | 1175 ++++
 .../dc/dml/dcn32/display_rq_dlg_calc_32.c     |  616 ++
 .../dc/dml/dcn32/display_rq_dlg_calc_32.h     |   70 +
 .../amd/display/dc/dml/display_mode_enums.h   |   88 +-
 .../drm/amd/display/dc/dml/display_mode_lib.c |   12 +
 .../drm/amd/display/dc/dml/display_mode_lib.h |   15 +
 .../amd/display/dc/dml/display_mode_structs.h |  132 +
 .../drm/amd/display/dc/dml/display_mode_vba.c |  171 +
 .../drm/amd/display/dc/dml/display_mode_vba.h |  242 +-
 .../gpu/drm/amd/display/dc/dml/dml_wrapper.c  |   71 +-
 16 files changed, 12710 insertions(+), 45 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.h
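
For reviewers unfamiliar with DML, here is a minimal illustrative sketch
(not part of the patch) of how the two top-level DCN32 entry points added
below fit together. dcn32_dml_example is a hypothetical caller; the sketch
assumes the prototypes exported via display_mode_vba_32.h and a mode_lib
that has already been populated with the soc/ip parameters and the active
pipe configuration:

static void dcn32_dml_example(struct display_mode_lib *mode_lib)
{
	/* Sweep the voltage states and MPC combine options to find a
	 * supported configuration (mode validation). */
	dml32_ModeSupportAndSystemConfigurationFull(mode_lib);

	/* Recompute DET sizes, clocks, watermarks and the prefetch
	 * schedule for the selected state (bandwidth management). */
	dml32_recalculate(mode_lib);
}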

diff --git a/drivers/gpu/drm/amd/display/dc/dml/Makefile b/drivers/gpu/drm/amd/display/dc/dml/Makefile
index a64b88ca01a9..c48688cdd7f7 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/dml/Makefile
@@ -72,6 +72,9 @@ CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/display_rq_dlg_calc_30.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/display_mode_vba_31.o := $(dml_ccflags) $(frame_warn_flag)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/display_rq_dlg_calc_31.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn30/dcn30_fpu.o := $(dml_ccflags)
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_ccflags) $(frame_warn_flag)
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_ccflags)
+CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/dcn31_fpu.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_ccflags)
 CFLAGS_$(AMDDALPATH)/dc/dml/dcn302/dcn302_fpu.o := $(dml_ccflags)
@@ -93,6 +96,9 @@ CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn30/display_mode_vba_30.o := $(dml_rcflags)
 CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn30/display_rq_dlg_calc_30.o := $(dml_rcflags)
 CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn31/display_mode_vba_31.o := $(dml_rcflags)
 CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn31/display_rq_dlg_calc_31.o := $(dml_rcflags)
+CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_rcflags)
+CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_rcflags)
+CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_rcflags)
 CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_rcflags)
 CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/display_mode_lib.o := $(dml_rcflags)
 CFLAGS_REMOVE_$(AMDDALPATH)/dc/dml/dsc/rc_calc_fpu.o  := $(dml_rcflags)
@@ -116,6 +122,7 @@ DML += dcn20/display_rq_dlg_calc_20v2.o dcn20/display_mode_vba_20v2.o
 DML += dcn21/display_rq_dlg_calc_21.o dcn21/display_mode_vba_21.o
 DML += dcn30/dcn30_fpu.o dcn30/display_mode_vba_30.o dcn30/display_rq_dlg_calc_30.o
 DML += dcn31/display_mode_vba_31.o dcn31/display_rq_dlg_calc_31.o
+DML += dcn32/display_mode_vba_32.o dcn32/display_rq_dlg_calc_32.o dcn32/display_mode_vba_util_32.o
 DML += dcn31/dcn31_fpu.o
 DML += dcn301/dcn301_fpu.o
 DML += dcn302/dcn302_fpu.o
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
index f47d82da115c..fb4aa4c800bf 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
@@ -438,8 +438,8 @@ static void UseMinimumDCFCLK(
 		int dpte_group_bytes[],
 		double PrefetchLinesY[][2][DC__NUM_DPP__MAX],
 		double PrefetchLinesC[][2][DC__NUM_DPP__MAX],
-		int swath_width_luma_ub_all_states[][2][DC__NUM_DPP__MAX],
-		int swath_width_chroma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_luma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_chroma_ub_all_states[][2][DC__NUM_DPP__MAX],
 		int BytePerPixelY[],
 		int BytePerPixelC[],
 		int HTotal[],
@@ -6696,8 +6696,8 @@ static void UseMinimumDCFCLK(
 		int dpte_group_bytes[],
 		double PrefetchLinesY[][2][DC__NUM_DPP__MAX],
 		double PrefetchLinesC[][2][DC__NUM_DPP__MAX],
-		int swath_width_luma_ub_all_states[][2][DC__NUM_DPP__MAX],
-		int swath_width_chroma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_luma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_chroma_ub_all_states[][2][DC__NUM_DPP__MAX],
 		int BytePerPixelY[],
 		int BytePerPixelC[],
 		int HTotal[],
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
index e4b9fd31223c..448fbbcdf88a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
@@ -2996,7 +2996,7 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
 				v->ImmediateFlipSupported)) ? true : false;
 #ifdef __DML_VBA_DEBUG__
 		dml_print("DML::%s: PrefetchModeSupported %d\n", __func__, v->PrefetchModeSupported);
-		dml_print("DML::%s: ImmediateFlipRequirement %d\n", __func__, v->ImmediateFlipRequirement == dm_immediate_flip_required);
+		dml_print("DML::%s: ImmediateFlipRequirement[0] %d\n", __func__, v->ImmediateFlipRequirement[0] == dm_immediate_flip_required);
 		dml_print("DML::%s: ImmediateFlipSupported %d\n", __func__, v->ImmediateFlipSupported);
 		dml_print("DML::%s: ImmediateFlipSupport %d\n", __func__, v->ImmediateFlipSupport);
 		dml_print("DML::%s: HostVMEnable %d\n", __func__, v->HostVMEnable);
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
new file mode 100644
index 000000000000..b77a1ae792d1
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
@@ -0,0 +1,3835 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dc.h"
+#include "dc_link.h"
+#include "../display_mode_lib.h"
+#include "display_mode_vba_32.h"
+#include "../dml_inline_defs.h"
+#include "display_mode_vba_util_32.h"
+
+static const unsigned int NumberOfStates = DC__VOLTAGE_STATES;
+
+void dml32_recalculate(struct display_mode_lib *mode_lib);
+static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation(
+		struct display_mode_lib *mode_lib);
+void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_lib);
+
+void dml32_recalculate(struct display_mode_lib *mode_lib)
+{
+	ModeSupportAndSystemConfiguration(mode_lib);
+
+	dml32_CalculateMaxDETAndMinCompressedBufferSize(mode_lib->vba.ConfigReturnBufferSizeInKByte,
+			mode_lib->vba.ROBBufferSizeInKByte,
+			DC__NUM_DPP,
+			false, //mode_lib->vba.override_setting.nomDETInKByteOverrideEnable,
+			0, //mode_lib->vba.override_setting.nomDETInKByteOverrideValue,
+
+			/* Output */
+			&mode_lib->vba.MaxTotalDETInKByte, &mode_lib->vba.nomDETInKByte,
+			&mode_lib->vba.MinCompressedBufferSizeInKByte);
+
+	PixelClockAdjustmentForProgressiveToInterlaceUnit(mode_lib);
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: Calling DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation\n", __func__);
+#endif
+	DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation(mode_lib);
+}
+
+static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation(
+		struct display_mode_lib *mode_lib)
+{
+	struct vba_vars_st *v = &mode_lib->vba;
+	unsigned int j, k;
+	bool ImmediateFlipRequirementFinal;
+	int iteration;
+	double MaxTotalRDBandwidth;
+	unsigned int NextPrefetchMode;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: --- START ---\n", __func__);
+	dml_print("DML::%s: mode_lib->vba.PrefetchMode = %d\n", __func__, mode_lib->vba.PrefetchMode);
+	dml_print("DML::%s: mode_lib->vba.ImmediateFlipSupport = %d\n", __func__, mode_lib->vba.ImmediateFlipSupport);
+	dml_print("DML::%s: mode_lib->vba.VoltageLevel = %d\n", __func__, mode_lib->vba.VoltageLevel);
+#endif
+
+	v->WritebackDISPCLK = 0.0;
+	v->GlobalDPPCLK = 0.0;
+
+	// DISPCLK and DPPCLK Calculation
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.WritebackEnable[k]) {
+			v->WritebackDISPCLK = dml_max(v->WritebackDISPCLK,
+					dml32_CalculateWriteBackDISPCLK(
+							mode_lib->vba.WritebackPixelFormat[k],
+							mode_lib->vba.PixelClock[k], mode_lib->vba.WritebackHRatio[k],
+							mode_lib->vba.WritebackVRatio[k],
+							mode_lib->vba.WritebackHTaps[k],
+							mode_lib->vba.WritebackVTaps[k],
+							mode_lib->vba.WritebackSourceWidth[k],
+							mode_lib->vba.WritebackDestinationWidth[k],
+							mode_lib->vba.HTotal[k], mode_lib->vba.WritebackLineBufferSize,
+							mode_lib->vba.DISPCLKDPPCLKVCOSpeed));
+		}
+	}
+
+	v->DISPCLK_calculated = v->WritebackDISPCLK;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.BlendingAndTiming[k] == k) {
+			v->DISPCLK_calculated = dml_max(v->DISPCLK_calculated,
+					dml32_CalculateRequiredDispclk(
+							mode_lib->vba.ODMCombineEnabled[k],
+							mode_lib->vba.PixelClock[k],
+							mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading,
+							mode_lib->vba.DISPCLKRampingMargin,
+							mode_lib->vba.DISPCLKDPPCLKVCOSpeed,
+							mode_lib->vba.MaxDppclk[v->soc.num_states - 1]));
+		}
+	}
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		dml32_CalculateSinglePipeDPPCLKAndSCLThroughput(mode_lib->vba.HRatio[k],
+				mode_lib->vba.HRatioChroma[k],
+				mode_lib->vba.VRatio[k],
+				mode_lib->vba.VRatioChroma[k],
+				mode_lib->vba.MaxDCHUBToPSCLThroughput,
+				mode_lib->vba.MaxPSCLToLBThroughput,
+				mode_lib->vba.PixelClock[k],
+				mode_lib->vba.SourcePixelFormat[k],
+				mode_lib->vba.htaps[k],
+				mode_lib->vba.HTAPsChroma[k],
+				mode_lib->vba.vtaps[k],
+				mode_lib->vba.VTAPsChroma[k],
+
+				/* Output */
+				&v->PSCL_THROUGHPUT_LUMA[k], &v->PSCL_THROUGHPUT_CHROMA[k],
+				&v->DPPCLKUsingSingleDPP[k]);
+	}
+
+	dml32_CalculateDPPCLK(mode_lib->vba.NumberOfActiveSurfaces, mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading,
+			mode_lib->vba.DISPCLKDPPCLKVCOSpeed, v->DPPCLKUsingSingleDPP, mode_lib->vba.DPPPerPlane,
+			/* Output */
+			&v->GlobalDPPCLK, v->DPPCLK);
+
+	for (k = 0; k < v->NumberOfActiveSurfaces; ++k) {
+		v->DPPCLK_calculated[k] = v->DPPCLK[k];
+	}
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		dml32_CalculateBytePerPixelAndBlockSizes(
+				mode_lib->vba.SourcePixelFormat[k],
+				mode_lib->vba.SurfaceTiling[k],
+
+				/* Output */
+				&v->BytePerPixelY[k],
+				&v->BytePerPixelC[k],
+				&v->BytePerPixelDETY[k],
+				&v->BytePerPixelDETC[k],
+				&v->BlockHeight256BytesY[k],
+				&v->BlockHeight256BytesC[k],
+				&v->BlockWidth256BytesY[k],
+				&v->BlockWidth256BytesC[k],
+				&v->BlockHeightY[k],
+				&v->BlockHeightC[k],
+				&v->BlockWidthY[k],
+				&v->BlockWidthC[k]);
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: %d\n", __func__, __LINE__);
+#endif
+	dml32_CalculateSwathWidth(
+			false,  // ForceSingleDPP
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.SourcePixelFormat,
+			mode_lib->vba.SourceRotation,
+			mode_lib->vba.ViewportStationary,
+			mode_lib->vba.ViewportWidth,
+			mode_lib->vba.ViewportHeight,
+			mode_lib->vba.ViewportXStartY,
+			mode_lib->vba.ViewportYStartY,
+			mode_lib->vba.ViewportXStartC,
+			mode_lib->vba.ViewportYStartC,
+			mode_lib->vba.SurfaceWidthY,
+			mode_lib->vba.SurfaceWidthC,
+			mode_lib->vba.SurfaceHeightY,
+			mode_lib->vba.SurfaceHeightC,
+			mode_lib->vba.ODMCombineEnabled,
+			v->BytePerPixelY,
+			v->BytePerPixelC,
+			v->BlockHeight256BytesY,
+			v->BlockHeight256BytesC,
+			v->BlockWidth256BytesY,
+			v->BlockWidth256BytesC,
+			mode_lib->vba.BlendingAndTiming,
+			mode_lib->vba.HActive,
+			mode_lib->vba.HRatio,
+			mode_lib->vba.DPPPerPlane,
+
+			/* Output */
+			v->SwathWidthSingleDPPY, v->SwathWidthSingleDPPC, v->SwathWidthY, v->SwathWidthC,
+			v->dummy_vars
+				.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+				.dummy_integer_array[0], // Integer             MaximumSwathHeightY[]
+			v->dummy_vars
+				.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+				.dummy_integer_array[1], // Integer             MaximumSwathHeightC[]
+			v->swath_width_luma_ub, v->swath_width_chroma_ub);
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		v->ReadBandwidthSurfaceLuma[k] = v->SwathWidthSingleDPPY[k] * v->BytePerPixelY[k]
+				/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]) * mode_lib->vba.VRatio[k];
+		v->ReadBandwidthSurfaceChroma[k] = v->SwathWidthSingleDPPC[k] * v->BytePerPixelC[k]
+				/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k])
+				* mode_lib->vba.VRatioChroma[k];
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: ReadBandwidthSurfaceLuma[%i] = %fBps\n",
+				__func__, k, v->ReadBandwidthSurfaceLuma[k]);
+		dml_print("DML::%s: ReadBandwidthSurfaceChroma[%i] = %fBps\n",
+				__func__, k, v->ReadBandwidthSurfaceChroma[k]);
+#endif
+	}
+
+	{
+		// VBA_DELTA
+		// Calculate DET size, swath height
+		dml32_CalculateSwathAndDETConfiguration(
+				mode_lib->vba.DETSizeOverride,
+				mode_lib->vba.UsesMALLForPStateChange,
+				mode_lib->vba.ConfigReturnBufferSizeInKByte,
+				mode_lib->vba.MaxTotalDETInKByte,
+				mode_lib->vba.MinCompressedBufferSizeInKByte,
+				false, /* ForceSingleDPP */
+				mode_lib->vba.NumberOfActiveSurfaces,
+				mode_lib->vba.nomDETInKByte,
+				mode_lib->vba.UseUnboundedRequesting,
+				mode_lib->vba.CompressedBufferSegmentSizeInkByteFinal,
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_output_encoder_array, /* output_encoder_class Output[] */
+				v->ReadBandwidthSurfaceLuma,
+				v->ReadBandwidthSurfaceChroma,
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_single_array[0], /* Single MaximumSwathWidthLuma[] */
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_single_array[1], /* Single MaximumSwathWidthChroma[] */
+				mode_lib->vba.SourceRotation,
+				mode_lib->vba.ViewportStationary,
+				mode_lib->vba.SourcePixelFormat,
+				mode_lib->vba.SurfaceTiling,
+				mode_lib->vba.ViewportWidth,
+				mode_lib->vba.ViewportHeight,
+				mode_lib->vba.ViewportXStartY,
+				mode_lib->vba.ViewportYStartY,
+				mode_lib->vba.ViewportXStartC,
+				mode_lib->vba.ViewportYStartC,
+				mode_lib->vba.SurfaceWidthY,
+				mode_lib->vba.SurfaceWidthC,
+				mode_lib->vba.SurfaceHeightY,
+				mode_lib->vba.SurfaceHeightC,
+				v->BlockHeight256BytesY,
+				v->BlockHeight256BytesC,
+				v->BlockWidth256BytesY,
+				v->BlockWidth256BytesC,
+				mode_lib->vba.ODMCombineEnabled,
+				mode_lib->vba.BlendingAndTiming,
+				v->BytePerPixelY,
+				v->BytePerPixelC,
+				v->BytePerPixelDETY,
+				v->BytePerPixelDETC,
+				mode_lib->vba.HActive,
+				mode_lib->vba.HRatio,
+				mode_lib->vba.HRatioChroma,
+				mode_lib->vba.DPPPerPlane,
+
+				/* Output */
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_long_array[0], /* Long swath_width_luma_ub[] */
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_long_array[1], /* Long swath_width_chroma_ub[] */
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_double_array[0], /* Long SwathWidth[] */
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_double_array[1], /* Long SwathWidthChroma[] */
+				mode_lib->vba.SwathHeightY,
+				mode_lib->vba.SwathHeightC,
+				mode_lib->vba.DETBufferSizeInKByte,
+				mode_lib->vba.DETBufferSizeY,
+				mode_lib->vba.DETBufferSizeC,
+				&v->UnboundedRequestEnabled,
+				&v->CompressedBufferSizeInkByte,
+				v->dummy_vars
+					.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					.dummy_boolean_array, /* bool ViewportSizeSupportPerSurface[] */
+				&v->dummy_vars
+					 .DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+					 .dummy_boolean); /* bool *ViewportSizeSupport */
+	}
+
+	// DCFCLK Deep Sleep
+	dml32_CalculateDCFCLKDeepSleep(
+			mode_lib->vba.NumberOfActiveSurfaces,
+			v->BytePerPixelY,
+			v->BytePerPixelC,
+			mode_lib->vba.VRatio,
+			mode_lib->vba.VRatioChroma,
+			v->SwathWidthY,
+			v->SwathWidthC,
+			mode_lib->vba.DPPPerPlane,
+			mode_lib->vba.HRatio,
+			mode_lib->vba.HRatioChroma,
+			mode_lib->vba.PixelClock,
+			v->PSCL_THROUGHPUT_LUMA,
+			v->PSCL_THROUGHPUT_CHROMA,
+			mode_lib->vba.DPPCLK,
+			v->ReadBandwidthSurfaceLuma,
+			v->ReadBandwidthSurfaceChroma,
+			mode_lib->vba.ReturnBusWidth,
+
+			/* Output */
+			&v->DCFCLKDeepSleep);
+
+	// DSCCLK
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if ((mode_lib->vba.BlendingAndTiming[k] != k) || !mode_lib->vba.DSCEnabled[k]) {
+			v->DSCCLK_calculated[k] = 0.0;
+		} else {
+			if (mode_lib->vba.OutputFormat[k] == dm_420)
+				mode_lib->vba.DSCFormatFactor = 2;
+			else if (mode_lib->vba.OutputFormat[k] == dm_444)
+				mode_lib->vba.DSCFormatFactor = 1;
+			else if (mode_lib->vba.OutputFormat[k] == dm_n422)
+				mode_lib->vba.DSCFormatFactor = 2;
+			else
+				mode_lib->vba.DSCFormatFactor = 1;
+			if (mode_lib->vba.ODMCombineEnabled[k] == dm_odm_combine_mode_4to1)
+				v->DSCCLK_calculated[k] = mode_lib->vba.PixelClockBackEnd[k] / 12
+						/ mode_lib->vba.DSCFormatFactor
+						/ (1 - mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100);
+			else if (mode_lib->vba.ODMCombineEnabled[k] == dm_odm_combine_mode_2to1)
+				v->DSCCLK_calculated[k] = mode_lib->vba.PixelClockBackEnd[k] / 6
+						/ mode_lib->vba.DSCFormatFactor
+						/ (1 - mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100);
+			else
+				v->DSCCLK_calculated[k] = mode_lib->vba.PixelClockBackEnd[k] / 3
+						/ mode_lib->vba.DSCFormatFactor
+						/ (1 - mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100);
+		}
+	}
+
+	// DSC Delay
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		v->DSCDelay[k] = dml32_DSCDelayRequirement(mode_lib->vba.DSCEnabled[k],
+				mode_lib->vba.ODMCombineEnabled[k], mode_lib->vba.DSCInputBitPerComponent[k],
+				mode_lib->vba.OutputBpp[k], mode_lib->vba.HActive[k], mode_lib->vba.HTotal[k],
+				mode_lib->vba.NumberOfDSCSlices[k], mode_lib->vba.OutputFormat[k],
+				mode_lib->vba.Output[k], mode_lib->vba.PixelClock[k],
+				mode_lib->vba.PixelClockBackEnd[k]);
+	}
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+		for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) // NumberOfSurfaces
+			if (j != k && mode_lib->vba.BlendingAndTiming[k] == j && mode_lib->vba.DSCEnabled[j])
+				v->DSCDelay[k] = v->DSCDelay[j];
+
+	//Immediate Flip
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		v->ImmediateFlipSupportedSurface[k] = mode_lib->vba.ImmediateFlipSupport
+				&& (mode_lib->vba.ImmediateFlipRequirement[k] != dm_immediate_flip_not_required);
+	}
+
+	// Prefetch
+	dml32_CalculateSurfaceSizeInMall(
+				mode_lib->vba.NumberOfActiveSurfaces,
+				mode_lib->vba.MALLAllocatedForDCNFinal,
+				mode_lib->vba.UseMALLForStaticScreen,
+				mode_lib->vba.DCCEnable,
+				mode_lib->vba.ViewportStationary,
+				mode_lib->vba.ViewportXStartY,
+				mode_lib->vba.ViewportYStartY,
+				mode_lib->vba.ViewportXStartC,
+				mode_lib->vba.ViewportYStartC,
+				mode_lib->vba.ViewportWidth,
+				mode_lib->vba.ViewportHeight,
+				v->BytePerPixelY,
+				mode_lib->vba.ViewportWidthChroma,
+				mode_lib->vba.ViewportHeightChroma,
+				v->BytePerPixelC,
+				mode_lib->vba.SurfaceWidthY,
+				mode_lib->vba.SurfaceWidthC,
+				mode_lib->vba.SurfaceHeightY,
+				mode_lib->vba.SurfaceHeightC,
+				v->BlockWidth256BytesY,
+				v->BlockWidth256BytesC,
+				v->BlockHeight256BytesY,
+				v->BlockHeight256BytesC,
+				v->BlockWidthY,
+				v->BlockWidthC,
+				v->BlockHeightY,
+				v->BlockHeightC,
+
+				/* Output */
+				v->SurfaceSizeInMALL,
+				&v->dummy_vars.
+				DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+				.dummy_boolean2); /* Boolean *ExceededMALLSize */
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].PixelClock = mode_lib->vba.PixelClock[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].DPPPerSurface = mode_lib->vba.DPPPerPlane[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].SourceRotation = mode_lib->vba.SourceRotation[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportHeight = mode_lib->vba.ViewportHeight[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportHeightChroma = mode_lib->vba.ViewportHeightChroma[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockWidth256BytesY = v->BlockWidth256BytesY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockHeight256BytesY = v->BlockHeight256BytesY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockWidth256BytesC = v->BlockWidth256BytesC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockHeight256BytesC = v->BlockHeight256BytesC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockWidthY = v->BlockWidthY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockHeightY = v->BlockHeightY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockWidthC = v->BlockWidthC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BlockHeightC = v->BlockHeightC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].InterlaceEnable = mode_lib->vba.Interlace[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].HTotal = mode_lib->vba.HTotal[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].DCCEnable = mode_lib->vba.DCCEnable[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].SourcePixelFormat = mode_lib->vba.SourcePixelFormat[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].SurfaceTiling = mode_lib->vba.SurfaceTiling[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BytePerPixelY = v->BytePerPixelY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].BytePerPixelC = v->BytePerPixelC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ProgressiveToInterlaceUnitInOPP = mode_lib->vba.ProgressiveToInterlaceUnitInOPP;
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].VRatio = mode_lib->vba.VRatio[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].VRatioChroma = mode_lib->vba.VRatioChroma[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].VTaps = mode_lib->vba.vtaps[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].VTapsChroma = mode_lib->vba.VTAPsChroma[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].PitchY = mode_lib->vba.PitchY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].DCCMetaPitchY = mode_lib->vba.DCCMetaPitchY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].PitchC = mode_lib->vba.PitchC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].DCCMetaPitchC = mode_lib->vba.DCCMetaPitchC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportStationary = mode_lib->vba.ViewportStationary[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportXStart = mode_lib->vba.ViewportXStartY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportYStart = mode_lib->vba.ViewportYStartY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportXStartC = mode_lib->vba.ViewportXStartC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].ViewportYStartC = mode_lib->vba.ViewportYStartC[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].FORCE_ONE_ROW_FOR_FRAME = mode_lib->vba.ForceOneRowForFrame[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].SwathHeightY = mode_lib->vba.SwathHeightY[k];
+		v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters[k].SwathHeightC = mode_lib->vba.SwathHeightC[k];
+	}
+
+	{
+
+		dml32_CalculateVMRowAndSwath(
+				mode_lib->vba.NumberOfActiveSurfaces,
+				v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.SurfaceParameters,
+				v->SurfaceSizeInMALL,
+				mode_lib->vba.PTEBufferSizeInRequestsLuma,
+				mode_lib->vba.PTEBufferSizeInRequestsChroma,
+				mode_lib->vba.DCCMetaBufferSizeBytes,
+				mode_lib->vba.UseMALLForStaticScreen,
+				mode_lib->vba.UsesMALLForPStateChange,
+				mode_lib->vba.MALLAllocatedForDCNFinal,
+				v->SwathWidthY,
+				v->SwathWidthC,
+				mode_lib->vba.GPUVMEnable,
+				mode_lib->vba.HostVMEnable,
+				mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+				mode_lib->vba.GPUVMMaxPageTableLevels,
+				mode_lib->vba.GPUVMMinPageSizeKBytes,
+				mode_lib->vba.HostVMMinPageSize,
+
+				/* Output */
+				v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.dummy_boolean_array2[0],  // Boolean PTEBufferSizeNotExceeded[]
+				v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.dummy_boolean_array2[1],  // Boolean DCCMetaBufferSizeNotExceeded[]
+				v->dpte_row_width_luma_ub,
+				v->dpte_row_width_chroma_ub,
+				v->dpte_row_height,
+				v->dpte_row_height_chroma,
+				v->dpte_row_height_linear,
+				v->dpte_row_height_linear_chroma,
+				v->meta_req_width,
+				v->meta_req_width_chroma,
+				v->meta_req_height,
+				v->meta_req_height_chroma,
+				v->meta_row_width,
+				v->meta_row_width_chroma,
+				v->meta_row_height,
+				v->meta_row_height_chroma,
+				v->vm_group_bytes,
+				v->dpte_group_bytes,
+				v->PixelPTEReqWidthY,
+				v->PixelPTEReqHeightY,
+				v->PTERequestSizeY,
+				v->PixelPTEReqWidthC,
+				v->PixelPTEReqHeightC,
+				v->PTERequestSizeC,
+				v->dpde0_bytes_per_frame_ub_l,
+				v->meta_pte_bytes_per_frame_ub_l,
+				v->dpde0_bytes_per_frame_ub_c,
+				v->meta_pte_bytes_per_frame_ub_c,
+				v->PrefetchSourceLinesY,
+				v->PrefetchSourceLinesC,
+				v->VInitPreFillY, v->VInitPreFillC,
+				v->MaxNumSwathY,
+				v->MaxNumSwathC,
+				v->meta_row_bw,
+				v->dpte_row_bw,
+				v->PixelPTEBytesPerRow,
+				v->PDEAndMetaPTEBytesFrame,
+				v->MetaRowByte,
+				v->Use_One_Row_For_Frame,
+				v->Use_One_Row_For_Frame_Flip,
+				v->UsesMALLForStaticScreen,
+				v->PTE_BUFFER_MODE,
+				v->BIGK_FRAGMENT_SIZE);
+	}
+
+
+	v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.ReorderBytes = mode_lib->vba.NumberOfChannels
+			* dml_max3(mode_lib->vba.UrgentOutOfOrderReturnPerChannelPixelDataOnly,
+					mode_lib->vba.UrgentOutOfOrderReturnPerChannelPixelMixedWithVMData,
+					mode_lib->vba.UrgentOutOfOrderReturnPerChannelVMDataOnly);
+
+	v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.VMDataOnlyReturnBW = dml32_get_return_bw_mbps_vm_only(
+			&mode_lib->vba.soc,
+			mode_lib->vba.VoltageLevel,
+			mode_lib->vba.DCFCLK,
+			mode_lib->vba.FabricClock,
+			mode_lib->vba.DRAMSpeed);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: mode_lib->vba.ReturnBusWidth = %f\n", __func__, mode_lib->vba.ReturnBusWidth);
+	dml_print("DML::%s: mode_lib->vba.DCFCLK = %f\n", __func__, mode_lib->vba.DCFCLK);
+	dml_print("DML::%s: mode_lib->vba.FabricClock = %f\n", __func__, mode_lib->vba.FabricClock);
+	dml_print("DML::%s: mode_lib->vba.FabricDatapathToDCNDataReturn = %f\n", __func__,
+			mode_lib->vba.FabricDatapathToDCNDataReturn);
+	dml_print("DML::%s: mode_lib->vba.PercentOfIdealSDPPortBWReceivedAfterUrgLatency = %f\n",
+			__func__, mode_lib->vba.PercentOfIdealSDPPortBWReceivedAfterUrgLatency);
+	dml_print("DML::%s: mode_lib->vba.DRAMSpeed = %f\n", __func__, mode_lib->vba.DRAMSpeed);
+	dml_print("DML::%s: mode_lib->vba.NumberOfChannels = %f\n", __func__, mode_lib->vba.NumberOfChannels);
+	dml_print("DML::%s: mode_lib->vba.DRAMChannelWidth = %f\n", __func__, mode_lib->vba.DRAMChannelWidth);
+	dml_print("DML::%s: mode_lib->vba.PercentOfIdealDRAMBWReceivedAfterUrgLatencyVMDataOnly = %f\n",
+			__func__, mode_lib->vba.PercentOfIdealDRAMBWReceivedAfterUrgLatencyVMDataOnly);
+	dml_print("DML::%s: VMDataOnlyReturnBW = %f\n", __func__,
+			v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.VMDataOnlyReturnBW);
+	dml_print("DML::%s: ReturnBW = %f\n", __func__, mode_lib->vba.ReturnBW);
+#endif
+
+	v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.HostVMInefficiencyFactor = 1.0;
+
+	if (mode_lib->vba.GPUVMEnable && mode_lib->vba.HostVMEnable)
+		v->dummy_vars
+			.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+			.HostVMInefficiencyFactor =
+			mode_lib->vba.ReturnBW / v->dummy_vars
+				.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+				.VMDataOnlyReturnBW;
+
+	mode_lib->vba.TotalDCCActiveDPP = 0;
+	mode_lib->vba.TotalActiveDPP = 0;
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		mode_lib->vba.TotalActiveDPP = mode_lib->vba.TotalActiveDPP + mode_lib->vba.DPPPerPlane[k];
+		if (mode_lib->vba.DCCEnable[k])
+			mode_lib->vba.TotalDCCActiveDPP = mode_lib->vba.TotalDCCActiveDPP
+					+ mode_lib->vba.DPPPerPlane[k];
+	}
+
+	v->UrgentExtraLatency = dml32_CalculateExtraLatency(
+			mode_lib->vba.RoundTripPingLatencyCycles,
+			v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.ReorderBytes,
+			mode_lib->vba.DCFCLK,
+			mode_lib->vba.TotalActiveDPP,
+			mode_lib->vba.PixelChunkSizeInKByte,
+			mode_lib->vba.TotalDCCActiveDPP,
+			mode_lib->vba.MetaChunkSize,
+			mode_lib->vba.ReturnBW,
+			mode_lib->vba.GPUVMEnable,
+			mode_lib->vba.HostVMEnable,
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.DPPPerPlane,
+			v->dpte_group_bytes,
+			v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.HostVMInefficiencyFactor,
+			mode_lib->vba.HostVMMinPageSize,
+			mode_lib->vba.HostVMMaxNonCachedPageTableLevels);
+
+	mode_lib->vba.TCalc = 24.0 / v->DCFCLKDeepSleep;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.BlendingAndTiming[k] == k) {
+			if (mode_lib->vba.WritebackEnable[k] == true) {
+				v->WritebackDelay[mode_lib->vba.VoltageLevel][k] = mode_lib->vba.WritebackLatency
+						+ dml32_CalculateWriteBackDelay(
+								mode_lib->vba.WritebackPixelFormat[k],
+								mode_lib->vba.WritebackHRatio[k],
+								mode_lib->vba.WritebackVRatio[k],
+								mode_lib->vba.WritebackVTaps[k],
+								mode_lib->vba.WritebackDestinationWidth[k],
+								mode_lib->vba.WritebackDestinationHeight[k],
+								mode_lib->vba.WritebackSourceHeight[k],
+								mode_lib->vba.HTotal[k]) / mode_lib->vba.DISPCLK;
+			} else
+				v->WritebackDelay[mode_lib->vba.VoltageLevel][k] = 0;
+			for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) {
+				if (mode_lib->vba.BlendingAndTiming[j] == k &&
+					mode_lib->vba.WritebackEnable[j] == true) {
+					v->WritebackDelay[mode_lib->vba.VoltageLevel][k] =
+						dml_max(v->WritebackDelay[mode_lib->vba.VoltageLevel][k],
+						mode_lib->vba.WritebackLatency +
+						dml32_CalculateWriteBackDelay(
+								mode_lib->vba.WritebackPixelFormat[j],
+								mode_lib->vba.WritebackHRatio[j],
+								mode_lib->vba.WritebackVRatio[j],
+								mode_lib->vba.WritebackVTaps[j],
+								mode_lib->vba.WritebackDestinationWidth[j],
+								mode_lib->vba.WritebackDestinationHeight[j],
+								mode_lib->vba.WritebackSourceHeight[j],
+								mode_lib->vba.HTotal[k]) / mode_lib->vba.DISPCLK);
+				}
+			}
+		}
+	}
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+		for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j)
+			if (mode_lib->vba.BlendingAndTiming[k] == j)
+				v->WritebackDelay[mode_lib->vba.VoltageLevel][k] =
+						v->WritebackDelay[mode_lib->vba.VoltageLevel][j];
+
+	v->UrgentLatency = dml32_CalculateUrgentLatency(mode_lib->vba.UrgentLatencyPixelDataOnly,
+			mode_lib->vba.UrgentLatencyPixelMixedWithVMData,
+			mode_lib->vba.UrgentLatencyVMDataOnly,
+			mode_lib->vba.DoUrgentLatencyAdjustment,
+			mode_lib->vba.UrgentLatencyAdjustmentFabricClockComponent,
+			mode_lib->vba.UrgentLatencyAdjustmentFabricClockReference,
+			mode_lib->vba.FabricClock);
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		dml32_CalculateUrgentBurstFactor(mode_lib->vba.UsesMALLForPStateChange[k],
+				v->swath_width_luma_ub[k],
+				v->swath_width_chroma_ub[k],
+				mode_lib->vba.SwathHeightY[k],
+				mode_lib->vba.SwathHeightC[k],
+				mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k],
+				v->UrgentLatency,
+				mode_lib->vba.CursorBufferSize,
+				mode_lib->vba.CursorWidth[k][0],
+				mode_lib->vba.CursorBPP[k][0],
+				mode_lib->vba.VRatio[k],
+				mode_lib->vba.VRatioChroma[k],
+				v->BytePerPixelDETY[k],
+				v->BytePerPixelDETC[k],
+				mode_lib->vba.DETBufferSizeY[k],
+				mode_lib->vba.DETBufferSizeC[k],
+
+				/* output */
+				&v->UrgBurstFactorCursor[k],
+				&v->UrgBurstFactorLuma[k],
+				&v->UrgBurstFactorChroma[k],
+				&v->NoUrgentLatencyHiding[k]);
+
+		v->cursor_bw[k] = mode_lib->vba.NumberOfCursors[k] * mode_lib->vba.CursorWidth[k][0] * mode_lib->vba.CursorBPP[k][0] / 8 / (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]) * mode_lib->vba.VRatio[k];
+	}
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		v->MaxVStartupLines[k] = ((mode_lib->vba.Interlace[k] &&
+				!mode_lib->vba.ProgressiveToInterlaceUnitInOPP) ?
+				dml_floor((mode_lib->vba.VTotal[k] - mode_lib->vba.VActive[k]) / 2.0, 1.0) :
+				mode_lib->vba.VTotal[k] - mode_lib->vba.VActive[k]) - dml_max(1.0,
+				dml_ceil((double) v->WritebackDelay[mode_lib->vba.VoltageLevel][k]
+				/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]), 1));
+
+		// Clamp to max OTG vstartup register limit
+		if (v->MaxVStartupLines[k] > 1023)
+			v->MaxVStartupLines[k] = 1023;
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d MaxVStartupLines = %d\n", __func__, k, v->MaxVStartupLines[k]);
+		dml_print("DML::%s: k=%d VoltageLevel = %d\n", __func__, k, mode_lib->vba.VoltageLevel);
+		dml_print("DML::%s: k=%d WritebackDelay = %f\n", __func__,
+				k, v->WritebackDelay[mode_lib->vba.VoltageLevel][k]);
+#endif
+	}
+
+	v->MaximumMaxVStartupLines = 0;
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+		v->MaximumMaxVStartupLines = dml_max(v->MaximumMaxVStartupLines, v->MaxVStartupLines[k]);
+
+	ImmediateFlipRequirementFinal = false;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		ImmediateFlipRequirementFinal = ImmediateFlipRequirementFinal
+				|| (mode_lib->vba.ImmediateFlipRequirement[k] == dm_immediate_flip_required);
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: ImmediateFlipRequirementFinal = %d\n", __func__, ImmediateFlipRequirementFinal);
+#endif
+	// ModeProgramming will not repeat the schedule calculation using different prefetch modes,
+	// it is just calculated once with the given prefetch mode
+	dml32_CalculateMinAndMaxPrefetchMode(
+			mode_lib->vba.AllowForPStateChangeOrStutterInVBlankFinal,
+			&mode_lib->vba.MinPrefetchMode,
+			&mode_lib->vba.MaxPrefetchMode);
+
+	v->VStartupLines = __DML_VBA_MIN_VSTARTUP__;
+
+	iteration = 0;
+	MaxTotalRDBandwidth = 0;
+	NextPrefetchMode = mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb];
+
+	do {
+		double MaxTotalRDBandwidthNoUrgentBurst = 0.0;
+		bool DestinationLineTimesForPrefetchLessThan2 = false;
+		bool VRatioPrefetchMoreThanMax = false;
+		double dummy_unit_vector[DC__NUM_DPP__MAX];
+
+		MaxTotalRDBandwidth = 0;
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: Start loop: VStartup = %d\n", __func__, mode_lib->vba.VStartupLines);
+#endif
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			/* NOTE: the PrefetchMode variable is invalid in DAL as per the input received.
+			 * Hence the direction is to use PrefetchModePerState.
+			 */
+			double TWait = dml32_CalculateTWait(
+					mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb],
+					mode_lib->vba.UsesMALLForPStateChange[k],
+					mode_lib->vba.SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+					mode_lib->vba.DRRDisplay[k],
+					mode_lib->vba.DRAMClockChangeLatency,
+					mode_lib->vba.FCLKChangeLatency, v->UrgentLatency,
+					mode_lib->vba.SREnterPlusExitTime);
+
+			DmlPipe myPipe;
+
+			myPipe.Dppclk = mode_lib->vba.DPPCLK[k];
+			myPipe.Dispclk = mode_lib->vba.DISPCLK;
+			myPipe.PixelClock = mode_lib->vba.PixelClock[k];
+			myPipe.DCFClkDeepSleep = v->DCFCLKDeepSleep;
+			myPipe.DPPPerSurface = mode_lib->vba.DPPPerPlane[k];
+			myPipe.ScalerEnabled = mode_lib->vba.ScalerEnabled[k];
+			myPipe.SourceRotation = mode_lib->vba.SourceRotation[k];
+			myPipe.BlockWidth256BytesY = v->BlockWidth256BytesY[k];
+			myPipe.BlockHeight256BytesY = v->BlockHeight256BytesY[k];
+			myPipe.BlockWidth256BytesC = v->BlockWidth256BytesC[k];
+			myPipe.BlockHeight256BytesC = v->BlockHeight256BytesC[k];
+			myPipe.InterlaceEnable = mode_lib->vba.Interlace[k];
+			myPipe.NumberOfCursors = mode_lib->vba.NumberOfCursors[k];
+			myPipe.VBlank = mode_lib->vba.VTotal[k] - mode_lib->vba.VActive[k];
+			myPipe.HTotal = mode_lib->vba.HTotal[k];
+			myPipe.HActive = mode_lib->vba.HActive[k];
+			myPipe.DCCEnable = mode_lib->vba.DCCEnable[k];
+			myPipe.ODMMode = mode_lib->vba.ODMCombineEnabled[k];
+			myPipe.SourcePixelFormat = mode_lib->vba.SourcePixelFormat[k];
+			myPipe.BytePerPixelY = v->BytePerPixelY[k];
+			myPipe.BytePerPixelC = v->BytePerPixelC[k];
+			myPipe.ProgressiveToInterlaceUnitInOPP = mode_lib->vba.ProgressiveToInterlaceUnitInOPP;
+			v->ErrorResult[k] = dml32_CalculatePrefetchSchedule(v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.HostVMInefficiencyFactor,
+					&myPipe, v->DSCDelay[k],
+					mode_lib->vba.DPPCLKDelaySubtotal + mode_lib->vba.DPPCLKDelayCNVCFormater,
+					mode_lib->vba.DPPCLKDelaySCL,
+					mode_lib->vba.DPPCLKDelaySCLLBOnly,
+					mode_lib->vba.DPPCLKDelayCNVCCursor,
+					mode_lib->vba.DISPCLKDelaySubtotal,
+					(unsigned int) (v->SwathWidthY[k] / mode_lib->vba.HRatio[k]),
+					mode_lib->vba.OutputFormat[k],
+					mode_lib->vba.MaxInterDCNTileRepeaters,
+					dml_min(v->VStartupLines, v->MaxVStartupLines[k]),
+					v->MaxVStartupLines[k],
+					mode_lib->vba.GPUVMMaxPageTableLevels,
+					mode_lib->vba.GPUVMEnable,
+					mode_lib->vba.HostVMEnable,
+					mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+					mode_lib->vba.HostVMMinPageSize,
+					mode_lib->vba.DynamicMetadataEnable[k],
+					mode_lib->vba.DynamicMetadataVMEnabled,
+					mode_lib->vba.DynamicMetadataLinesBeforeActiveRequired[k],
+					mode_lib->vba.DynamicMetadataTransmittedBytes[k],
+					v->UrgentLatency,
+					v->UrgentExtraLatency,
+					mode_lib->vba.TCalc,
+					v->PDEAndMetaPTEBytesFrame[k],
+					v->MetaRowByte[k],
+					v->PixelPTEBytesPerRow[k],
+					v->PrefetchSourceLinesY[k],
+					v->SwathWidthY[k],
+					v->VInitPreFillY[k],
+					v->MaxNumSwathY[k],
+					v->PrefetchSourceLinesC[k],
+					v->SwathWidthC[k],
+					v->VInitPreFillC[k],
+					v->MaxNumSwathC[k],
+					v->swath_width_luma_ub[k],
+					v->swath_width_chroma_ub[k],
+					mode_lib->vba.SwathHeightY[k],
+					mode_lib->vba.SwathHeightC[k],
+					TWait,
+					/* Output */
+					&v->DSTXAfterScaler[k],
+					&v->DSTYAfterScaler[k],
+					&v->DestinationLinesForPrefetch[k],
+					&v->PrefetchBandwidth[k],
+					&v->DestinationLinesToRequestVMInVBlank[k],
+					&v->DestinationLinesToRequestRowInVBlank[k],
+					&v->VRatioPrefetchY[k],
+					&v->VRatioPrefetchC[k],
+					&v->RequiredPrefetchPixDataBWLuma[k],
+					&v->RequiredPrefetchPixDataBWChroma[k],
+					&v->NotEnoughTimeForDynamicMetadata[k],
+					&v->Tno_bw[k], &v->prefetch_vmrow_bw[k],
+					&v->Tdmdl_vm[k],
+					&v->Tdmdl[k],
+					&v->TSetup[k],
+					&v->VUpdateOffsetPix[k],
+					&v->VUpdateWidthPix[k],
+					&v->VReadyOffsetPix[k]);
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d Prefetch calculation errResult=%0d\n",
+					__func__, k, mode_lib->vba.ErrorResult[k]);
+#endif
+			v->VStartup[k] = dml_min(v->VStartupLines, v->MaxVStartupLines[k]);
+		}
+
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			dml32_CalculateUrgentBurstFactor(mode_lib->vba.UsesMALLForPStateChange[k],
+					v->swath_width_luma_ub[k],
+					v->swath_width_chroma_ub[k],
+					mode_lib->vba.SwathHeightY[k],
+					mode_lib->vba.SwathHeightC[k],
+					mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k],
+					v->UrgentLatency,
+					mode_lib->vba.CursorBufferSize,
+					mode_lib->vba.CursorWidth[k][0],
+					mode_lib->vba.CursorBPP[k][0],
+					v->VRatioPrefetchY[k],
+					v->VRatioPrefetchC[k],
+					v->BytePerPixelDETY[k],
+					v->BytePerPixelDETC[k],
+					mode_lib->vba.DETBufferSizeY[k],
+					mode_lib->vba.DETBufferSizeC[k],
+					/* Output */
+					&v->UrgBurstFactorCursorPre[k],
+					&v->UrgBurstFactorLumaPre[k],
+					&v->UrgBurstFactorChromaPre[k],
+					&v->NoUrgentLatencyHidingPre[k]);
+
+			v->cursor_bw_pre[k] = mode_lib->vba.NumberOfCursors[k] * mode_lib->vba.CursorWidth[k][0] * mode_lib->vba.CursorBPP[k][0] /
+					8.0 / (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]) * v->VRatioPrefetchY[k];
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d DPPPerSurface=%d\n", __func__, k, mode_lib->vba.DPPPerPlane[k]);
+			dml_print("DML::%s: k=%0d UrgBurstFactorLuma=%f\n", __func__, k, v->UrgBurstFactorLuma[k]);
+			dml_print("DML::%s: k=%0d UrgBurstFactorChroma=%f\n", __func__, k, v->UrgBurstFactorChroma[k]);
+			dml_print("DML::%s: k=%0d UrgBurstFactorLumaPre=%f\n", __func__, k,
+					v->UrgBurstFactorLumaPre[k]);
+			dml_print("DML::%s: k=%0d UrgBurstFactorChromaPre=%f\n", __func__, k,
+					v->UrgBurstFactorChromaPre[k]);
+
+			dml_print("DML::%s: k=%0d VRatioPrefetchY=%f\n", __func__, k, v->VRatioPrefetchY[k]);
+			dml_print("DML::%s: k=%0d VRatioY=%f\n", __func__, k, mode_lib->vba.VRatio[k]);
+
+			dml_print("DML::%s: k=%0d prefetch_vmrow_bw=%f\n", __func__, k, v->prefetch_vmrow_bw[k]);
+			dml_print("DML::%s: k=%0d ReadBandwidthSurfaceLuma=%f\n", __func__, k,
+					v->ReadBandwidthSurfaceLuma[k]);
+			dml_print("DML::%s: k=%0d ReadBandwidthSurfaceChroma=%f\n", __func__, k,
+					v->ReadBandwidthSurfaceChroma[k]);
+			dml_print("DML::%s: k=%0d cursor_bw=%f\n", __func__, k, v->cursor_bw[k]);
+			dml_print("DML::%s: k=%0d meta_row_bw=%f\n", __func__, k, v->meta_row_bw[k]);
+			dml_print("DML::%s: k=%0d dpte_row_bw=%f\n", __func__, k, v->dpte_row_bw[k]);
+			dml_print("DML::%s: k=%0d RequiredPrefetchPixDataBWLuma=%f\n", __func__, k,
+					v->RequiredPrefetchPixDataBWLuma[k]);
+			dml_print("DML::%s: k=%0d RequiredPrefetchPixDataBWChroma=%f\n", __func__, k,
+					v->RequiredPrefetchPixDataBWChroma[k]);
+			dml_print("DML::%s: k=%0d cursor_bw_pre=%f\n", __func__, k, v->cursor_bw_pre[k]);
+			dml_print("DML::%s: k=%0d MaxTotalRDBandwidthNoUrgentBurst=%f\n", __func__, k,
+					MaxTotalRDBandwidthNoUrgentBurst);
+#endif
+			if (v->DestinationLinesForPrefetch[k] < 2)
+				DestinationLineTimesForPrefetchLessThan2 = true;
+
+			if (v->VRatioPrefetchY[k] > __DML_MAX_VRATIO_PRE__
+					|| v->VRatioPrefetchC[k] > __DML_MAX_VRATIO_PRE__)
+				VRatioPrefetchMoreThanMax = true;
+
+			//bool DestinationLinesToRequestVMInVBlankEqualOrMoreThan32 = false;
+			//bool DestinationLinesToRequestRowInVBlankEqualOrMoreThan16 = false;
+			//if (v->DestinationLinesToRequestVMInVBlank[k] >= 32) {
+			//    DestinationLinesToRequestVMInVBlankEqualOrMoreThan32 = true;
+			//}
+
+			//if (v->DestinationLinesToRequestRowInVBlank[k] >= 16) {
+			//    DestinationLinesToRequestRowInVBlankEqualOrMoreThan16 = true;
+			//}
+		}
+
+		v->FractionOfUrgentBandwidth = MaxTotalRDBandwidthNoUrgentBurst / mode_lib->vba.ReturnBW;
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: MaxTotalRDBandwidthNoUrgentBurst=%f\n",
+				__func__, MaxTotalRDBandwidthNoUrgentBurst);
+		dml_print("DML::%s: ReturnBW=%f\n", __func__, mode_lib->vba.ReturnBW);
+		dml_print("DML::%s: FractionOfUrgentBandwidth=%f\n",
+				__func__, mode_lib->vba.FractionOfUrgentBandwidth);
+#endif
+
+		{
+			double dummy_single[1];
+
+			dml32_CalculatePrefetchBandwithSupport(
+					mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.ReturnBW,
+					v->NoUrgentLatencyHidingPre,
+					v->ReadBandwidthSurfaceLuma,
+					v->ReadBandwidthSurfaceChroma,
+					v->RequiredPrefetchPixDataBWLuma,
+					v->RequiredPrefetchPixDataBWChroma,
+					v->cursor_bw,
+					v->meta_row_bw,
+					v->dpte_row_bw,
+					v->cursor_bw_pre,
+					v->prefetch_vmrow_bw,
+					mode_lib->vba.DPPPerPlane,
+					v->UrgBurstFactorLuma,
+					v->UrgBurstFactorChroma,
+					v->UrgBurstFactorCursor,
+					v->UrgBurstFactorLumaPre,
+					v->UrgBurstFactorChromaPre,
+					v->UrgBurstFactorCursorPre,
+
+					/* output */
+					&MaxTotalRDBandwidth,
+					&dummy_single[0],
+					&v->PrefetchModeSupported);
+		}
+
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+			dummy_unit_vector[k] = 1.0;
+
+		{
+			double  dummy_single[1];
+			bool dummy_boolean[1];
+			dml32_CalculatePrefetchBandwithSupport(mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.ReturnBW,
+					v->NoUrgentLatencyHidingPre,
+					v->ReadBandwidthSurfaceLuma,
+					v->ReadBandwidthSurfaceChroma,
+					v->RequiredPrefetchPixDataBWLuma,
+					v->RequiredPrefetchPixDataBWChroma,
+					v->cursor_bw,
+					v->meta_row_bw,
+					v->dpte_row_bw,
+					v->cursor_bw_pre,
+					v->prefetch_vmrow_bw,
+					mode_lib->vba.DPPPerPlane,
+					dummy_unit_vector,
+					dummy_unit_vector,
+					dummy_unit_vector,
+					dummy_unit_vector,
+					dummy_unit_vector,
+					dummy_unit_vector,
+
+					/* output */
+					&dummy_single[0],
+					&v->FractionOfUrgentBandwidth,
+					&dummy_boolean[0]);
+		}
+
+		if (VRatioPrefetchMoreThanMax != false || DestinationLineTimesForPrefetchLessThan2 != false) {
+			v->PrefetchModeSupported = false;
+		}
+
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (v->ErrorResult[k] == true || v->NotEnoughTimeForDynamicMetadata[k]) {
+				v->PrefetchModeSupported = false;
+			}
+		}
+
+		if (v->PrefetchModeSupported == true && mode_lib->vba.ImmediateFlipSupport == true) {
+			mode_lib->vba.BandwidthAvailableForImmediateFlip = dml32_CalculateBandwidthAvailableForImmediateFlip(
+					mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.ReturnBW,
+					v->ReadBandwidthSurfaceLuma,
+					v->ReadBandwidthSurfaceChroma,
+					v->RequiredPrefetchPixDataBWLuma,
+					v->RequiredPrefetchPixDataBWChroma,
+					v->cursor_bw,
+					v->cursor_bw_pre,
+					mode_lib->vba.DPPPerPlane,
+					v->UrgBurstFactorLuma,
+					v->UrgBurstFactorChroma,
+					v->UrgBurstFactorCursor,
+					v->UrgBurstFactorLumaPre,
+					v->UrgBurstFactorChromaPre,
+					v->UrgBurstFactorCursorPre);
+
+			mode_lib->vba.TotImmediateFlipBytes = 0;
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.ImmediateFlipRequirement[k] != dm_immediate_flip_not_required) {
+					mode_lib->vba.TotImmediateFlipBytes = mode_lib->vba.TotImmediateFlipBytes
+							+ mode_lib->vba.DPPPerPlane[k]
+									* (v->PDEAndMetaPTEBytesFrame[k]
+											+ v->MetaRowByte[k]);
+					if (v->use_one_row_for_frame_flip[k]) {
+						mode_lib->vba.TotImmediateFlipBytes =
+								mode_lib->vba.TotImmediateFlipBytes
+										+ 2 * v->PixelPTEBytesPerRow[k];
+					} else {
+						mode_lib->vba.TotImmediateFlipBytes =
+								mode_lib->vba.TotImmediateFlipBytes
+										+ v->PixelPTEBytesPerRow[k];
+					}
+				}
+			}
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				dml32_CalculateFlipSchedule(v->dummy_vars.DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation.HostVMInefficiencyFactor,
+						v->UrgentExtraLatency,
+						v->UrgentLatency,
+						mode_lib->vba.GPUVMMaxPageTableLevels,
+						mode_lib->vba.HostVMEnable,
+						mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+						mode_lib->vba.GPUVMEnable,
+						mode_lib->vba.HostVMMinPageSize,
+						v->PDEAndMetaPTEBytesFrame[k],
+						v->MetaRowByte[k],
+						v->PixelPTEBytesPerRow[k],
+						mode_lib->vba.BandwidthAvailableForImmediateFlip,
+						mode_lib->vba.TotImmediateFlipBytes,
+						mode_lib->vba.SourcePixelFormat[k],
+						mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k],
+						mode_lib->vba.VRatio[k],
+						mode_lib->vba.VRatioChroma[k],
+						v->Tno_bw[k],
+						mode_lib->vba.DCCEnable[k],
+						v->dpte_row_height[k],
+						v->meta_row_height[k],
+						v->dpte_row_height_chroma[k],
+						v->meta_row_height_chroma[k],
+						v->Use_One_Row_For_Frame_Flip[k],
+
+						/* Output */
+						&v->DestinationLinesToRequestVMInImmediateFlip[k],
+						&v->DestinationLinesToRequestRowInImmediateFlip[k],
+						&v->final_flip_bw[k],
+						&v->ImmediateFlipSupportedForPipe[k]);
+			}
+
+			{
+				double  dummy_single[2];
+				bool dummy_boolean[1];
+				dml32_CalculateImmediateFlipBandwithSupport(mode_lib->vba.NumberOfActiveSurfaces,
+						mode_lib->vba.ReturnBW,
+						mode_lib->vba.ImmediateFlipRequirement,
+						v->final_flip_bw,
+						v->ReadBandwidthSurfaceLuma,
+						v->ReadBandwidthSurfaceChroma,
+						v->RequiredPrefetchPixDataBWLuma,
+						v->RequiredPrefetchPixDataBWChroma,
+						v->cursor_bw,
+						v->meta_row_bw,
+						v->dpte_row_bw,
+						v->cursor_bw_pre,
+						v->prefetch_vmrow_bw,
+						mode_lib->vba.DPPPerPlane,
+						v->UrgBurstFactorLuma,
+						v->UrgBurstFactorChroma,
+						v->UrgBurstFactorCursor,
+						v->UrgBurstFactorLumaPre,
+						v->UrgBurstFactorChromaPre,
+						v->UrgBurstFactorCursorPre,
+
+						/* output */
+						&v->total_dcn_read_bw_with_flip,    // Single  *TotalBandwidth
+						&dummy_single[0],                        // Single  *FractionOfUrgentBandwidth
+						&v->ImmediateFlipSupported);        // Boolean *ImmediateFlipBandwidthSupport
+
+				dml32_CalculateImmediateFlipBandwithSupport(mode_lib->vba.NumberOfActiveSurfaces,
+						mode_lib->vba.ReturnBW,
+						mode_lib->vba.ImmediateFlipRequirement,
+						v->final_flip_bw,
+						v->ReadBandwidthSurfaceLuma,
+						v->ReadBandwidthSurfaceChroma,
+						v->RequiredPrefetchPixDataBWLuma,
+						v->RequiredPrefetchPixDataBWChroma,
+						v->cursor_bw,
+						v->meta_row_bw,
+						v->dpte_row_bw,
+						v->cursor_bw_pre,
+						v->prefetch_vmrow_bw,
+						mode_lib->vba.DPPPerPlane,
+						dummy_unit_vector,
+						dummy_unit_vector,
+						dummy_unit_vector,
+						dummy_unit_vector,
+						dummy_unit_vector,
+						dummy_unit_vector,
+
+						/* output */
+						&dummy_single[1],                                // Single  *TotalBandwidth
+						&v->FractionOfUrgentBandwidthImmediateFlip, // Single  *FractionOfUrgentBandwidth
+						&dummy_boolean[0]);                              // Boolean *ImmediateFlipBandwidthSupport
+			}
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.ImmediateFlipRequirement[k] != dm_immediate_flip_not_required && v->ImmediateFlipSupportedForPipe[k] == false) {
+					v->ImmediateFlipSupported = false;
+#ifdef __DML_VBA_DEBUG__
+					dml_print("DML::%s: Pipe %0d not supporting iflip\n", __func__, k);
+#endif
+				}
+			}
+		} else {
+			v->ImmediateFlipSupported = false;
+		}
+
+		/* Consider flip support okay if the flip bw is ok or (when the user doesn't require an iflip and there is no host vm) */
+		v->PrefetchAndImmediateFlipSupported = (v->PrefetchModeSupported == true &&
+				((!mode_lib->vba.ImmediateFlipSupport && !mode_lib->vba.HostVMEnable && !ImmediateFlipRequirementFinal) ||
+						v->ImmediateFlipSupported)) ? true : false;
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: PrefetchModeSupported = %d\n", __func__, v->PrefetchModeSupported);
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+			dml_print("DML::%s: ImmediateFlipRequirement[%d] = %d\n", __func__, k, mode_lib->vba.ImmediateFlipRequirement[k] == dm_immediate_flip_required);
+		dml_print("DML::%s: ImmediateFlipSupported = %d\n", __func__, v->ImmediateFlipSupported);
+		dml_print("DML::%s: ImmediateFlipSupport = %d\n", __func__, mode_lib->vba.ImmediateFlipSupport);
+		dml_print("DML::%s: HostVMEnable = %d\n", __func__, mode_lib->vba.HostVMEnable);
+		dml_print("DML::%s: PrefetchAndImmediateFlipSupported = %d\n", __func__, v->PrefetchAndImmediateFlipSupported);
+		dml_print("DML::%s: Done loop: Vstartup=%d, Max Vstartup=%d\n", __func__, v->VStartupLines, v->MaximumMaxVStartupLines);
+#endif
+
+		v->VStartupLines = v->VStartupLines + 1;
+
+		if (v->VStartupLines > v->MaximumMaxVStartupLines) {
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: Vstartup exceeds max vstartup, exiting loop\n", __func__);
+#endif
+			break; // VBA_DELTA: Implementation divergence! Gabe is *still* iterating across prefetch modes which we don't care to do
+		}
+		iteration++;
+		if (iteration > 2500) {
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: too many errors, exit now\n", __func__);
+			assert(0);
+#endif
+		}
+	} while (!(v->PrefetchAndImmediateFlipSupported || NextPrefetchMode > mode_lib->vba.MaxPrefetchMode));
+
+
+	if (v->VStartupLines <= v->MaximumMaxVStartupLines) {
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: Good, Prefetch and flip scheduling found solution at VStartupLines=%d\n", __func__, v->VStartupLines - 1);
+#endif
+	}
+
+
+	//Watermarks and NB P-State/DRAM Clock Change Support
+	{
+		SOCParametersList mmSOCParameters;
+		enum clock_change_support dummy_dramchange_support;
+		enum dm_fclock_change_support dummy_fclkchange_support;
+		bool dummy_USRRetrainingSupport;
+
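+		/* Gather the SoC latency parameters consumed by the watermark and
+		 * p-state change support calculation below; the dummy_* locals above
+		 * only receive outputs that are not used at this point.
+		 */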
+		mmSOCParameters.UrgentLatency = v->UrgentLatency;
+		mmSOCParameters.ExtraLatency = v->UrgentExtraLatency;
+		mmSOCParameters.WritebackLatency = mode_lib->vba.WritebackLatency;
+		mmSOCParameters.DRAMClockChangeLatency = mode_lib->vba.DRAMClockChangeLatency;
+		mmSOCParameters.FCLKChangeLatency = mode_lib->vba.FCLKChangeLatency;
+		mmSOCParameters.SRExitTime = mode_lib->vba.SRExitTime;
+		mmSOCParameters.SREnterPlusExitTime = mode_lib->vba.SREnterPlusExitTime;
+		mmSOCParameters.SRExitZ8Time = mode_lib->vba.SRExitZ8Time;
+		mmSOCParameters.SREnterPlusExitZ8Time = mode_lib->vba.SREnterPlusExitZ8Time;
+		mmSOCParameters.USRRetrainingLatency = mode_lib->vba.USRRetrainingLatency;
+		mmSOCParameters.SMNLatency = mode_lib->vba.SMNLatency;
+
+		dml32_CalculateWatermarksMALLUseAndDRAMSpeedChangeSupport(
+			mode_lib->vba.USRRetrainingRequiredFinal,
+			mode_lib->vba.UsesMALLForPStateChange,
+			mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb],
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.MaxLineBufferLines,
+			mode_lib->vba.LineBufferSizeFinal,
+			mode_lib->vba.WritebackInterfaceBufferSize,
+			mode_lib->vba.DCFCLK,
+			mode_lib->vba.ReturnBW,
+			mode_lib->vba.SynchronizeTimingsFinal,
+			mode_lib->vba.SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+			mode_lib->vba.DRRDisplay,
+			v->dpte_group_bytes,
+			v->meta_row_height,
+			v->meta_row_height_chroma,
+			mmSOCParameters,
+			mode_lib->vba.WritebackChunkSize,
+			mode_lib->vba.SOCCLK,
+			v->DCFCLKDeepSleep,
+			mode_lib->vba.DETBufferSizeY,
+			mode_lib->vba.DETBufferSizeC,
+			mode_lib->vba.SwathHeightY,
+			mode_lib->vba.SwathHeightC,
+			mode_lib->vba.LBBitPerPixel,
+			v->SwathWidthY,
+			v->SwathWidthC,
+			mode_lib->vba.HRatio,
+			mode_lib->vba.HRatioChroma,
+			mode_lib->vba.vtaps,
+			mode_lib->vba.VTAPsChroma,
+			mode_lib->vba.VRatio,
+			mode_lib->vba.VRatioChroma,
+			mode_lib->vba.HTotal,
+			mode_lib->vba.VTotal,
+			mode_lib->vba.VActive,
+			mode_lib->vba.PixelClock,
+			mode_lib->vba.BlendingAndTiming,
+			mode_lib->vba.DPPPerPlane,
+			v->BytePerPixelDETY,
+			v->BytePerPixelDETC,
+			v->DSTXAfterScaler,
+			v->DSTYAfterScaler,
+			mode_lib->vba.WritebackEnable,
+			mode_lib->vba.WritebackPixelFormat,
+			mode_lib->vba.WritebackDestinationWidth,
+			mode_lib->vba.WritebackDestinationHeight,
+			mode_lib->vba.WritebackSourceHeight,
+			v->UnboundedRequestEnabled,
+			v->CompressedBufferSizeInkByte,
+
+			/* Output */
+			&v->Watermark,
+			&dummy_dramchange_support,
+			v->MaxActiveDRAMClockChangeLatencySupported,
+			v->SubViewportLinesNeededInMALL,
+			&dummy_fclkchange_support,
+			&v->MinActiveFCLKChangeLatencySupported,
+			&dummy_USRRetrainingSupport,
+			mode_lib->vba.ActiveDRAMClockChangeLatencyMargin);
+
+		/* DCN32 has a new struct Watermarks (typedef) which is used to store
+		 * the calculated WM values. Copy the values from the struct into the
+		 * vba variables so that the DCN32 getters return the correct values.
+		 */
+		v->UrgentWatermark = v->Watermark.UrgentWatermark;
+		v->WritebackUrgentWatermark = v->Watermark.WritebackUrgentWatermark;
+		v->DRAMClockChangeWatermark = v->Watermark.DRAMClockChangeWatermark;
+		v->WritebackDRAMClockChangeWatermark = v->Watermark.WritebackDRAMClockChangeWatermark;
+		v->StutterExitWatermark = v->Watermark.StutterExitWatermark;
+		v->StutterEnterPlusExitWatermark = v->Watermark.StutterEnterPlusExitWatermark;
+		v->Z8StutterExitWatermark = v->Watermark.Z8StutterExitWatermark;
+		v->Z8StutterEnterPlusExitWatermark = v->Watermark.Z8StutterEnterPlusExitWatermark;
+
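+		/* For active writebacks, the allowed DRAM/FCLK change end positions
+		 * are the VStartup time (in us) minus the corresponding writeback
+		 * watermark, clamped at zero.
+		 */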
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.WritebackEnable[k] == true) {
+				v->WritebackAllowDRAMClockChangeEndPosition[k] = dml_max(0,
+						v->VStartup[k] * mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]
+								- v->Watermark.WritebackDRAMClockChangeWatermark);
+				v->WritebackAllowFCLKChangeEndPosition[k] = dml_max(0,
+						v->VStartup[k] * mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]
+								- v->Watermark.WritebackFCLKChangeWatermark);
+			} else {
+				v->WritebackAllowDRAMClockChangeEndPosition[k] = 0;
+				v->WritebackAllowFCLKChangeEndPosition[k] = 0;
+			}
+		}
+	}
+
+	//Display Pipeline Delivery Time in Prefetch, Groups
+	dml32_CalculatePixelDeliveryTimes(
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.VRatio,
+			mode_lib->vba.VRatioChroma,
+			v->VRatioPrefetchY,
+			v->VRatioPrefetchC,
+			v->swath_width_luma_ub,
+			v->swath_width_chroma_ub,
+			mode_lib->vba.DPPPerPlane,
+			mode_lib->vba.HRatio,
+			mode_lib->vba.HRatioChroma,
+			mode_lib->vba.PixelClock,
+			v->PSCL_THROUGHPUT_LUMA,
+			v->PSCL_THROUGHPUT_CHROMA,
+			mode_lib->vba.DPPCLK,
+			v->BytePerPixelC,
+			mode_lib->vba.SourceRotation,
+			mode_lib->vba.NumberOfCursors,
+			mode_lib->vba.CursorWidth,
+			mode_lib->vba.CursorBPP,
+			v->BlockWidth256BytesY,
+			v->BlockHeight256BytesY,
+			v->BlockWidth256BytesC,
+			v->BlockHeight256BytesC,
+
+			/* Output */
+			v->DisplayPipeLineDeliveryTimeLuma,
+			v->DisplayPipeLineDeliveryTimeChroma,
+			v->DisplayPipeLineDeliveryTimeLumaPrefetch,
+			v->DisplayPipeLineDeliveryTimeChromaPrefetch,
+			v->DisplayPipeRequestDeliveryTimeLuma,
+			v->DisplayPipeRequestDeliveryTimeChroma,
+			v->DisplayPipeRequestDeliveryTimeLumaPrefetch,
+			v->DisplayPipeRequestDeliveryTimeChromaPrefetch,
+			v->CursorRequestDeliveryTime,
+			v->CursorRequestDeliveryTimePrefetch);
+
+	dml32_CalculateMetaAndPTETimes(v->Use_One_Row_For_Frame,
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.GPUVMEnable,
+			mode_lib->vba.MetaChunkSize,
+			mode_lib->vba.MinMetaChunkSizeBytes,
+			mode_lib->vba.HTotal,
+			mode_lib->vba.VRatio,
+			mode_lib->vba.VRatioChroma,
+			v->DestinationLinesToRequestRowInVBlank,
+			v->DestinationLinesToRequestRowInImmediateFlip,
+			mode_lib->vba.DCCEnable,
+			mode_lib->vba.PixelClock,
+			v->BytePerPixelY,
+			v->BytePerPixelC,
+			mode_lib->vba.SourceRotation,
+			v->dpte_row_height,
+			v->dpte_row_height_chroma,
+			v->meta_row_width,
+			v->meta_row_width_chroma,
+			v->meta_row_height,
+			v->meta_row_height_chroma,
+			v->meta_req_width,
+			v->meta_req_width_chroma,
+			v->meta_req_height,
+			v->meta_req_height_chroma,
+			v->dpte_group_bytes,
+			v->PTERequestSizeY,
+			v->PTERequestSizeC,
+			v->PixelPTEReqWidthY,
+			v->PixelPTEReqHeightY,
+			v->PixelPTEReqWidthC,
+			v->PixelPTEReqHeightC,
+			v->dpte_row_width_luma_ub,
+			v->dpte_row_width_chroma_ub,
+
+			/* Output */
+			v->DST_Y_PER_PTE_ROW_NOM_L,
+			v->DST_Y_PER_PTE_ROW_NOM_C,
+			v->DST_Y_PER_META_ROW_NOM_L,
+			v->DST_Y_PER_META_ROW_NOM_C,
+			v->TimePerMetaChunkNominal,
+			v->TimePerChromaMetaChunkNominal,
+			v->TimePerMetaChunkVBlank,
+			v->TimePerChromaMetaChunkVBlank,
+			v->TimePerMetaChunkFlip,
+			v->TimePerChromaMetaChunkFlip,
+			v->time_per_pte_group_nom_luma,
+			v->time_per_pte_group_vblank_luma,
+			v->time_per_pte_group_flip_luma,
+			v->time_per_pte_group_nom_chroma,
+			v->time_per_pte_group_vblank_chroma,
+			v->time_per_pte_group_flip_chroma);
+
+	dml32_CalculateVMGroupAndRequestTimes(
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.GPUVMEnable,
+			mode_lib->vba.GPUVMMaxPageTableLevels,
+			mode_lib->vba.HTotal,
+			v->BytePerPixelC,
+			v->DestinationLinesToRequestVMInVBlank,
+			v->DestinationLinesToRequestVMInImmediateFlip,
+			mode_lib->vba.DCCEnable,
+			mode_lib->vba.PixelClock,
+			v->dpte_row_width_luma_ub,
+			v->dpte_row_width_chroma_ub,
+			v->vm_group_bytes,
+			v->dpde0_bytes_per_frame_ub_l,
+			v->dpde0_bytes_per_frame_ub_c,
+			v->meta_pte_bytes_per_frame_ub_l,
+			v->meta_pte_bytes_per_frame_ub_c,
+
+			/* Output */
+			v->TimePerVMGroupVBlank,
+			v->TimePerVMGroupFlip,
+			v->TimePerVMRequestVBlank,
+			v->TimePerVMRequestFlip);
+
+	// Min TTUVBlank
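+	// The prefetch mode determines which watermarks bound MinTTUVBlank: mode 0
+	// covers DRAM change, FCLK change, stutter and urgent; mode 1 drops DRAM
+	// change; mode 2 also drops FCLK change; otherwise only urgent remains.
+	// TCalc is added on top unless dynamic metadata is enabled for the plane.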
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] == 0) {
+			v->MinTTUVBlank[k] = dml_max4(v->Watermark.DRAMClockChangeWatermark,
+					v->Watermark.FCLKChangeWatermark, v->Watermark.StutterEnterPlusExitWatermark,
+					v->Watermark.UrgentWatermark);
+		} else if (mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb]
+				== 1) {
+			v->MinTTUVBlank[k] = dml_max3(v->Watermark.FCLKChangeWatermark,
+					v->Watermark.StutterEnterPlusExitWatermark, v->Watermark.UrgentWatermark);
+		} else if (mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb]
+				== 2) {
+			v->MinTTUVBlank[k] = dml_max(v->Watermark.StutterEnterPlusExitWatermark,
+					v->Watermark.UrgentWatermark);
+		} else {
+			v->MinTTUVBlank[k] = v->Watermark.UrgentWatermark;
+		}
+		if (!mode_lib->vba.DynamicMetadataEnable[k])
+			v->MinTTUVBlank[k] = mode_lib->vba.TCalc + v->MinTTUVBlank[k];
+	}
+
+	// DCC Configuration
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: Calculate DCC configuration for surface k=%d\n", __func__, k);
+#endif
+		dml32_CalculateDCCConfiguration(
+				mode_lib->vba.DCCEnable[k],
+				mode_lib->vba.DCCProgrammingAssumesScanDirectionUnknownFinal,
+				mode_lib->vba.SourcePixelFormat[k], mode_lib->vba.SurfaceWidthY[k],
+				mode_lib->vba.SurfaceWidthC[k],
+				mode_lib->vba.SurfaceHeightY[k],
+				mode_lib->vba.SurfaceHeightC[k],
+				mode_lib->vba.nomDETInKByte,
+				v->BlockHeight256BytesY[k],
+				v->BlockHeight256BytesC[k],
+				mode_lib->vba.SurfaceTiling[k],
+				v->BytePerPixelY[k],
+				v->BytePerPixelC[k],
+				v->BytePerPixelDETY[k],
+				v->BytePerPixelDETC[k],
+				mode_lib->vba.SourceScan[k],
+				/* Output */
+				&v->DCCYMaxUncompressedBlock[k],
+				&v->DCCCMaxUncompressedBlock[k],
+				&v->DCCYMaxCompressedBlock[k],
+				&v->DCCCMaxCompressedBlock[k],
+				&v->DCCYIndependentBlock[k],
+				&v->DCCCIndependentBlock[k]);
+	}
+
+	// VStartup Adjustment
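+	// Convert the unused VStartup headroom (MaxVStartupLines - VStartup) into a
+	// time margin and add it to MinTTUVBlank, Tdmdl and (when dynamic metadata
+	// uses the VM path) Tdmdl_vm before deriving MIN_DST_Y_NEXT_START.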
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		bool isInterlaceTiming;
+		double Tvstartup_margin = (v->MaxVStartupLines[k] - v->VStartup[k]) * mode_lib->vba.HTotal[k]
+				/ mode_lib->vba.PixelClock[k];
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, MinTTUVBlank = %f (before vstartup margin)\n", __func__, k,
+				v->MinTTUVBlank[k]);
+#endif
+
+		v->MinTTUVBlank[k] = v->MinTTUVBlank[k] + Tvstartup_margin;
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, Tvstartup_margin = %f\n", __func__, k, Tvstartup_margin);
+		dml_print("DML::%s: k=%d, MaxVStartupLines = %d\n", __func__, k, v->MaxVStartupLines[k]);
+		dml_print("DML::%s: k=%d, VStartup = %d\n", __func__, k, v->VStartup[k]);
+		dml_print("DML::%s: k=%d, MinTTUVBlank = %f\n", __func__, k, v->MinTTUVBlank[k]);
+#endif
+
+		v->Tdmdl[k] = v->Tdmdl[k] + Tvstartup_margin;
+		if (mode_lib->vba.DynamicMetadataEnable[k] && mode_lib->vba.DynamicMetadataVMEnabled)
+			v->Tdmdl_vm[k] = v->Tdmdl_vm[k] + Tvstartup_margin;
+
+		isInterlaceTiming = (mode_lib->vba.Interlace[k] &&
+				!mode_lib->vba.ProgressiveToInterlaceUnitInOPP);
+
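+		/* MIN_DST_Y_NEXT_START: the vertical total minus the front porch
+		 * (halved first for interlaced timings) plus a writeback-delay
+		 * allowance of at least one line and the TSetup time rounded to
+		 * quarter-line granularity.
+		 */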
+		v->MIN_DST_Y_NEXT_START[k] = ((isInterlaceTiming ? dml_floor((mode_lib->vba.VTotal[k] -
+						mode_lib->vba.VFrontPorch[k]) / 2.0, 1.0) :
+						mode_lib->vba.VTotal[k]) - mode_lib->vba.VFrontPorch[k])
+						+ dml_max(1.0,
+						dml_ceil(v->WritebackDelay[mode_lib->vba.VoltageLevel][k]
+						/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]), 1.0))
+						+ dml_floor(4.0 * v->TSetup[k] / (mode_lib->vba.HTotal[k]
+						/ mode_lib->vba.PixelClock[k]), 1.0) / 4.0;
+
+		v->VStartup[k] = (isInterlaceTiming ? (2 * v->MaxVStartupLines[k]) : v->MaxVStartupLines[k]);
+
+		if (((v->VUpdateOffsetPix[k] + v->VUpdateWidthPix[k] + v->VReadyOffsetPix[k])
+			/ mode_lib->vba.HTotal[k]) <= (isInterlaceTiming ? dml_floor((mode_lib->vba.VTotal[k]
+			- mode_lib->vba.VActive[k] - mode_lib->vba.VFrontPorch[k] - v->VStartup[k]) / 2.0, 1.0) :
+			(int) (mode_lib->vba.VTotal[k] - mode_lib->vba.VActive[k]
+		       - mode_lib->vba.VFrontPorch[k] - v->VStartup[k]))) {
+			v->VREADY_AT_OR_AFTER_VSYNC[k] = true;
+		} else {
+			v->VREADY_AT_OR_AFTER_VSYNC[k] = false;
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, VStartup = %d (max)\n", __func__, k, v->VStartup[k]);
+		dml_print("DML::%s: k=%d, VUpdateOffsetPix = %d\n", __func__, k, v->VUpdateOffsetPix[k]);
+		dml_print("DML::%s: k=%d, VUpdateWidthPix = %d\n", __func__, k, v->VUpdateWidthPix[k]);
+		dml_print("DML::%s: k=%d, VReadyOffsetPix = %d\n", __func__, k, v->VReadyOffsetPix[k]);
+		dml_print("DML::%s: k=%d, HTotal = %d\n", __func__, k, mode_lib->vba.HTotal[k]);
+		dml_print("DML::%s: k=%d, VTotal = %d\n", __func__, k, mode_lib->vba.VTotal[k]);
+		dml_print("DML::%s: k=%d, VActive = %d\n", __func__, k, mode_lib->vba.VActive[k]);
+		dml_print("DML::%s: k=%d, VFrontPorch = %d\n", __func__, k, mode_lib->vba.VFrontPorch[k]);
+		dml_print("DML::%s: k=%d, VStartup = %d\n", __func__, k, v->VStartup[k]);
+		dml_print("DML::%s: k=%d, TSetup = %f\n", __func__, k, v->TSetup[k]);
+		dml_print("DML::%s: k=%d, MIN_DST_Y_NEXT_START = %f\n", __func__, k, v->MIN_DST_Y_NEXT_START[k]);
+		dml_print("DML::%s: k=%d, VREADY_AT_OR_AFTER_VSYNC = %d\n", __func__, k,
+				v->VREADY_AT_OR_AFTER_VSYNC[k]);
+#endif
+	}
+
+	{
+		//Maximum Bandwidth Used
+		double TotalWRBandwidth = 0;
+		double WRBandwidth = 0;
+
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.WritebackEnable[k] == true
+					&& mode_lib->vba.WritebackPixelFormat[k] == dm_444_32) {
+				WRBandwidth = mode_lib->vba.WritebackDestinationWidth[k]
+						* mode_lib->vba.WritebackDestinationHeight[k]
+						/ (mode_lib->vba.HTotal[k] * mode_lib->vba.WritebackSourceHeight[k]
+								/ mode_lib->vba.PixelClock[k]) * 4;
+			} else if (mode_lib->vba.WritebackEnable[k] == true) {
+				WRBandwidth = mode_lib->vba.WritebackDestinationWidth[k]
+						* mode_lib->vba.WritebackDestinationHeight[k]
+						/ (mode_lib->vba.HTotal[k] * mode_lib->vba.WritebackSourceHeight[k]
+								/ mode_lib->vba.PixelClock[k]) * 8;
+			}
+			TotalWRBandwidth = TotalWRBandwidth + WRBandwidth;
+		}
+
+		v->TotalDataReadBandwidth = 0;
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			v->TotalDataReadBandwidth = v->TotalDataReadBandwidth + v->ReadBandwidthSurfaceLuma[k]
+					+ v->ReadBandwidthSurfaceChroma[k];
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%d, TotalDataReadBandwidth = %f\n",
+					__func__, k, v->TotalDataReadBandwidth);
+			dml_print("DML::%s: k=%d, ReadBandwidthSurfaceLuma = %f\n",
+					__func__, k, v->ReadBandwidthSurfaceLuma[k]);
+			dml_print("DML::%s: k=%d, ReadBandwidthSurfaceChroma = %f\n",
+					__func__, k, v->ReadBandwidthSurfaceChroma[k]);
+#endif
+		}
+	}
+
+	// Stutter Efficiency
+	dml32_CalculateStutterEfficiency(v->CompressedBufferSizeInkByte,
+			mode_lib->vba.UsesMALLForPStateChange,
+			v->UnboundedRequestEnabled,
+			mode_lib->vba.MetaFIFOSizeInKEntries,
+			mode_lib->vba.ZeroSizeBufferEntries,
+			mode_lib->vba.PixelChunkSizeInKByte,
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.ROBBufferSizeInKByte,
+			v->TotalDataReadBandwidth,
+			mode_lib->vba.DCFCLK,
+			mode_lib->vba.ReturnBW,
+			mode_lib->vba.CompbufReservedSpace64B,
+			mode_lib->vba.CompbufReservedSpaceZs,
+			mode_lib->vba.SRExitTime,
+			mode_lib->vba.SRExitZ8Time,
+			mode_lib->vba.SynchronizeTimingsFinal,
+			mode_lib->vba.BlendingAndTiming,
+			v->Watermark.StutterEnterPlusExitWatermark,
+			v->Watermark.Z8StutterEnterPlusExitWatermark,
+			mode_lib->vba.ProgressiveToInterlaceUnitInOPP,
+			mode_lib->vba.Interlace,
+			v->MinTTUVBlank, mode_lib->vba.DPPPerPlane,
+			mode_lib->vba.DETBufferSizeY,
+			v->BytePerPixelY,
+			v->BytePerPixelDETY,
+			v->SwathWidthY,
+			mode_lib->vba.SwathHeightY,
+			mode_lib->vba.SwathHeightC,
+			mode_lib->vba.DCCRateLuma,
+			mode_lib->vba.DCCRateChroma,
+			mode_lib->vba.DCCFractionOfZeroSizeRequestsLuma,
+			mode_lib->vba.DCCFractionOfZeroSizeRequestsChroma,
+			mode_lib->vba.HTotal, mode_lib->vba.VTotal,
+			mode_lib->vba.PixelClock,
+			mode_lib->vba.VRatio,
+			mode_lib->vba.SourceRotation,
+			v->BlockHeight256BytesY,
+			v->BlockWidth256BytesY,
+			v->BlockHeight256BytesC,
+			v->BlockWidth256BytesC,
+			v->DCCYMaxUncompressedBlock,
+			v->DCCCMaxUncompressedBlock,
+			mode_lib->vba.VActive,
+			mode_lib->vba.DCCEnable,
+			mode_lib->vba.WritebackEnable,
+			v->ReadBandwidthSurfaceLuma,
+			v->ReadBandwidthSurfaceChroma,
+			v->meta_row_bw,
+			v->dpte_row_bw,
+			/* Output */
+			&v->StutterEfficiencyNotIncludingVBlank,
+			&v->StutterEfficiency,
+			&v->NumberOfStutterBurstsPerFrame,
+			&v->Z8StutterEfficiencyNotIncludingVBlank,
+			&v->Z8StutterEfficiency,
+			&v->Z8NumberOfStutterBurstsPerFrame,
+			&v->StutterPeriod,
+			&v->DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE);
+
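+	// With __DML_VBA_ALLOW_DELTA__ the best-case Z8 stutter numbers are recomputed
+	// with zero compressed-buffer reserved space; otherwise they simply mirror the
+	// values calculated above.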
+#ifdef __DML_VBA_ALLOW_DELTA__
+	{
+		double dummy_single[2];
+		unsigned int dummy_integer[1];
+		bool dummy_boolean[1];
+
+		// Calculate z8 stutter eff assuming 0 reserved space
+		dml32_CalculateStutterEfficiency(v->CompressedBufferSizeInkByte,
+				mode_lib->vba.UsesMALLForPStateChange,
+				v->UnboundedRequestEnabled,
+				mode_lib->vba.MetaFIFOSizeInKEntries,
+				mode_lib->vba.ZeroSizeBufferEntries,
+				mode_lib->vba.PixelChunkSizeInKByte,
+				mode_lib->vba.NumberOfActiveSurfaces,
+				mode_lib->vba.ROBBufferSizeInKByte,
+				v->TotalDataReadBandwidth,
+				mode_lib->vba.DCFCLK,
+				mode_lib->vba.ReturnBW,
+				0, //mode_lib->vba.CompbufReservedSpace64B,
+				0, //mode_lib->vba.CompbufReservedSpaceZs,
+				mode_lib->vba.SRExitTime,
+				mode_lib->vba.SRExitZ8Time,
+				mode_lib->vba.SynchronizeTimingsFinal,
+				mode_lib->vba.BlendingAndTiming,
+				v->Watermark.StutterEnterPlusExitWatermark,
+				v->Watermark.Z8StutterEnterPlusExitWatermark,
+				mode_lib->vba.ProgressiveToInterlaceUnitInOPP,
+				mode_lib->vba.Interlace,
+				v->MinTTUVBlank,
+				mode_lib->vba.DPPPerPlane,
+				mode_lib->vba.DETBufferSizeY,
+				v->BytePerPixelY, v->BytePerPixelDETY,
+				v->SwathWidthY, mode_lib->vba.SwathHeightY,
+				mode_lib->vba.SwathHeightC,
+				mode_lib->vba.DCCRateLuma,
+				mode_lib->vba.DCCRateChroma,
+				mode_lib->vba.DCCFractionOfZeroSizeRequestsLuma,
+				mode_lib->vba.DCCFractionOfZeroSizeRequestsChroma,
+				mode_lib->vba.HTotal,
+				mode_lib->vba.VTotal,
+				mode_lib->vba.PixelClock,
+				mode_lib->vba.VRatio,
+				mode_lib->vba.SourceRotation,
+				v->BlockHeight256BytesY,
+				v->BlockWidth256BytesY,
+				v->BlockHeight256BytesC,
+				v->BlockWidth256BytesC,
+				v->DCCYMaxUncompressedBlock,
+				v->DCCCMaxUncompressedBlock,
+				mode_lib->vba.VActive,
+				mode_lib->vba.DCCEnable,
+				mode_lib->vba.WritebackEnable,
+				v->ReadBandwidthSurfaceLuma,
+				v->ReadBandwidthSurfaceChroma,
+				v->meta_row_bw, v->dpte_row_bw,
+
+				/* Output */
+				&dummy_single[0],
+				&dummy_single[1],
+				&dummy_integer[0],
+				&v->Z8StutterEfficiencyNotIncludingVBlankBestCase,
+				&v->Z8StutterEfficiencyBestCase,
+				&v->Z8NumberOfStutterBurstsPerFrameBestCase,
+				&v->StutterPeriodBestCase,
+				&dummy_boolean[0]);
+	}
+#else
+	v->Z8StutterEfficiencyNotIncludingVBlankBestCase = v->Z8StutterEfficiencyNotIncludingVBlank;
+	v->Z8StutterEfficiencyBestCase = v->Z8StutterEfficiency;
+	v->Z8NumberOfStutterBurstsPerFrameBestCase = v->Z8NumberOfStutterBurstsPerFrame;
+	v->StutterPeriodBestCase = v->StutterPeriod;
+#endif
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: --- END ---\n", __func__);
+#endif
+}
+
+void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_lib)
+{
+	bool dummy_boolean[2];
+	unsigned int dummy_integer[1];
+	bool MPCCombineMethodAsNeededForPStateChangeAndVoltage;
+	bool MPCCombineMethodAsPossible;
+	enum odm_combine_mode dummy_odm_mode[DC__NUM_DPP__MAX];
+	unsigned int TotalNumberOfActiveOTG;
+	unsigned int TotalNumberOfActiveHDMIFRL;
+	unsigned int TotalNumberOfActiveDP2p0;
+	unsigned int TotalNumberOfActiveDP2p0Outputs;
+	unsigned int TotalDSCUnitsRequired;
+	unsigned int m;
+	unsigned int ReorderingBytes;
+	bool FullFrameMALLPStateMethod;
+	bool SubViewportMALLPStateMethod;
+	bool PhantomPipeMALLPStateMethod;
+	double MaxTotalVActiveRDBandwidth;
+	double DSTYAfterScaler[DC__NUM_DPP__MAX];
+	double DSTXAfterScaler[DC__NUM_DPP__MAX];
+	unsigned int MaximumMPCCombine;
+
+	struct vba_vars_st *v = &mode_lib->vba;
+	int i, j;
+	unsigned int k;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: called\n", __func__);
+#endif
+
+	/*MODE SUPPORT, VOLTAGE STATE AND SOC CONFIGURATION*/
+
+	/*Scale Ratio, taps Support Check*/
+
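+	// With the scaler disabled, the format must be a non-subsampled one and all
+	// ratios/taps must be 1. Otherwise taps must lie between 1 and 8 (horizontal
+	// taps even when greater than 1), and the scale ratios may exceed neither the
+	// SCL limits nor the tap counts; chroma gets the equivalent checks for
+	// subsampled formats.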
+	mode_lib->vba.ScaleRatioAndTapsSupport = true;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.ScalerEnabled[k] == false
+				&& ((mode_lib->vba.SourcePixelFormat[k] != dm_444_64
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_444_32
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_444_16
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_mono_16
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_mono_8
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe_alpha)
+						|| mode_lib->vba.HRatio[k] != 1.0 || mode_lib->vba.htaps[k] != 1.0
+						|| mode_lib->vba.VRatio[k] != 1.0 || mode_lib->vba.vtaps[k] != 1.0)) {
+			mode_lib->vba.ScaleRatioAndTapsSupport = false;
+		} else if (mode_lib->vba.vtaps[k] < 1.0 || mode_lib->vba.vtaps[k] > 8.0 || mode_lib->vba.htaps[k] < 1.0
+				|| mode_lib->vba.htaps[k] > 8.0
+				|| (mode_lib->vba.htaps[k] > 1.0 && (mode_lib->vba.htaps[k] % 2) == 1)
+				|| mode_lib->vba.HRatio[k] > mode_lib->vba.MaxHSCLRatio
+				|| mode_lib->vba.VRatio[k] > mode_lib->vba.MaxVSCLRatio
+				|| mode_lib->vba.HRatio[k] > mode_lib->vba.htaps[k]
+				|| mode_lib->vba.VRatio[k] > mode_lib->vba.vtaps[k]
+				|| (mode_lib->vba.SourcePixelFormat[k] != dm_444_64
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_444_32
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_444_16
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_mono_16
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_mono_8
+						&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe
+						&& (mode_lib->vba.VTAPsChroma[k] < 1
+								|| mode_lib->vba.VTAPsChroma[k] > 8
+								|| mode_lib->vba.HTAPsChroma[k] < 1
+								|| mode_lib->vba.HTAPsChroma[k] > 8
+								|| (mode_lib->vba.HTAPsChroma[k] > 1
+										&& mode_lib->vba.HTAPsChroma[k] % 2
+												== 1)
+								|| mode_lib->vba.HRatioChroma[k]
+										> mode_lib->vba.MaxHSCLRatio
+								|| mode_lib->vba.VRatioChroma[k]
+										> mode_lib->vba.MaxVSCLRatio
+								|| mode_lib->vba.HRatioChroma[k]
+										> mode_lib->vba.HTAPsChroma[k]
+								|| mode_lib->vba.VRatioChroma[k]
+										> mode_lib->vba.VTAPsChroma[k]))) {
+			mode_lib->vba.ScaleRatioAndTapsSupport = false;
+		}
+	}
+
+	/*Source Format, Pixel Format and Scan Support Check*/
+	mode_lib->vba.SourceFormatPixelAndScanSupport = true;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.SurfaceTiling[k] == dm_sw_linear
+			&& (!(!IsVertical(mode_lib->vba.SourceScan[k])) || mode_lib->vba.DCCEnable[k] == true)) {
+			mode_lib->vba.SourceFormatPixelAndScanSupport = false;
+		}
+	}
+
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		dml32_CalculateBytePerPixelAndBlockSizes(
+				mode_lib->vba.SourcePixelFormat[k],
+				mode_lib->vba.SurfaceTiling[k],
+
+				/* Output */
+				&mode_lib->vba.BytePerPixelY[k],
+				&mode_lib->vba.BytePerPixelC[k],
+				&mode_lib->vba.BytePerPixelInDETY[k],
+				&mode_lib->vba.BytePerPixelInDETC[k],
+				&mode_lib->vba.Read256BlockHeightY[k],
+				&mode_lib->vba.Read256BlockHeightC[k],
+				&mode_lib->vba.Read256BlockWidthY[k],
+				&mode_lib->vba.Read256BlockWidthC[k],
+				&mode_lib->vba.MicroTileHeightY[k],
+				&mode_lib->vba.MicroTileHeightC[k],
+				&mode_lib->vba.MicroTileWidthY[k],
+				&mode_lib->vba.MicroTileWidthC[k]);
+	}
+
+	/*Bandwidth Support Check*/
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (!IsVertical(mode_lib->vba.SourceRotation[k])) {
+			v->SwathWidthYSingleDPP[k] = mode_lib->vba.ViewportWidth[k];
+			v->SwathWidthCSingleDPP[k] = mode_lib->vba.ViewportWidthChroma[k];
+		} else {
+			v->SwathWidthYSingleDPP[k] = mode_lib->vba.ViewportHeight[k];
+			v->SwathWidthCSingleDPP[k] = mode_lib->vba.ViewportHeightChroma[k];
+		}
+	}
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		v->ReadBandwidthLuma[k] = v->SwathWidthYSingleDPP[k] * dml_ceil(v->BytePerPixelInDETY[k], 1.0)
+				/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]) * mode_lib->vba.VRatio[k];
+		v->ReadBandwidthChroma[k] = v->SwathWidthYSingleDPP[k] / 2 * dml_ceil(v->BytePerPixelInDETC[k], 2.0)
+				/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]) * mode_lib->vba.VRatio[k]
+				/ 2.0;
+	}
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.WritebackEnable[k] == true && mode_lib->vba.WritebackPixelFormat[k] == dm_444_64) {
+			v->WriteBandwidth[k] = mode_lib->vba.WritebackDestinationWidth[k]
+					* mode_lib->vba.WritebackDestinationHeight[k]
+					/ (mode_lib->vba.WritebackSourceHeight[k] * mode_lib->vba.HTotal[k]
+							/ mode_lib->vba.PixelClock[k]) * 8.0;
+		} else if (mode_lib->vba.WritebackEnable[k] == true) {
+			v->WriteBandwidth[k] = mode_lib->vba.WritebackDestinationWidth[k]
+					* mode_lib->vba.WritebackDestinationHeight[k]
+					/ (mode_lib->vba.WritebackSourceHeight[k] * mode_lib->vba.HTotal[k]
+							/ mode_lib->vba.PixelClock[k]) * 4.0;
+		} else {
+			v->WriteBandwidth[k] = 0.0;
+		}
+	}
+
+	/*Writeback Latency support check*/
+
+	mode_lib->vba.WritebackLatencySupport = true;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.WritebackEnable[k] == true
+				&& (v->WriteBandwidth[k]
+						> mode_lib->vba.WritebackInterfaceBufferSize * 1024
+								/ mode_lib->vba.WritebackLatency)) {
+			mode_lib->vba.WritebackLatencySupport = false;
+		}
+	}
+
+	/*Writeback Mode Support Check*/
+	mode_lib->vba.EnoughWritebackUnits = true;
+	mode_lib->vba.TotalNumberOfActiveWriteback = 0;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.WritebackEnable[k] == true)
+			mode_lib->vba.TotalNumberOfActiveWriteback = mode_lib->vba.TotalNumberOfActiveWriteback + 1;
+	}
+
+	if (mode_lib->vba.TotalNumberOfActiveWriteback > mode_lib->vba.MaxNumWriteback)
+		mode_lib->vba.EnoughWritebackUnits = false;
+
+	/*Writeback Scale Ratio and Taps Support Check*/
+	mode_lib->vba.WritebackScaleRatioAndTapsSupport = true;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.WritebackEnable[k] == true) {
+			if (mode_lib->vba.WritebackHRatio[k] > mode_lib->vba.WritebackMaxHSCLRatio
+					|| mode_lib->vba.WritebackVRatio[k] > mode_lib->vba.WritebackMaxVSCLRatio
+					|| mode_lib->vba.WritebackHRatio[k] < mode_lib->vba.WritebackMinHSCLRatio
+					|| mode_lib->vba.WritebackVRatio[k] < mode_lib->vba.WritebackMinVSCLRatio
+					|| mode_lib->vba.WritebackHTaps[k] > mode_lib->vba.WritebackMaxHSCLTaps
+					|| mode_lib->vba.WritebackVTaps[k] > mode_lib->vba.WritebackMaxVSCLTaps
+					|| mode_lib->vba.WritebackHRatio[k] > mode_lib->vba.WritebackHTaps[k]
+					|| mode_lib->vba.WritebackVRatio[k] > mode_lib->vba.WritebackVTaps[k]
+					|| (mode_lib->vba.WritebackHTaps[k] > 2.0
+							&& ((mode_lib->vba.WritebackHTaps[k] % 2) == 1))) {
+				mode_lib->vba.WritebackScaleRatioAndTapsSupport = false;
+			}
+			if (2.0 * mode_lib->vba.WritebackDestinationWidth[k] * (mode_lib->vba.WritebackVTaps[k] - 1)
+					* 57 > mode_lib->vba.WritebackLineBufferSize) {
+				mode_lib->vba.WritebackScaleRatioAndTapsSupport = false;
+			}
+		}
+	}
+
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		dml32_CalculateSinglePipeDPPCLKAndSCLThroughput(mode_lib->vba.HRatio[k], mode_lib->vba.HRatioChroma[k],
+				mode_lib->vba.VRatio[k], mode_lib->vba.VRatioChroma[k],
+				mode_lib->vba.MaxDCHUBToPSCLThroughput, mode_lib->vba.MaxPSCLToLBThroughput,
+				mode_lib->vba.PixelClock[k], mode_lib->vba.SourcePixelFormat[k],
+				mode_lib->vba.htaps[k], mode_lib->vba.HTAPsChroma[k], mode_lib->vba.vtaps[k],
+				mode_lib->vba.VTAPsChroma[k],
+				/* Output */
+				&mode_lib->vba.PSCL_FACTOR[k], &mode_lib->vba.PSCL_FACTOR_CHROMA[k],
+				&mode_lib->vba.MinDPPCLKUsingSingleDPP[k]);
+	}
+
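+	// Derive the maximum supported swath width per surface: a per-format/rotation
+	// hardware limit (8192 for linear, down to 3072 for rotated 64bpp with DCC)
+	// combined with the width that still fits in the line buffer; chroma gets
+	// half the luma limit for 4:2:0 formats.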
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+
+		if (mode_lib->vba.SurfaceTiling[k] == dm_sw_linear) {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma = 8192;
+		} else if (!IsVertical(mode_lib->vba.SourceRotation[k]) && v->BytePerPixelC[k] > 0
+				&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe_alpha) {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma = 7680;
+		} else if (IsVertical(mode_lib->vba.SourceRotation[k]) && v->BytePerPixelC[k] > 0
+				&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe_alpha) {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma = 4320;
+		} else if (mode_lib->vba.SourcePixelFormat[k] == dm_rgbe_alpha) {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma = 3840;
+		} else if (IsVertical(mode_lib->vba.SourceRotation[k]) && v->BytePerPixelY[k] == 8 &&
+				mode_lib->vba.DCCEnable[k] == true) {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma = 3072;
+		} else {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma = 6144;
+		}
+
+		if (mode_lib->vba.SourcePixelFormat[k] == dm_420_8 || mode_lib->vba.SourcePixelFormat[k] == dm_420_10
+				|| mode_lib->vba.SourcePixelFormat[k] == dm_420_12) {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportChroma = v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma / 2.0;
+		} else {
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportChroma = v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma;
+		}
+		v->MaximumSwathWidthInLineBufferLuma = mode_lib->vba.LineBufferSizeFinal
+				* dml_max(mode_lib->vba.HRatio[k], 1.0) / mode_lib->vba.LBBitPerPixel[k]
+				/ (mode_lib->vba.vtaps[k] + dml_max(dml_ceil(mode_lib->vba.VRatio[k], 1.0) - 2, 0.0));
+		if (v->BytePerPixelC[k] == 0.0) {
+			v->MaximumSwathWidthInLineBufferChroma = 0;
+		} else {
+			v->MaximumSwathWidthInLineBufferChroma = mode_lib->vba.LineBufferSizeFinal
+					* dml_max(mode_lib->vba.HRatioChroma[k], 1.0) / mode_lib->vba.LBBitPerPixel[k]
+					/ (mode_lib->vba.VTAPsChroma[k]
+							+ dml_max(dml_ceil(mode_lib->vba.VRatioChroma[k], 1.0) - 2,
+									0.0));
+		}
+		v->MaximumSwathWidthLuma[k] = dml_min(v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportLuma,
+				v->MaximumSwathWidthInLineBufferLuma);
+		v->MaximumSwathWidthChroma[k] = dml_min(v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.MaximumSwathWidthSupportChroma,
+				v->MaximumSwathWidthInLineBufferChroma);
+	}
+
+	/*Number Of DSC Slices*/
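+	// For each OTG master (BlendingAndTiming[k] == k) pick the DSC slice count
+	// from the back-end pixel clock (presumably in MHz): above 4800 use
+	// ceil(clock / 600) rounded up to a multiple of 4, then 8/4/2/1 slices at the
+	// 2400/1200/340 thresholds; non-masters get 0.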
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.BlendingAndTiming[k] == k) {
+			if (mode_lib->vba.PixelClockBackEnd[k] > 4800) {
+				mode_lib->vba.NumberOfDSCSlices[k] = dml_ceil(mode_lib->vba.PixelClockBackEnd[k] / 600,
+						4);
+			} else if (mode_lib->vba.PixelClockBackEnd[k] > 2400) {
+				mode_lib->vba.NumberOfDSCSlices[k] = 8;
+			} else if (mode_lib->vba.PixelClockBackEnd[k] > 1200) {
+				mode_lib->vba.NumberOfDSCSlices[k] = 4;
+			} else if (mode_lib->vba.PixelClockBackEnd[k] > 340) {
+				mode_lib->vba.NumberOfDSCSlices[k] = 2;
+			} else {
+				mode_lib->vba.NumberOfDSCSlices[k] = 1;
+			}
+		} else {
+			mode_lib->vba.NumberOfDSCSlices[k] = 0;
+		}
+	}
+
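+	// First swath/DET pass with ForceSingleDPP: only the per-surface viewport
+	// size support result is kept here, everything else lands in dummy outputs.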
+	dml32_CalculateSwathAndDETConfiguration(
+			mode_lib->vba.DETSizeOverride,
+			mode_lib->vba.UsesMALLForPStateChange,
+			mode_lib->vba.ConfigReturnBufferSizeInKByte,
+			mode_lib->vba.MaxTotalDETInKByte,
+			mode_lib->vba.MinCompressedBufferSizeInKByte,
+			1, /* ForceSingleDPP */
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.nomDETInKByte,
+			mode_lib->vba.UseUnboundedRequesting,
+			mode_lib->vba.CompressedBufferSegmentSizeInkByteFinal,
+			mode_lib->vba.Output,
+			mode_lib->vba.ReadBandwidthLuma,
+			mode_lib->vba.ReadBandwidthChroma,
+			mode_lib->vba.MaximumSwathWidthLuma,
+			mode_lib->vba.MaximumSwathWidthChroma,
+			mode_lib->vba.SourceRotation,
+			mode_lib->vba.ViewportStationary,
+			mode_lib->vba.SourcePixelFormat,
+			mode_lib->vba.SurfaceTiling,
+			mode_lib->vba.ViewportWidth,
+			mode_lib->vba.ViewportHeight,
+			mode_lib->vba.ViewportXStartY,
+			mode_lib->vba.ViewportYStartY,
+			mode_lib->vba.ViewportXStartC,
+			mode_lib->vba.ViewportYStartC,
+			mode_lib->vba.SurfaceWidthY,
+			mode_lib->vba.SurfaceWidthC,
+			mode_lib->vba.SurfaceHeightY,
+			mode_lib->vba.SurfaceHeightC,
+			mode_lib->vba.Read256BlockHeightY,
+			mode_lib->vba.Read256BlockHeightC,
+			mode_lib->vba.Read256BlockWidthY,
+			mode_lib->vba.Read256BlockWidthC,
+			dummy_odm_mode,
+			mode_lib->vba.BlendingAndTiming,
+			mode_lib->vba.BytePerPixelY,
+			mode_lib->vba.BytePerPixelC,
+			mode_lib->vba.BytePerPixelInDETY,
+			mode_lib->vba.BytePerPixelInDETC,
+			mode_lib->vba.HActive,
+			mode_lib->vba.HRatio,
+			mode_lib->vba.HRatioChroma,
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[0], /*  Integer DPPPerSurface[] */
+
+			/* Output */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[1], /* Long            swath_width_luma_ub[] */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[2], /* Long            swath_width_chroma_ub[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_double_array[0], /* Long            SwathWidth[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_double_array[1], /* Long            SwathWidthChroma[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[3], /* Integer         SwathHeightY[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[4], /* Integer         SwathHeightC[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[5], /* Long            DETBufferSizeInKByte[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[6], /* Long            DETBufferSizeY[]  */
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_integer_array[7], /* Long            DETBufferSizeC[]  */
+			&dummy_boolean[0], /* bool           *UnboundedRequestEnabled  */
+			&dummy_integer[0], /* Long           *CompressedBufferSizeInkByte  */
+			mode_lib->vba.SingleDPPViewportSizeSupportPerSurface,/* bool ViewportSizeSupportPerSurface[] */
+			&dummy_boolean[1]); /* bool           *ViewportSizeSupport */
+
+	MPCCombineMethodAsNeededForPStateChangeAndVoltage = false;
+	MPCCombineMethodAsPossible = false;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.MPCCombineUse[k] == dm_mpc_reduce_voltage_and_clocks)
+			MPCCombineMethodAsNeededForPStateChangeAndVoltage = true;
+		if (mode_lib->vba.MPCCombineUse[k] == dm_mpc_always_when_possible)
+			MPCCombineMethodAsPossible = true;
+	}
+	mode_lib->vba.MPCCombineMethodIncompatible = MPCCombineMethodAsNeededForPStateChangeAndVoltage
+			&& MPCCombineMethodAsPossible;
+
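+	// Walk every voltage state i and both pipe-allocation scenarios j, establishing
+	// the ODM mode, pipe count and required DISPCLK/DPPCLK for each surface.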
+	for (i = 0; i < v->soc.num_states; i++) {
+		for (j = 0; j < 2; j++) {
+			bool NoChroma;
+			mode_lib->vba.TotalNumberOfActiveDPP[i][j] = 0;
+			mode_lib->vba.TotalAvailablePipesSupport[i][j] = true;
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+
+				bool TotalAvailablePipesSupportNoDSC;
+				unsigned int NumberOfDPPNoDSC;
+				enum odm_combine_mode ODMModeNoDSC = dm_odm_combine_mode_disabled;
+				double RequiredDISPCLKPerSurfaceNoDSC;
+				bool TotalAvailablePipesSupportDSC;
+				unsigned int NumberOfDPPDSC;
+				enum odm_combine_mode ODMModeDSC = dm_odm_combine_mode_disabled;
+				double RequiredDISPCLKPerSurfaceDSC;
+
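+				/*
+				 * Evaluate ODM both without and with DSC; which pair of
+				 * results is committed depends on whether the output link
+				 * calculation below decides that DSC is required.
+				 */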
+				dml32_CalculateODMMode(
+						mode_lib->vba.MaximumPixelsPerLinePerDSCUnit,
+						mode_lib->vba.HActive[k],
+						mode_lib->vba.Output[k],
+						mode_lib->vba.ODMUse[k],
+						mode_lib->vba.MaxDispclk[i],
+						mode_lib->vba.MaxDispclk[v->soc.num_states - 1],
+						false,
+						mode_lib->vba.TotalNumberOfActiveDPP[i][j],
+						mode_lib->vba.MaxNumDPP,
+						mode_lib->vba.PixelClock[k],
+						mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading,
+						mode_lib->vba.DISPCLKRampingMargin,
+						mode_lib->vba.DISPCLKDPPCLKVCOSpeed,
+
+						/* Output */
+						&TotalAvailablePipesSupportNoDSC,
+						&NumberOfDPPNoDSC,
+						&ODMModeNoDSC,
+						&RequiredDISPCLKPerSurfaceNoDSC);
+
+				dml32_CalculateODMMode(
+						mode_lib->vba.MaximumPixelsPerLinePerDSCUnit,
+						mode_lib->vba.HActive[k],
+						mode_lib->vba.Output[k],
+						mode_lib->vba.ODMUse[k],
+						mode_lib->vba.MaxDispclk[i],
+						mode_lib->vba.MaxDispclk[v->soc.num_states - 1],
+						true,
+						mode_lib->vba.TotalNumberOfActiveDPP[i][j],
+						mode_lib->vba.MaxNumDPP,
+						mode_lib->vba.PixelClock[k],
+						mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading,
+						mode_lib->vba.DISPCLKRampingMargin,
+						mode_lib->vba.DISPCLKDPPCLKVCOSpeed,
+
+						/* Output */
+						&TotalAvailablePipesSupportDSC,
+						&NumberOfDPPDSC,
+						&ODMModeDSC,
+						&RequiredDISPCLKPerSurfaceDSC);
+
+				dml32_CalculateOutputLink(
+						mode_lib->vba.PHYCLKPerState[i],
+						mode_lib->vba.PHYCLKD18PerState[i],
+						mode_lib->vba.PHYCLKD32PerState[i],
+						mode_lib->vba.Downspreading,
+						(mode_lib->vba.BlendingAndTiming[k] == k),
+						mode_lib->vba.Output[k],
+						mode_lib->vba.OutputFormat[k],
+						mode_lib->vba.HTotal[k],
+						mode_lib->vba.HActive[k],
+						mode_lib->vba.PixelClockBackEnd[k],
+						mode_lib->vba.ForcedOutputLinkBPP[k],
+						mode_lib->vba.DSCInputBitPerComponent[k],
+						mode_lib->vba.NumberOfDSCSlices[k],
+						mode_lib->vba.AudioSampleRate[k],
+						mode_lib->vba.AudioSampleLayout[k],
+						ODMModeNoDSC,
+						ODMModeDSC,
+						mode_lib->vba.DSCEnable[k],
+						mode_lib->vba.OutputLinkDPLanes[k],
+						mode_lib->vba.OutputLinkDPRate[k],
+
+						/* Output */
+						&mode_lib->vba.RequiresDSC[i][k],
+						&mode_lib->vba.RequiresFEC[i][k],
+						&mode_lib->vba.OutputBppPerState[i][k],
+						&mode_lib->vba.OutputTypePerState[i][k],
+						&mode_lib->vba.OutputRatePerState[i][k],
+						&mode_lib->vba.RequiredSlots[i][k]);
+
+				if (mode_lib->vba.RequiresDSC[i][k] == false) {
+					mode_lib->vba.ODMCombineEnablePerState[i][k] = ODMModeNoDSC;
+					mode_lib->vba.RequiredDISPCLKPerSurface[i][j][k] =
+							RequiredDISPCLKPerSurfaceNoDSC;
+					if (!TotalAvailablePipesSupportNoDSC)
+						mode_lib->vba.TotalAvailablePipesSupport[i][j] = false;
+					mode_lib->vba.TotalNumberOfActiveDPP[i][j] =
+							mode_lib->vba.TotalNumberOfActiveDPP[i][j] + NumberOfDPPNoDSC;
+				} else {
+					mode_lib->vba.ODMCombineEnablePerState[i][k] = ODMModeDSC;
+					mode_lib->vba.RequiredDISPCLKPerSurface[i][j][k] =
+							RequiredDISPCLKPerSurfaceDSC;
+					if (!TotalAvailablePipesSupportDSC)
+						mode_lib->vba.TotalAvailablePipesSupport[i][j] = false;
+					mode_lib->vba.TotalNumberOfActiveDPP[i][j] =
+							mode_lib->vba.TotalNumberOfActiveDPP[i][j] + NumberOfDPPDSC;
+				}
+			}
+
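+			/* Decide MPC combine per surface: ODM 4:1/2:1 already uses 4/2 pipes,
+			 * dm_mpc_never forces one pipe, a single DPP is kept when its DPPCLK
+			 * (with downspread, rounded to DFS granularity) and viewport fit,
+			 * otherwise combine into two pipes while pipes remain.
+			 */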
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1) {
+					mode_lib->vba.MPCCombine[i][j][k] = false;
+					mode_lib->vba.NoOfDPP[i][j][k] = 4;
+				} else if (mode_lib->vba.ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_2to1) {
+					mode_lib->vba.MPCCombine[i][j][k] = false;
+					mode_lib->vba.NoOfDPP[i][j][k] = 2;
+				} else if (mode_lib->vba.MPCCombineUse[k] == dm_mpc_never) {
+					mode_lib->vba.MPCCombine[i][j][k] = false;
+					mode_lib->vba.NoOfDPP[i][j][k] = 1;
+				} else if (dml32_RoundToDFSGranularity(
+						mode_lib->vba.MinDPPCLKUsingSingleDPP[k]
+								* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading
+												/ 100), 1,
+						mode_lib->vba.DISPCLKDPPCLKVCOSpeed) <= mode_lib->vba.MaxDppclk[i] &&
+				mode_lib->vba.SingleDPPViewportSizeSupportPerSurface[k] == true) {
+					mode_lib->vba.MPCCombine[i][j][k] = false;
+					mode_lib->vba.NoOfDPP[i][j][k] = 1;
+				} else if (mode_lib->vba.TotalNumberOfActiveDPP[i][j] < mode_lib->vba.MaxNumDPP) {
+					mode_lib->vba.MPCCombine[i][j][k] = true;
+					mode_lib->vba.NoOfDPP[i][j][k] = 2;
+					mode_lib->vba.TotalNumberOfActiveDPP[i][j] =
+							mode_lib->vba.TotalNumberOfActiveDPP[i][j] + 1;
+				} else {
+					mode_lib->vba.MPCCombine[i][j][k] = false;
+					mode_lib->vba.NoOfDPP[i][j][k] = 1;
+					mode_lib->vba.TotalAvailablePipesSupport[i][j] = false;
+				}
+			}
+
+			mode_lib->vba.TotalNumberOfSingleDPPSurfaces[i][j] = 0;
+			NoChroma = true;
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.NoOfDPP[i][j][k] == 1)
+					mode_lib->vba.TotalNumberOfSingleDPPSurfaces[i][j] =
+							mode_lib->vba.TotalNumberOfSingleDPPSurfaces[i][j] + 1;
+				if (mode_lib->vba.SourcePixelFormat[k] == dm_420_8
+						|| mode_lib->vba.SourcePixelFormat[k] == dm_420_10
+						|| mode_lib->vba.SourcePixelFormat[k] == dm_420_12
+						|| mode_lib->vba.SourcePixelFormat[k] == dm_rgbe_alpha) {
+					NoChroma = false;
+				}
+			}
+
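+			/* If unbounded requesting cannot be used in the j == 1 pass, keep
+			 * MPC-combining the eligible single-DPP surface with the highest
+			 * read bandwidth until no pipes or single-DPP surfaces are left.
+			 */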
+			if (j == 1 && !dml32_UnboundedRequest(mode_lib->vba.UseUnboundedRequesting,
+							mode_lib->vba.TotalNumberOfActiveDPP[i][j], NoChroma,
+							mode_lib->vba.Output[0])) {
+				while (!(mode_lib->vba.TotalNumberOfActiveDPP[i][j] >= mode_lib->vba.MaxNumDPP
+						|| mode_lib->vba.TotalNumberOfSingleDPPSurfaces[i][j] == 0)) {
+					double BWOfNonCombinedSurfaceOfMaximumBandwidth = 0;
+					unsigned int NumberOfNonCombinedSurfaceOfMaximumBandwidth = 0;
+
+					for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+						if (mode_lib->vba.MPCCombineUse[k]
+							!= dm_mpc_never &&
+							mode_lib->vba.MPCCombineUse[k] != dm_mpc_reduce_voltage &&
+							mode_lib->vba.ReadBandwidthLuma[k] +
+							mode_lib->vba.ReadBandwidthChroma[k] >
+							BWOfNonCombinedSurfaceOfMaximumBandwidth &&
+							(mode_lib->vba.ODMCombineEnablePerState[i][k] !=
+							dm_odm_combine_mode_2to1 &&
+							mode_lib->vba.ODMCombineEnablePerState[i][k] !=
+							dm_odm_combine_mode_4to1) &&
+								mode_lib->vba.MPCCombine[i][j][k] == false) {
+							BWOfNonCombinedSurfaceOfMaximumBandwidth =
+								mode_lib->vba.ReadBandwidthLuma[k]
+								+ mode_lib->vba.ReadBandwidthChroma[k];
+							NumberOfNonCombinedSurfaceOfMaximumBandwidth = k;
+						}
+					}
+					mode_lib->vba.MPCCombine[i][j][NumberOfNonCombinedSurfaceOfMaximumBandwidth] =
+							true;
+					mode_lib->vba.NoOfDPP[i][j][NumberOfNonCombinedSurfaceOfMaximumBandwidth] = 2;
+					mode_lib->vba.TotalNumberOfActiveDPP[i][j] =
+							mode_lib->vba.TotalNumberOfActiveDPP[i][j] + 1;
+					mode_lib->vba.TotalNumberOfSingleDPPSurfaces[i][j] =
+							mode_lib->vba.TotalNumberOfSingleDPPSurfaces[i][j] - 1;
+				}
+			}
+
+			//DISPCLK/DPPCLK
+			mode_lib->vba.WritebackRequiredDISPCLK = 0;
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.WritebackEnable[k]) {
+					mode_lib->vba.WritebackRequiredDISPCLK = dml_max(
+							mode_lib->vba.WritebackRequiredDISPCLK,
+							dml32_CalculateWriteBackDISPCLK(
+									mode_lib->vba.WritebackPixelFormat[k],
+									mode_lib->vba.PixelClock[k],
+									mode_lib->vba.WritebackHRatio[k],
+									mode_lib->vba.WritebackVRatio[k],
+									mode_lib->vba.WritebackHTaps[k],
+									mode_lib->vba.WritebackVTaps[k],
+									mode_lib->vba.WritebackSourceWidth[k],
+									mode_lib->vba.WritebackDestinationWidth[k],
+									mode_lib->vba.HTotal[k],
+									mode_lib->vba.WritebackLineBufferSize,
+									mode_lib->vba.DISPCLKDPPCLKVCOSpeed));
+				}
+			}
+
+			mode_lib->vba.RequiredDISPCLK[i][j] = mode_lib->vba.WritebackRequiredDISPCLK;
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				mode_lib->vba.RequiredDISPCLK[i][j] = dml_max(mode_lib->vba.RequiredDISPCLK[i][j],
+						mode_lib->vba.RequiredDISPCLKPerSurface[i][j][k]);
+			}
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+				mode_lib->vba.NoOfDPPThisState[k] = mode_lib->vba.NoOfDPP[i][j][k];
+
+			dml32_CalculateDPPCLK(mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading,
+					mode_lib->vba.DISPCLKDPPCLKVCOSpeed, mode_lib->vba.MinDPPCLKUsingSingleDPP,
+					mode_lib->vba.NoOfDPPThisState,
+					/* Output */
+					&mode_lib->vba.GlobalDPPCLK, mode_lib->vba.RequiredDPPCLKThisState);
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+				mode_lib->vba.RequiredDPPCLK[i][j][k] = mode_lib->vba.RequiredDPPCLKThisState[k];
+
+			mode_lib->vba.DISPCLK_DPPCLK_Support[i][j] = !((mode_lib->vba.RequiredDISPCLK[i][j]
+					> mode_lib->vba.MaxDispclk[i])
+					|| (mode_lib->vba.GlobalDPPCLK > mode_lib->vba.MaxDppclk[i]));
+
+			if (mode_lib->vba.TotalNumberOfActiveDPP[i][j] > mode_lib->vba.MaxNumDPP)
+				mode_lib->vba.TotalAvailablePipesSupport[i][j] = false;
+		} // j
+	} // i (VOLTAGE_STATE)
+
+	/* Total Available OTG, HDMIFRL, DP Support Check */
+	TotalNumberOfActiveOTG = 0;
+	TotalNumberOfActiveHDMIFRL = 0;
+	TotalNumberOfActiveDP2p0 = 0;
+	TotalNumberOfActiveDP2p0Outputs = 0;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.BlendingAndTiming[k] == k) {
+			TotalNumberOfActiveOTG = TotalNumberOfActiveOTG + 1;
+			if (mode_lib->vba.Output[k] == dm_dp2p0) {
+				TotalNumberOfActiveDP2p0 = TotalNumberOfActiveDP2p0 + 1;
+				if (mode_lib->vba.OutputMultistreamId[k]
+						== k || mode_lib->vba.OutputMultistreamEn[k] == false) {
+					TotalNumberOfActiveDP2p0Outputs = TotalNumberOfActiveDP2p0Outputs + 1;
+				}
+			}
+		}
+	}
+
+	mode_lib->vba.NumberOfOTGSupport = (TotalNumberOfActiveOTG <= mode_lib->vba.MaxNumOTG);
+	mode_lib->vba.NumberOfHDMIFRLSupport = (TotalNumberOfActiveHDMIFRL <= mode_lib->vba.MaxNumHDMIFRLOutputs);
+	mode_lib->vba.NumberOfDP2p0Support = (TotalNumberOfActiveDP2p0 <= mode_lib->vba.MaxNumDP2p0Streams
+			&& TotalNumberOfActiveDP2p0Outputs <= mode_lib->vba.MaxNumDP2p0Outputs);
+
+	/* Display IO and DSC Support Check */
+	mode_lib->vba.NonsupportedDSCInputBPC = false;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (!(mode_lib->vba.DSCInputBitPerComponent[k] == 12.0
+				|| mode_lib->vba.DSCInputBitPerComponent[k] == 10.0
+				|| mode_lib->vba.DSCInputBitPerComponent[k] == 8.0
+				|| mode_lib->vba.DSCInputBitPerComponent[k] >
+				mode_lib->vba.MaximumDSCBitsPerComponent)) {
+			mode_lib->vba.NonsupportedDSCInputBPC = true;
+		}
+	}
+
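+	// For each MST master stream, sum the payload slots of the master and every
+	// attached stream and flag the state when the total exceeds the link budget
+	// (63 slots for DP, 64 for DP 2.0).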
+	for (i = 0; i < v->soc.num_states; ++i) {
+		unsigned int TotalSlots;
+
+		mode_lib->vba.ExceededMultistreamSlots[i] = false;
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.OutputMultistreamEn[k] == true && mode_lib->vba.OutputMultistreamId[k] == k) {
+				TotalSlots = mode_lib->vba.RequiredSlots[i][k];
+				for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) {
+					if (mode_lib->vba.OutputMultistreamId[j] == k)
+						TotalSlots = TotalSlots + mode_lib->vba.RequiredSlots[i][j];
+				}
+				if (mode_lib->vba.Output[k] == dm_dp && TotalSlots > 63)
+					mode_lib->vba.ExceededMultistreamSlots[i] = true;
+				if (mode_lib->vba.Output[k] == dm_dp2p0 && TotalSlots > 64)
+					mode_lib->vba.ExceededMultistreamSlots[i] = true;
+			}
+		}
+		mode_lib->vba.LinkCapacitySupport[i] = true;
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.BlendingAndTiming[k] == k
+					&& (mode_lib->vba.Output[k] == dm_dp || mode_lib->vba.Output[k] == dm_dp2p0
+							|| mode_lib->vba.Output[k] == dm_edp
+							|| mode_lib->vba.Output[k] == dm_hdmi)
+					&& mode_lib->vba.OutputBppPerState[i][k] == 0) {
+				mode_lib->vba.LinkCapacitySupport[i] = false;
+			}
+		}
+	}
+
+	mode_lib->vba.P2IWith420 = false;
+	mode_lib->vba.DSCOnlyIfNecessaryWithBPP = false;
+	mode_lib->vba.DSC422NativeNotSupported = false;
+	mode_lib->vba.LinkRateDoesNotMatchDPVersion = false;
+	mode_lib->vba.LinkRateForMultistreamNotIndicated = false;
+	mode_lib->vba.BPPForMultistreamNotIndicated = false;
+	mode_lib->vba.MultistreamWithHDMIOreDP = false;
+	mode_lib->vba.MSOOrODMSplitWithNonDPLink = false;
+	mode_lib->vba.NotEnoughLanesForMSO = false;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.BlendingAndTiming[k] == k
+				&& (mode_lib->vba.Output[k] == dm_dp || mode_lib->vba.Output[k] == dm_dp2p0
+						|| mode_lib->vba.Output[k] == dm_edp
+						|| mode_lib->vba.Output[k] == dm_hdmi)) {
+			if (mode_lib->vba.OutputFormat[k]
+					== dm_420 && mode_lib->vba.Interlace[k] == 1 &&
+					mode_lib->vba.ProgressiveToInterlaceUnitInOPP == true)
+				mode_lib->vba.P2IWith420 = true;
+
+			if (mode_lib->vba.DSCEnable[k] && mode_lib->vba.ForcedOutputLinkBPP[k] != 0)
+				mode_lib->vba.DSCOnlyIfNecessaryWithBPP = true;
+			if ((mode_lib->vba.DSCEnable[k] || mode_lib->vba.DSCEnable[k])
+					&& mode_lib->vba.OutputFormat[k] == dm_n422
+					&& !mode_lib->vba.DSC422NativeSupport)
+				mode_lib->vba.DSC422NativeNotSupported = true;
+
+			if (((mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_hbr
+					|| mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_hbr2
+					|| mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_hbr3)
+					&& mode_lib->vba.Output[k] != dm_dp && mode_lib->vba.Output[k] != dm_edp)
+					|| ((mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_uhbr10
+							|| mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_uhbr13p5
+							|| mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_uhbr20)
+							&& mode_lib->vba.Output[k] != dm_dp2p0))
+				mode_lib->vba.LinkRateDoesNotMatchDPVersion = true;
+
+			if (mode_lib->vba.OutputMultistreamEn[k] == true) {
+				if (mode_lib->vba.OutputMultistreamId[k] == k
+					&& mode_lib->vba.OutputLinkDPRate[k] == dm_dp_rate_na)
+					mode_lib->vba.LinkRateForMultistreamNotIndicated = true;
+				if (mode_lib->vba.OutputMultistreamId[k] == k && mode_lib->vba.ForcedOutputLinkBPP[k] == 0)
+					mode_lib->vba.BPPForMultistreamNotIndicated = true;
+				for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) {
+					if (mode_lib->vba.OutputMultistreamId[k] == j && mode_lib->vba.OutputMultistreamEn[k]
+						&& mode_lib->vba.ForcedOutputLinkBPP[k] == 0)
+						mode_lib->vba.BPPForMultistreamNotIndicated = true;
+				}
+			}
+
+			if ((mode_lib->vba.Output[k] == dm_edp || mode_lib->vba.Output[k] == dm_hdmi)) {
+				if (mode_lib->vba.OutputMultistreamId[k] == k && mode_lib->vba.OutputMultistreamEn[k])
+					mode_lib->vba.MultistreamWithHDMIOreDP = true;
+
+				for (j = 0; j < mode_lib->vba.NumberOfActiveSurfaces; ++j) {
+					if (mode_lib->vba.OutputMultistreamEn[k] == true && mode_lib->vba.OutputMultistreamId[k] == j)
+						mode_lib->vba.MultistreamWithHDMIOreDP = true;
+				}
+			}
+
+			if (mode_lib->vba.Output[k] != dm_dp
+					&& (mode_lib->vba.ODMUse[k] == dm_odm_split_policy_1to2
+							|| mode_lib->vba.ODMUse[k] == dm_odm_mso_policy_1to2
+							|| mode_lib->vba.ODMUse[k] == dm_odm_mso_policy_1to4))
+				mode_lib->vba.MSOOrODMSplitWithNonDPLink = true;
+
+			if ((mode_lib->vba.ODMUse[k] == dm_odm_mso_policy_1to2
+					&& mode_lib->vba.OutputLinkDPLanes[k] < 2)
+					|| (mode_lib->vba.ODMUse[k] == dm_odm_mso_policy_1to4
+							&& mode_lib->vba.OutputLinkDPLanes[k] < 4))
+				mode_lib->vba.NotEnoughLanesForMSO = true;
+		}
+	}
+
+	for (i = 0; i < v->soc.num_states; ++i) {
+		mode_lib->vba.DTBCLKRequiredMoreThanSupported[i] = false;
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.BlendingAndTiming[k] == k
+					&& dml32_RequiredDTBCLK(mode_lib->vba.RequiresDSC[i][k],
+							mode_lib->vba.PixelClockBackEnd[k],
+							mode_lib->vba.OutputFormat[k],
+							mode_lib->vba.OutputBppPerState[i][k],
+							mode_lib->vba.NumberOfDSCSlices[k], mode_lib->vba.HTotal[k],
+							mode_lib->vba.HActive[k], mode_lib->vba.AudioSampleRate[k],
+							mode_lib->vba.AudioSampleLayout[k])
+							> mode_lib->vba.DTBCLKPerState[i]) {
+				mode_lib->vba.DTBCLKRequiredMoreThanSupported[i] = true;
+			}
+		}
+	}
+
+	for (i = 0; i < v->soc.num_states; ++i) {
+		mode_lib->vba.ODMCombine2To1SupportCheckOK[i] = true;
+		mode_lib->vba.ODMCombine4To1SupportCheckOK[i] = true;
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.BlendingAndTiming[k] == k
+					&& mode_lib->vba.ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_2to1
+					&& mode_lib->vba.Output[k] == dm_hdmi) {
+				mode_lib->vba.ODMCombine2To1SupportCheckOK[i] = false;
+			}
+			if (mode_lib->vba.BlendingAndTiming[k] == k
+					&& mode_lib->vba.ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1
+					&& (mode_lib->vba.Output[k] == dm_dp || mode_lib->vba.Output[k] == dm_edp
+							|| mode_lib->vba.Output[k] == dm_hdmi)) {
+				mode_lib->vba.ODMCombine4To1SupportCheckOK[i] = false;
+			}
+		}
+	}
+
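+	// Required DSCCLK is roughly one third of the back-end pixel clock per DSC
+	// engine, so ODM 4:1/2:1 divide by 12/6, halved again for 4:2:0 and native
+	// 4:2:2 via DSCFormatFactor, and compared against MaxDSCCLK derated by the
+	// down-spread margin.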
+	for (i = 0; i < v->soc.num_states; i++) {
+		mode_lib->vba.DSCCLKRequiredMoreThanSupported[i] = false;
+		for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+			if (mode_lib->vba.BlendingAndTiming[k] == k) {
+				if (mode_lib->vba.Output[k] == dm_dp || mode_lib->vba.Output[k] == dm_dp2p0
+						|| mode_lib->vba.Output[k] == dm_edp) {
+					if (mode_lib->vba.OutputFormat[k] == dm_420) {
+						mode_lib->vba.DSCFormatFactor = 2;
+					} else if (mode_lib->vba.OutputFormat[k] == dm_444) {
+						mode_lib->vba.DSCFormatFactor = 1;
+					} else if (mode_lib->vba.OutputFormat[k] == dm_n422) {
+						mode_lib->vba.DSCFormatFactor = 2;
+					} else {
+						mode_lib->vba.DSCFormatFactor = 1;
+					}
+					if (mode_lib->vba.RequiresDSC[i][k] == true) {
+						if (mode_lib->vba.ODMCombineEnablePerState[i][k]
+								== dm_odm_combine_mode_4to1) {
+							if (mode_lib->vba.PixelClockBackEnd[k] / 12.0 / mode_lib->vba.DSCFormatFactor > (1.0 - mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * mode_lib->vba.MaxDSCCLK[i])
+								mode_lib->vba.DSCCLKRequiredMoreThanSupported[i] = true;
+						} else if (mode_lib->vba.ODMCombineEnablePerState[i][k]
+								== dm_odm_combine_mode_2to1) {
+							if (mode_lib->vba.PixelClockBackEnd[k] / 6.0 / mode_lib->vba.DSCFormatFactor > (1.0 - mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * mode_lib->vba.MaxDSCCLK[i])
+								mode_lib->vba.DSCCLKRequiredMoreThanSupported[i] = true;
+						} else {
+							if (mode_lib->vba.PixelClockBackEnd[k] / 3.0 / mode_lib->vba.DSCFormatFactor > (1.0 - mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0) * mode_lib->vba.MaxDSCCLK[i])
+								mode_lib->vba.DSCCLKRequiredMoreThanSupported[i] = true;
+						}
+					}
+				}
+			}
+		}
+	}
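+	/*
+	 * The divisors above reflect DSC engines that process up to three pixels
+	 * per DSCCLK cycle: PixelClockBackEnd/3 for a single engine, /6 with 2:1
+	 * ODM and /12 with 4:1 ODM.  DSCFormatFactor of 2 for 4:2:0 and native
+	 * 4:2:2 accounts for two pixels being packed per container.
+	 */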
+
+	/* Check DSC Unit and Slices Support */
+	TotalDSCUnitsRequired = 0;
+
+	for (i = 0; i < v->soc.num_states; ++i) {
+		mode_lib->vba.NotEnoughDSCUnits[i] = false;
+		mode_lib->vba.NotEnoughDSCSlices[i] = false;
+		TotalDSCUnitsRequired = 0;
+		mode_lib->vba.PixelsPerLinePerDSCUnitSupport[i] = true;
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			if (mode_lib->vba.RequiresDSC[i][k] == true) {
+				if (mode_lib->vba.ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_4to1) {
+					if (mode_lib->vba.HActive[k]
+							> 4 * mode_lib->vba.MaximumPixelsPerLinePerDSCUnit)
+						mode_lib->vba.PixelsPerLinePerDSCUnitSupport[i] = false;
+					TotalDSCUnitsRequired = TotalDSCUnitsRequired + 4;
+					if (mode_lib->vba.NumberOfDSCSlices[k] > 16)
+						mode_lib->vba.NotEnoughDSCSlices[i] = true;
+				} else if (mode_lib->vba.ODMCombineEnablePerState[i][k] == dm_odm_combine_mode_2to1) {
+					if (mode_lib->vba.HActive[k]
+							> 2 * mode_lib->vba.MaximumPixelsPerLinePerDSCUnit)
+						mode_lib->vba.PixelsPerLinePerDSCUnitSupport[i] = false;
+					TotalDSCUnitsRequired = TotalDSCUnitsRequired + 2;
+					if (mode_lib->vba.NumberOfDSCSlices[k] > 8)
+						mode_lib->vba.NotEnoughDSCSlices[i] = true;
+				} else {
+					if (mode_lib->vba.HActive[k] > mode_lib->vba.MaximumPixelsPerLinePerDSCUnit)
+						mode_lib->vba.PixelsPerLinePerDSCUnitSupport[i] = false;
+					TotalDSCUnitsRequired = TotalDSCUnitsRequired + 1;
+					if (mode_lib->vba.NumberOfDSCSlices[k] > 4)
+						mode_lib->vba.NotEnoughDSCSlices[i] = true;
+				}
+			}
+		}
+		if (TotalDSCUnitsRequired > mode_lib->vba.NumberOfDSC)
+			mode_lib->vba.NotEnoughDSCUnits[i] = true;
+	}
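+	/*
+	 * The slice limits above appear to follow from 4 slices per DSC engine:
+	 * up to 8 slices with 2:1 ODM (two engines) and 16 with 4:1 ODM (four
+	 * engines).  The summed engine count is then compared against NumberOfDSC.
+	 */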
+
+	/*DSC Delay per state*/
+	for (i = 0; i < v->soc.num_states; ++i) {
+		unsigned int m;
+
+		for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+			mode_lib->vba.DSCDelayPerState[i][k] = dml32_DSCDelayRequirement(
+					mode_lib->vba.RequiresDSC[i][k], mode_lib->vba.ODMCombineEnablePerState[i][k],
+					mode_lib->vba.DSCInputBitPerComponent[k],
+					mode_lib->vba.OutputBppPerState[i][k], mode_lib->vba.HActive[k],
+					mode_lib->vba.HTotal[k], mode_lib->vba.NumberOfDSCSlices[k],
+					mode_lib->vba.OutputFormat[k], mode_lib->vba.Output[k],
+					mode_lib->vba.PixelClock[k], mode_lib->vba.PixelClockBackEnd[k]);
+		}
+
+		m = 0;
+
+		for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+			for (m = 0; m <= mode_lib->vba.NumberOfActiveSurfaces - 1; m++) {
+				for (j = 0; j <= mode_lib->vba.NumberOfActiveSurfaces - 1; j++) {
+					if (mode_lib->vba.BlendingAndTiming[k] == m &&
+							mode_lib->vba.RequiresDSC[i][m] == true) {
+						mode_lib->vba.DSCDelayPerState[i][k] =
+							mode_lib->vba.DSCDelayPerState[i][m];
+					}
+				}
+			}
+		}
+	}
+
+	//Calculate Swath, DET Configuration, DCFCLKDeepSleep
+	//
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j) {
+			bool dummy_boolean_array[1][DC__NUM_DPP__MAX];
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				mode_lib->vba.RequiredDPPCLKThisState[k] = mode_lib->vba.RequiredDPPCLK[i][j][k];
+				mode_lib->vba.NoOfDPPThisState[k] = mode_lib->vba.NoOfDPP[i][j][k];
+				mode_lib->vba.ODMCombineEnableThisState[k] =
+						mode_lib->vba.ODMCombineEnablePerState[i][k];
+			}
+
+			dml32_CalculateSwathAndDETConfiguration(
+					mode_lib->vba.DETSizeOverride,
+					mode_lib->vba.UsesMALLForPStateChange,
+					mode_lib->vba.ConfigReturnBufferSizeInKByte,
+					mode_lib->vba.MaxTotalDETInKByte,
+					mode_lib->vba.MinCompressedBufferSizeInKByte,
+					false, /* ForceSingleDPP */
+					mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.nomDETInKByte,
+					mode_lib->vba.UseUnboundedRequesting,
+					mode_lib->vba.CompressedBufferSegmentSizeInkByteFinal,
+					mode_lib->vba.Output,
+					mode_lib->vba.ReadBandwidthLuma,
+					mode_lib->vba.ReadBandwidthChroma,
+					mode_lib->vba.MaximumSwathWidthLuma,
+					mode_lib->vba.MaximumSwathWidthChroma,
+					mode_lib->vba.SourceRotation,
+					mode_lib->vba.ViewportStationary,
+					mode_lib->vba.SourcePixelFormat,
+					mode_lib->vba.SurfaceTiling,
+					mode_lib->vba.ViewportWidth,
+					mode_lib->vba.ViewportHeight,
+					mode_lib->vba.ViewportXStartY,
+					mode_lib->vba.ViewportYStartY,
+					mode_lib->vba.ViewportXStartC,
+					mode_lib->vba.ViewportYStartC,
+					mode_lib->vba.SurfaceWidthY,
+					mode_lib->vba.SurfaceWidthC,
+					mode_lib->vba.SurfaceHeightY,
+					mode_lib->vba.SurfaceHeightC,
+					mode_lib->vba.Read256BlockHeightY,
+					mode_lib->vba.Read256BlockHeightC,
+					mode_lib->vba.Read256BlockWidthY,
+					mode_lib->vba.Read256BlockWidthC,
+					mode_lib->vba.ODMCombineEnableThisState,
+					mode_lib->vba.BlendingAndTiming,
+					mode_lib->vba.BytePerPixelY,
+					mode_lib->vba.BytePerPixelC,
+					mode_lib->vba.BytePerPixelInDETY,
+					mode_lib->vba.BytePerPixelInDETC,
+					mode_lib->vba.HActive,
+					mode_lib->vba.HRatio,
+					mode_lib->vba.HRatioChroma,
+					mode_lib->vba.NoOfDPPThisState,
+					/* Output */
+					mode_lib->vba.swath_width_luma_ub_this_state,
+					mode_lib->vba.swath_width_chroma_ub_this_state,
+					mode_lib->vba.SwathWidthYThisState,
+					mode_lib->vba.SwathWidthCThisState,
+					mode_lib->vba.SwathHeightYThisState,
+					mode_lib->vba.SwathHeightCThisState,
+					mode_lib->vba.DETBufferSizeInKByteThisState,
+					mode_lib->vba.DETBufferSizeYThisState,
+					mode_lib->vba.DETBufferSizeCThisState,
+					&mode_lib->vba.UnboundedRequestEnabledThisState,
+					&mode_lib->vba.CompressedBufferSizeInkByteThisState,
+					dummy_boolean_array[0],
+					&mode_lib->vba.ViewportSizeSupport[i][j]);
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				mode_lib->vba.swath_width_luma_ub_all_states[i][j][k] =
+						mode_lib->vba.swath_width_luma_ub_this_state[k];
+				mode_lib->vba.swath_width_chroma_ub_all_states[i][j][k] =
+						mode_lib->vba.swath_width_chroma_ub_this_state[k];
+				mode_lib->vba.SwathWidthYAllStates[i][j][k] = mode_lib->vba.SwathWidthYThisState[k];
+				mode_lib->vba.SwathWidthCAllStates[i][j][k] = mode_lib->vba.SwathWidthCThisState[k];
+				mode_lib->vba.SwathHeightYAllStates[i][j][k] = mode_lib->vba.SwathHeightYThisState[k];
+				mode_lib->vba.SwathHeightCAllStates[i][j][k] = mode_lib->vba.SwathHeightCThisState[k];
+				mode_lib->vba.UnboundedRequestEnabledAllStates[i][j] =
+						mode_lib->vba.UnboundedRequestEnabledThisState;
+				mode_lib->vba.CompressedBufferSizeInkByteAllStates[i][j] =
+						mode_lib->vba.CompressedBufferSizeInkByteThisState;
+				mode_lib->vba.DETBufferSizeInKByteAllStates[i][j][k] =
+						mode_lib->vba.DETBufferSizeInKByteThisState[k];
+				mode_lib->vba.DETBufferSizeYAllStates[i][j][k] =
+						mode_lib->vba.DETBufferSizeYThisState[k];
+				mode_lib->vba.DETBufferSizeCAllStates[i][j][k] =
+						mode_lib->vba.DETBufferSizeCThisState[k];
+			}
+		}
+	}
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		mode_lib->vba.cursor_bw[k] = mode_lib->vba.NumberOfCursors[k] * mode_lib->vba.CursorWidth[k][0]
+				* mode_lib->vba.CursorBPP[k][0] / 8.0
+				/ (mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]) * mode_lib->vba.VRatio[k];
+	}
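+	/*
+	 * cursor_bw computed above is the cursor fetch bandwidth: cursor bytes
+	 * per line (width * bpp / 8) fetched once per line time
+	 * (HTotal / PixelClock), scaled by VRatio to account for vertical scaling.
+	 */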
+
+	dml32_CalculateSurfaceSizeInMall(
+			mode_lib->vba.NumberOfActiveSurfaces,
+			mode_lib->vba.MALLAllocatedForDCNFinal,
+			mode_lib->vba.UseMALLForStaticScreen,
+			mode_lib->vba.DCCEnable,
+			mode_lib->vba.ViewportStationary,
+			mode_lib->vba.ViewportXStartY,
+			mode_lib->vba.ViewportYStartY,
+			mode_lib->vba.ViewportXStartC,
+			mode_lib->vba.ViewportYStartC,
+			mode_lib->vba.ViewportWidth,
+			mode_lib->vba.ViewportHeight,
+			mode_lib->vba.BytePerPixelY,
+			mode_lib->vba.ViewportWidthChroma,
+			mode_lib->vba.ViewportHeightChroma,
+			mode_lib->vba.BytePerPixelC,
+			mode_lib->vba.SurfaceWidthY,
+			mode_lib->vba.SurfaceWidthC,
+			mode_lib->vba.SurfaceHeightY,
+			mode_lib->vba.SurfaceHeightC,
+			mode_lib->vba.Read256BlockWidthY,
+			mode_lib->vba.Read256BlockWidthC,
+			mode_lib->vba.Read256BlockHeightY,
+			mode_lib->vba.Read256BlockHeightC,
+			mode_lib->vba.MicroTileWidthY,
+			mode_lib->vba.MicroTileWidthC,
+			mode_lib->vba.MicroTileHeightY,
+			mode_lib->vba.MicroTileHeightC,
+
+			/* Output */
+			mode_lib->vba.SurfaceSizeInMALL,
+			&mode_lib->vba.ExceededMALLSize);
+
+	for (i = 0; i < v->soc.num_states; i++) {
+		for (j = 0; j < 2; j++) {
+			for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+				mode_lib->vba.swath_width_luma_ub_this_state[k] =
+						mode_lib->vba.swath_width_luma_ub_all_states[i][j][k];
+				mode_lib->vba.swath_width_chroma_ub_this_state[k] =
+						mode_lib->vba.swath_width_chroma_ub_all_states[i][j][k];
+				mode_lib->vba.SwathWidthYThisState[k] = mode_lib->vba.SwathWidthYAllStates[i][j][k];
+				mode_lib->vba.SwathWidthCThisState[k] = mode_lib->vba.SwathWidthCAllStates[i][j][k];
+				mode_lib->vba.SwathHeightYThisState[k] = mode_lib->vba.SwathHeightYAllStates[i][j][k];
+				mode_lib->vba.SwathHeightCThisState[k] = mode_lib->vba.SwathHeightCAllStates[i][j][k];
+				mode_lib->vba.DETBufferSizeInKByteThisState[k] =
+						mode_lib->vba.DETBufferSizeInKByteAllStates[i][j][k];
+				mode_lib->vba.DETBufferSizeYThisState[k] =
+						mode_lib->vba.DETBufferSizeYAllStates[i][j][k];
+				mode_lib->vba.DETBufferSizeCThisState[k] =
+						mode_lib->vba.DETBufferSizeCAllStates[i][j][k];
+				mode_lib->vba.RequiredDPPCLKThisState[k] = mode_lib->vba.RequiredDPPCLK[i][j][k];
+				mode_lib->vba.NoOfDPPThisState[k] = mode_lib->vba.NoOfDPP[i][j][k];
+			}
+
+			mode_lib->vba.TotalNumberOfDCCActiveDPP[i][j] = 0;
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.DCCEnable[k] == true) {
+					mode_lib->vba.TotalNumberOfDCCActiveDPP[i][j] =
+							mode_lib->vba.TotalNumberOfDCCActiveDPP[i][j]
+									+ mode_lib->vba.NoOfDPP[i][j][k];
+				}
+			}
+
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].PixelClock = mode_lib->vba.PixelClock[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].DPPPerSurface = mode_lib->vba.NoOfDPP[i][j][k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].SourceRotation = mode_lib->vba.SourceRotation[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportHeight = mode_lib->vba.ViewportHeight[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportHeightChroma = mode_lib->vba.ViewportHeightChroma[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockWidth256BytesY = mode_lib->vba.Read256BlockWidthY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockHeight256BytesY = mode_lib->vba.Read256BlockHeightY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockWidth256BytesC = mode_lib->vba.Read256BlockWidthC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockHeight256BytesC = mode_lib->vba.Read256BlockHeightC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockWidthY = mode_lib->vba.MicroTileWidthY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockHeightY = mode_lib->vba.MicroTileHeightY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockWidthC = mode_lib->vba.MicroTileWidthC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BlockHeightC = mode_lib->vba.MicroTileHeightC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].InterlaceEnable = mode_lib->vba.Interlace[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].HTotal = mode_lib->vba.HTotal[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].DCCEnable = mode_lib->vba.DCCEnable[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].SourcePixelFormat = mode_lib->vba.SourcePixelFormat[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].SurfaceTiling = mode_lib->vba.SurfaceTiling[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BytePerPixelY = mode_lib->vba.BytePerPixelY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].BytePerPixelC = mode_lib->vba.BytePerPixelC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ProgressiveToInterlaceUnitInOPP =
+				mode_lib->vba.ProgressiveToInterlaceUnitInOPP;
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].VRatio = mode_lib->vba.VRatio[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].VRatioChroma = mode_lib->vba.VRatioChroma[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].VTaps = mode_lib->vba.vtaps[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].VTapsChroma = mode_lib->vba.VTAPsChroma[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].PitchY = mode_lib->vba.PitchY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].DCCMetaPitchY = mode_lib->vba.DCCMetaPitchY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].PitchC = mode_lib->vba.PitchC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].DCCMetaPitchC = mode_lib->vba.DCCMetaPitchC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportStationary = mode_lib->vba.ViewportStationary[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportXStart = mode_lib->vba.ViewportXStartY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportYStart = mode_lib->vba.ViewportYStartY[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportXStartC = mode_lib->vba.ViewportXStartC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].ViewportYStartC = mode_lib->vba.ViewportYStartC[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].FORCE_ONE_ROW_FOR_FRAME = mode_lib->vba.ForceOneRowForFrame[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].SwathHeightY = mode_lib->vba.SwathHeightYThisState[k];
+				v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters[k].SwathHeightC = mode_lib->vba.SwathHeightCThisState[k];
+			}
+
+			{
+				bool dummy_boolean_array[2][DC__NUM_DPP__MAX];
+				unsigned int dummy_integer_array[22][DC__NUM_DPP__MAX];
+
+				dml32_CalculateVMRowAndSwath(
+						mode_lib->vba.NumberOfActiveSurfaces,
+						v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.SurfParameters,
+						mode_lib->vba.SurfaceSizeInMALL,
+						mode_lib->vba.PTEBufferSizeInRequestsLuma,
+						mode_lib->vba.PTEBufferSizeInRequestsChroma,
+						mode_lib->vba.DCCMetaBufferSizeBytes,
+						mode_lib->vba.UseMALLForStaticScreen,
+						mode_lib->vba.UsesMALLForPStateChange,
+						mode_lib->vba.MALLAllocatedForDCNFinal,
+						mode_lib->vba.SwathWidthYThisState,
+						mode_lib->vba.SwathWidthCThisState,
+						mode_lib->vba.GPUVMEnable,
+						mode_lib->vba.HostVMEnable,
+						mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+						mode_lib->vba.GPUVMMaxPageTableLevels,
+						mode_lib->vba.GPUVMMinPageSizeKBytes,
+						mode_lib->vba.HostVMMinPageSize,
+
+						/* Output */
+						mode_lib->vba.PTEBufferSizeNotExceededPerState,
+						mode_lib->vba.DCCMetaBufferSizeNotExceededPerState,
+						dummy_integer_array[0],
+						dummy_integer_array[1],
+						mode_lib->vba.dpte_row_height,
+						mode_lib->vba.dpte_row_height_chroma,
+						dummy_integer_array[2],
+						dummy_integer_array[3],
+						dummy_integer_array[4],
+						dummy_integer_array[5],
+						dummy_integer_array[6],
+						dummy_integer_array[7],
+						dummy_integer_array[8],
+						dummy_integer_array[9],
+						mode_lib->vba.meta_row_height,
+						mode_lib->vba.meta_row_height_chroma,
+						dummy_integer_array[10],
+						mode_lib->vba.dpte_group_bytes,
+						dummy_integer_array[11],
+						dummy_integer_array[12],
+						dummy_integer_array[13],
+						dummy_integer_array[14],
+						dummy_integer_array[15],
+						dummy_integer_array[16],
+						dummy_integer_array[17],
+						dummy_integer_array[18],
+						dummy_integer_array[19],
+						dummy_integer_array[20],
+						mode_lib->vba.PrefetchLinesYThisState,
+						mode_lib->vba.PrefetchLinesCThisState,
+						mode_lib->vba.PrefillY,
+						mode_lib->vba.PrefillC,
+						mode_lib->vba.MaxNumSwY,
+						mode_lib->vba.MaxNumSwC,
+						mode_lib->vba.meta_row_bandwidth_this_state,
+						mode_lib->vba.dpte_row_bandwidth_this_state,
+						mode_lib->vba.DPTEBytesPerRowThisState,
+						mode_lib->vba.PDEAndMetaPTEBytesPerFrameThisState,
+						mode_lib->vba.MetaRowBytesThisState,
+						mode_lib->vba.use_one_row_for_frame_this_state,
+						mode_lib->vba.use_one_row_for_frame_flip_this_state,
+						dummy_boolean_array[0], // Boolean UsesMALLForStaticScreen[]
+						dummy_boolean_array[1], // Boolean PTE_BUFFER_MODE[]
+						dummy_integer_array[21]); // Long BIGK_FRAGMENT_SIZE[]
+			}
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				mode_lib->vba.PrefetchLinesY[i][j][k] = mode_lib->vba.PrefetchLinesYThisState[k];
+				mode_lib->vba.PrefetchLinesC[i][j][k] = mode_lib->vba.PrefetchLinesCThisState[k];
+				mode_lib->vba.meta_row_bandwidth[i][j][k] =
+						mode_lib->vba.meta_row_bandwidth_this_state[k];
+				mode_lib->vba.dpte_row_bandwidth[i][j][k] =
+						mode_lib->vba.dpte_row_bandwidth_this_state[k];
+				mode_lib->vba.DPTEBytesPerRow[i][j][k] = mode_lib->vba.DPTEBytesPerRowThisState[k];
+				mode_lib->vba.PDEAndMetaPTEBytesPerFrame[i][j][k] =
+						mode_lib->vba.PDEAndMetaPTEBytesPerFrameThisState[k];
+				mode_lib->vba.MetaRowBytes[i][j][k] = mode_lib->vba.MetaRowBytesThisState[k];
+				mode_lib->vba.use_one_row_for_frame[i][j][k] =
+						mode_lib->vba.use_one_row_for_frame_this_state[k];
+				mode_lib->vba.use_one_row_for_frame_flip[i][j][k] =
+						mode_lib->vba.use_one_row_for_frame_flip_this_state[k];
+			}
+
+			mode_lib->vba.PTEBufferSizeNotExceeded[i][j] = true;
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.PTEBufferSizeNotExceededPerState[k] == false)
+					mode_lib->vba.PTEBufferSizeNotExceeded[i][j] = false;
+			}
+
+			mode_lib->vba.DCCMetaBufferSizeNotExceeded[i][j] = true;
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				if (mode_lib->vba.DCCMetaBufferSizeNotExceededPerState[k] == false)
+					mode_lib->vba.DCCMetaBufferSizeNotExceeded[i][j] = false;
+			}
+
+			mode_lib->vba.UrgLatency[i] = dml32_CalculateUrgentLatency(
+					mode_lib->vba.UrgentLatencyPixelDataOnly,
+					mode_lib->vba.UrgentLatencyPixelMixedWithVMData,
+					mode_lib->vba.UrgentLatencyVMDataOnly, mode_lib->vba.DoUrgentLatencyAdjustment,
+					mode_lib->vba.UrgentLatencyAdjustmentFabricClockComponent,
+					mode_lib->vba.UrgentLatencyAdjustmentFabricClockReference,
+					mode_lib->vba.FabricClockPerState[i]);
+
+			//bool   NotUrgentLatencyHiding[DC__NUM_DPP__MAX];
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				dml32_CalculateUrgentBurstFactor(
+						mode_lib->vba.UsesMALLForPStateChange[k],
+						mode_lib->vba.swath_width_luma_ub_this_state[k],
+						mode_lib->vba.swath_width_chroma_ub_this_state[k],
+						mode_lib->vba.SwathHeightYThisState[k],
+						mode_lib->vba.SwathHeightCThisState[k],
+						(double) mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k],
+						mode_lib->vba.UrgLatency[i],
+						mode_lib->vba.CursorBufferSize,
+						mode_lib->vba.CursorWidth[k][0],
+						mode_lib->vba.CursorBPP[k][0],
+						mode_lib->vba.VRatio[k],
+						mode_lib->vba.VRatioChroma[k],
+						mode_lib->vba.BytePerPixelInDETY[k],
+						mode_lib->vba.BytePerPixelInDETC[k],
+						mode_lib->vba.DETBufferSizeYThisState[k],
+						mode_lib->vba.DETBufferSizeCThisState[k],
+						/* Output */
+						&mode_lib->vba.UrgentBurstFactorCursor[k],
+						&mode_lib->vba.UrgentBurstFactorLuma[k],
+						&mode_lib->vba.UrgentBurstFactorChroma[k],
+						&mode_lib->vba.NoUrgentLatencyHiding[k]);
+			}
+
+			dml32_CalculateDCFCLKDeepSleep(
+					mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.BytePerPixelY,
+					mode_lib->vba.BytePerPixelC,
+					mode_lib->vba.VRatio,
+					mode_lib->vba.VRatioChroma,
+					mode_lib->vba.SwathWidthYThisState,
+					mode_lib->vba.SwathWidthCThisState,
+					mode_lib->vba.NoOfDPPThisState,
+					mode_lib->vba.HRatio,
+					mode_lib->vba.HRatioChroma,
+					mode_lib->vba.PixelClock,
+					mode_lib->vba.PSCL_FACTOR,
+					mode_lib->vba.PSCL_FACTOR_CHROMA,
+					mode_lib->vba.RequiredDPPCLKThisState,
+					mode_lib->vba.ReadBandwidthLuma,
+					mode_lib->vba.ReadBandwidthChroma,
+					mode_lib->vba.ReturnBusWidth,
+
+					/* Output */
+					&mode_lib->vba.ProjectedDCFCLKDeepSleep[i][j]);
+		}
+	}
+
+	m = 0;
+
+	//Calculate Return BW
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j) {
+			for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+				if (mode_lib->vba.BlendingAndTiming[k] == k) {
+					if (mode_lib->vba.WritebackEnable[k] == true) {
+						mode_lib->vba.WritebackDelayTime[k] =
+							mode_lib->vba.WritebackLatency
+						+ dml32_CalculateWriteBackDelay(
+							mode_lib->vba.WritebackPixelFormat[k],
+							mode_lib->vba.WritebackHRatio[k],
+							mode_lib->vba.WritebackVRatio[k],
+							mode_lib->vba.WritebackVTaps[k],
+							mode_lib->vba.WritebackDestinationWidth[k],
+							mode_lib->vba.WritebackDestinationHeight[k],
+							mode_lib->vba.WritebackSourceHeight[k],
+							mode_lib->vba.HTotal[k])
+							/ mode_lib->vba.RequiredDISPCLK[i][j];
+					} else {
+						mode_lib->vba.WritebackDelayTime[k] = 0.0;
+					}
+					for (m = 0; m <= mode_lib->vba.NumberOfActiveSurfaces - 1; m++) {
+						if (mode_lib->vba.BlendingAndTiming[m]
+								== k && mode_lib->vba.WritebackEnable[m] == true) {
+							mode_lib->vba.WritebackDelayTime[k] =
+								dml_max(mode_lib->vba.WritebackDelayTime[k],
+									mode_lib->vba.WritebackLatency
+								+ dml32_CalculateWriteBackDelay(
+									mode_lib->vba.WritebackPixelFormat[m],
+									mode_lib->vba.WritebackHRatio[m],
+									mode_lib->vba.WritebackVRatio[m],
+									mode_lib->vba.WritebackVTaps[m],
+									mode_lib->vba.WritebackDestinationWidth[m],
+									mode_lib->vba.WritebackDestinationHeight[m],
+									mode_lib->vba.WritebackSourceHeight[m],
+									mode_lib->vba.HTotal[m]) /
+									mode_lib->vba.RequiredDISPCLK[i][j]);
+						}
+					}
+				}
+			}
+			for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+				for (m = 0; m <= mode_lib->vba.NumberOfActiveSurfaces - 1; m++) {
+					if (mode_lib->vba.BlendingAndTiming[k] == m) {
+						mode_lib->vba.WritebackDelayTime[k] =
+								mode_lib->vba.WritebackDelayTime[m];
+					}
+				}
+			}
+			mode_lib->vba.MaxMaxVStartup[i][j] = 0;
+			for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+				mode_lib->vba.MaximumVStartup[i][j][k] = ((mode_lib->vba.Interlace[k] &&
+								!mode_lib->vba.ProgressiveToInterlaceUnitInOPP) ?
+								dml_floor((mode_lib->vba.VTotal[k] -
+									mode_lib->vba.VActive[k]) / 2.0, 1.0) :
+								mode_lib->vba.VTotal[k] - mode_lib->vba.VActive[k])
+								- dml_max(1.0, dml_ceil(1.0 *
+									mode_lib->vba.WritebackDelayTime[k] /
+									(mode_lib->vba.HTotal[k] /
+									mode_lib->vba.PixelClock[k]), 1.0));
+
+				// Clamp to max OTG vstartup register limit
+				if (mode_lib->vba.MaximumVStartup[i][j][k] > 1023)
+					mode_lib->vba.MaximumVStartup[i][j][k] = 1023;
+
+				mode_lib->vba.MaxMaxVStartup[i][j] = dml_max(mode_lib->vba.MaxMaxVStartup[i][j],
+						mode_lib->vba.MaximumVStartup[i][j][k]);
+			}
+		}
+	}
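+	/*
+	 * MaximumVStartup above is the usable vblank (halved for interlaced
+	 * timings when the OPP does not handle progressive-to-interlace) minus
+	 * the writeback delay expressed in line times, clamped to the 1023-line
+	 * OTG vstartup register limit.
+	 */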
+
+	ReorderingBytes = mode_lib->vba.NumberOfChannels
+			* dml_max3(mode_lib->vba.UrgentOutOfOrderReturnPerChannelPixelDataOnly,
+					mode_lib->vba.UrgentOutOfOrderReturnPerChannelPixelMixedWithVMData,
+					mode_lib->vba.UrgentOutOfOrderReturnPerChannelVMDataOnly);
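+	/*
+	 * ReorderingBytes models the worst-case data that can be outstanding out
+	 * of order on the return path: the largest per-channel out-of-order
+	 * return across the three traffic mixes, times the number of channels.
+	 */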
+
+	dml32_CalculateMinAndMaxPrefetchMode(mode_lib->vba.AllowForPStateChangeOrStutterInVBlankFinal,
+			&mode_lib->vba.MinPrefetchMode,
+			&mode_lib->vba.MaxPrefetchMode);
+
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j)
+			mode_lib->vba.DCFCLKState[i][j] = mode_lib->vba.DCFCLKPerState[i];
+	}
+
+	/* Immediate Flip and MALL parameters */
+	mode_lib->vba.ImmediateFlipRequiredFinal = false;
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		mode_lib->vba.ImmediateFlipRequiredFinal = mode_lib->vba.ImmediateFlipRequiredFinal
+				|| (mode_lib->vba.ImmediateFlipRequirement[k] == dm_immediate_flip_required);
+	}
+
+	mode_lib->vba.ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified = false;
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		mode_lib->vba.ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified =
+				mode_lib->vba.ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified
+						|| ((mode_lib->vba.ImmediateFlipRequirement[k]
+								!= dm_immediate_flip_required)
+								&& (mode_lib->vba.ImmediateFlipRequirement[k]
+										!= dm_immediate_flip_not_required));
+	}
+	mode_lib->vba.ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified =
+			mode_lib->vba.ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified
+					&& mode_lib->vba.ImmediateFlipRequiredFinal;
+
+	mode_lib->vba.ImmediateFlipOrHostVMAndPStateWithMALLFullFrameOrPhantomPipe = false;
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		mode_lib->vba.ImmediateFlipOrHostVMAndPStateWithMALLFullFrameOrPhantomPipe =
+			mode_lib->vba.ImmediateFlipOrHostVMAndPStateWithMALLFullFrameOrPhantomPipe ||
+			((mode_lib->vba.HostVMEnable == true || mode_lib->vba.ImmediateFlipRequirement[k] !=
+					dm_immediate_flip_not_required) &&
+			(mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_full_frame ||
+			mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe));
+	}
+
+	mode_lib->vba.InvalidCombinationOfMALLUseForPStateAndStaticScreen = false;
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		mode_lib->vba.InvalidCombinationOfMALLUseForPStateAndStaticScreen =
+			mode_lib->vba.InvalidCombinationOfMALLUseForPStateAndStaticScreen
+			|| ((mode_lib->vba.UseMALLForStaticScreen[k] == dm_use_mall_static_screen_enable
+			|| mode_lib->vba.UseMALLForStaticScreen[k] == dm_use_mall_static_screen_optimize)
+			&& (mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe))
+			|| ((mode_lib->vba.UseMALLForStaticScreen[k] == dm_use_mall_static_screen_disable
+			|| mode_lib->vba.UseMALLForStaticScreen[k] == dm_use_mall_static_screen_optimize)
+			&& (mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_full_frame));
+	}
+
+	FullFrameMALLPStateMethod = false;
+	SubViewportMALLPStateMethod = false;
+	PhantomPipeMALLPStateMethod = false;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		if (mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_full_frame)
+			FullFrameMALLPStateMethod = true;
+		if (mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_sub_viewport)
+			SubViewportMALLPStateMethod = true;
+		if (mode_lib->vba.UsesMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe)
+			PhantomPipeMALLPStateMethod = true;
+	}
+	mode_lib->vba.InvalidCombinationOfMALLUseForPState = (SubViewportMALLPStateMethod
+			!= PhantomPipeMALLPStateMethod) || (SubViewportMALLPStateMethod && FullFrameMALLPStateMethod);
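+	/*
+	 * Sub-viewport MALL p-state switching relies on phantom pipes, so the
+	 * check above rejects configurations where the two are not used together,
+	 * or where sub-viewport is mixed with full-frame MALL.
+	 */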
+
+	if (mode_lib->vba.UseMinimumRequiredDCFCLK == true) {
+		dml32_UseMinimumDCFCLK(
+				mode_lib->vba.UsesMALLForPStateChange,
+				mode_lib->vba.DRRDisplay,
+				mode_lib->vba.SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+				mode_lib->vba.MaxInterDCNTileRepeaters,
+				mode_lib->vba.MaxPrefetchMode,
+				mode_lib->vba.DRAMClockChangeLatency,
+				mode_lib->vba.FCLKChangeLatency,
+				mode_lib->vba.SREnterPlusExitTime,
+				mode_lib->vba.ReturnBusWidth,
+				mode_lib->vba.RoundTripPingLatencyCycles,
+				ReorderingBytes,
+				mode_lib->vba.PixelChunkSizeInKByte,
+				mode_lib->vba.MetaChunkSize,
+				mode_lib->vba.GPUVMEnable,
+				mode_lib->vba.GPUVMMaxPageTableLevels,
+				mode_lib->vba.HostVMEnable,
+				mode_lib->vba.NumberOfActiveSurfaces,
+				mode_lib->vba.HostVMMinPageSize,
+				mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+				mode_lib->vba.DynamicMetadataVMEnabled,
+				mode_lib->vba.ImmediateFlipRequiredFinal,
+				mode_lib->vba.ProgressiveToInterlaceUnitInOPP,
+				mode_lib->vba.MaxAveragePercentOfIdealSDPPortBWDisplayCanUseInNormalSystemOperation,
+				mode_lib->vba.PercentOfIdealFabricAndSDPPortBWReceivedAfterUrgLatency,
+				mode_lib->vba.VTotal,
+				mode_lib->vba.VActive,
+				mode_lib->vba.DynamicMetadataTransmittedBytes,
+				mode_lib->vba.DynamicMetadataLinesBeforeActiveRequired,
+				mode_lib->vba.Interlace,
+				mode_lib->vba.RequiredDPPCLK,
+				mode_lib->vba.RequiredDISPCLK,
+				mode_lib->vba.UrgLatency,
+				mode_lib->vba.NoOfDPP,
+				mode_lib->vba.ProjectedDCFCLKDeepSleep,
+				mode_lib->vba.MaximumVStartup,
+				mode_lib->vba.TotalNumberOfActiveDPP,
+				mode_lib->vba.TotalNumberOfDCCActiveDPP,
+				mode_lib->vba.dpte_group_bytes,
+				mode_lib->vba.PrefetchLinesY,
+				mode_lib->vba.PrefetchLinesC,
+				mode_lib->vba.swath_width_luma_ub_all_states,
+				mode_lib->vba.swath_width_chroma_ub_all_states,
+				mode_lib->vba.BytePerPixelY,
+				mode_lib->vba.BytePerPixelC,
+				mode_lib->vba.HTotal,
+				mode_lib->vba.PixelClock,
+				mode_lib->vba.PDEAndMetaPTEBytesPerFrame,
+				mode_lib->vba.DPTEBytesPerRow,
+				mode_lib->vba.MetaRowBytes,
+				mode_lib->vba.DynamicMetadataEnable,
+				mode_lib->vba.ReadBandwidthLuma,
+				mode_lib->vba.ReadBandwidthChroma,
+				mode_lib->vba.DCFCLKPerState,
+
+				/* Output */
+				mode_lib->vba.DCFCLKState);
+	} // UseMinimumRequiredDCFCLK == true
+
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j) {
+			mode_lib->vba.ReturnBWPerState[i][j] = dml32_get_return_bw_mbps(&mode_lib->vba.soc, i,
+					mode_lib->vba.HostVMEnable, mode_lib->vba.DCFCLKState[i][j],
+					mode_lib->vba.FabricClockPerState[i], mode_lib->vba.DRAMSpeedPerState[i]);
+		}
+	}
+
+	//Re-ordering Buffer Support Check
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j) {
+			if ((mode_lib->vba.ROBBufferSizeInKByte - mode_lib->vba.PixelChunkSizeInKByte) * 1024
+					/ mode_lib->vba.ReturnBWPerState[i][j]
+					> (mode_lib->vba.RoundTripPingLatencyCycles + 32)
+							/ mode_lib->vba.DCFCLKState[i][j]
+							+ ReorderingBytes / mode_lib->vba.ReturnBWPerState[i][j]) {
+				mode_lib->vba.ROBSupport[i][j] = true;
+			} else {
+				mode_lib->vba.ROBSupport[i][j] = false;
+			}
+		}
+	}
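+	/*
+	 * ROB check above: the time the reordering buffer (minus one pixel chunk)
+	 * can cover at the return bandwidth must exceed the round-trip latency
+	 * (plus a 32-cycle margin) in DCFCLK cycles plus the time to drain
+	 * ReorderingBytes, otherwise the ROB cannot hide the return path latency.
+	 */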
+
+	//Vertical Active BW support check
+	MaxTotalVActiveRDBandwidth = 0;
+
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+		MaxTotalVActiveRDBandwidth = MaxTotalVActiveRDBandwidth + mode_lib->vba.ReadBandwidthLuma[k]
+				+ mode_lib->vba.ReadBandwidthChroma[k];
+	}
+
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j) {
+			mode_lib->vba.MaxTotalVerticalActiveAvailableBandwidth[i][j] =
+				dml_min3(mode_lib->vba.ReturnBusWidth * mode_lib->vba.DCFCLKState[i][j]
+					* mode_lib->vba.MaxAveragePercentOfIdealSDPPortBWDisplayCanUseInNormalSystemOperation / 100,
+					mode_lib->vba.FabricClockPerState[i]
+					* mode_lib->vba.FabricDatapathToDCNDataReturn
+					* mode_lib->vba.MaxAveragePercentOfIdealFabricBWDisplayCanUseInNormalSystemOperation / 100,
+					mode_lib->vba.DRAMSpeedPerState[i]
+					* mode_lib->vba.NumberOfChannels
+					* mode_lib->vba.DRAMChannelWidth
+					* (i < 2 ? mode_lib->vba.MaxAveragePercentOfIdealDRAMBWDisplayCanUseInNormalSystemOperationSTROBE : mode_lib->vba.MaxAveragePercentOfIdealDRAMBWDisplayCanUseInNormalSystemOperation) / 100);
+
+			if (MaxTotalVActiveRDBandwidth
+					<= mode_lib->vba.MaxTotalVerticalActiveAvailableBandwidth[i][j]) {
+				mode_lib->vba.TotalVerticalActiveBandwidthSupport[i][j] = true;
+			} else {
+				mode_lib->vba.TotalVerticalActiveBandwidthSupport[i][j] = false;
+			}
+		}
+	}
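+	/*
+	 * Available vactive bandwidth is the minimum of the derated SDP
+	 * (ReturnBusWidth * DCFCLK), fabric and DRAM limits; the two lowest
+	 * states use the STROBE DRAM derate instead of the normal one.
+	 */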
+
+	/* Prefetch Check */
+
+	for (i = 0; i < (int) v->soc.num_states; ++i) {
+		for (j = 0; j <= 1; ++j) {
+			double VMDataOnlyReturnBWPerState;
+			double HostVMInefficiencyFactor;
+			unsigned int NextPrefetchModeState;
+
+			mode_lib->vba.TimeCalc = 24 / mode_lib->vba.ProjectedDCFCLKDeepSleep[i][j];
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				mode_lib->vba.NoOfDPPThisState[k] = mode_lib->vba.NoOfDPP[i][j][k];
+				mode_lib->vba.swath_width_luma_ub_this_state[k] =
+						mode_lib->vba.swath_width_luma_ub_all_states[i][j][k];
+				mode_lib->vba.swath_width_chroma_ub_this_state[k] =
+						mode_lib->vba.swath_width_chroma_ub_all_states[i][j][k];
+				mode_lib->vba.SwathWidthYThisState[k] = mode_lib->vba.SwathWidthYAllStates[i][j][k];
+				mode_lib->vba.SwathWidthCThisState[k] = mode_lib->vba.SwathWidthCAllStates[i][j][k];
+				mode_lib->vba.SwathHeightYThisState[k] = mode_lib->vba.SwathHeightYAllStates[i][j][k];
+				mode_lib->vba.SwathHeightCThisState[k] = mode_lib->vba.SwathHeightCAllStates[i][j][k];
+				mode_lib->vba.UnboundedRequestEnabledThisState =
+						mode_lib->vba.UnboundedRequestEnabledAllStates[i][j];
+				mode_lib->vba.CompressedBufferSizeInkByteThisState =
+						mode_lib->vba.CompressedBufferSizeInkByteAllStates[i][j];
+				mode_lib->vba.DETBufferSizeInKByteThisState[k] =
+						mode_lib->vba.DETBufferSizeInKByteAllStates[i][j][k];
+				mode_lib->vba.DETBufferSizeYThisState[k] =
+						mode_lib->vba.DETBufferSizeYAllStates[i][j][k];
+				mode_lib->vba.DETBufferSizeCThisState[k] =
+						mode_lib->vba.DETBufferSizeCAllStates[i][j][k];
+			}
+
+			mode_lib->vba.VActiveBandwithSupport[i][j] = dml32_CalculateVActiveBandwithSupport(
+					mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.ReturnBWPerState[i][j],
+					mode_lib->vba.NoUrgentLatencyHiding,
+					mode_lib->vba.ReadBandwidthLuma,
+					mode_lib->vba.ReadBandwidthChroma,
+					mode_lib->vba.cursor_bw,
+					mode_lib->vba.meta_row_bandwidth_this_state,
+					mode_lib->vba.dpte_row_bandwidth_this_state,
+					mode_lib->vba.NoOfDPPThisState,
+					mode_lib->vba.UrgentBurstFactorLuma,
+					mode_lib->vba.UrgentBurstFactorChroma,
+					mode_lib->vba.UrgentBurstFactorCursor);
+
+			VMDataOnlyReturnBWPerState = dml32_get_return_bw_mbps_vm_only(&mode_lib->vba.soc, i,
+					mode_lib->vba.DCFCLKState[i][j], mode_lib->vba.FabricClockPerState[i],
+					mode_lib->vba.DRAMSpeedPerState[i]);
+			HostVMInefficiencyFactor = 1;
+
+			if (mode_lib->vba.GPUVMEnable && mode_lib->vba.HostVMEnable)
+				HostVMInefficiencyFactor = mode_lib->vba.ReturnBWPerState[i][j]
+						/ VMDataOnlyReturnBWPerState;
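+			/*
+			 * With GPUVM and HostVM both enabled, PTE traffic is assumed to be
+			 * returned at the VM-data-only rate, so the total/VM-only return
+			 * bandwidth ratio is used to inflate the effective VM fetch cost.
+			 */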
+
+			mode_lib->vba.ExtraLatency = dml32_CalculateExtraLatency(
+					mode_lib->vba.RoundTripPingLatencyCycles, ReorderingBytes,
+					mode_lib->vba.DCFCLKState[i][j], mode_lib->vba.TotalNumberOfActiveDPP[i][j],
+					mode_lib->vba.PixelChunkSizeInKByte,
+					mode_lib->vba.TotalNumberOfDCCActiveDPP[i][j], mode_lib->vba.MetaChunkSize,
+					mode_lib->vba.ReturnBWPerState[i][j], mode_lib->vba.GPUVMEnable,
+					mode_lib->vba.HostVMEnable, mode_lib->vba.NumberOfActiveSurfaces,
+					mode_lib->vba.NoOfDPPThisState, mode_lib->vba.dpte_group_bytes,
+					HostVMInefficiencyFactor, mode_lib->vba.HostVMMinPageSize,
+					mode_lib->vba.HostVMMaxNonCachedPageTableLevels);
+
+			NextPrefetchModeState = mode_lib->vba.MinPrefetchMode;
+
+			mode_lib->vba.NextMaxVStartup = mode_lib->vba.MaxMaxVStartup[i][j];
+
+			do {
+				mode_lib->vba.PrefetchModePerState[i][j] = NextPrefetchModeState;
+				mode_lib->vba.MaxVStartup = mode_lib->vba.NextMaxVStartup;
+
+				for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+					DmlPipe myPipe;
+					unsigned int dummy_integer;
+
+					mode_lib->vba.TWait = dml32_CalculateTWait(
+							mode_lib->vba.PrefetchModePerState[i][j],
+							mode_lib->vba.UsesMALLForPStateChange[k],
+							mode_lib->vba.SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+							mode_lib->vba.DRRDisplay[k],
+							mode_lib->vba.DRAMClockChangeLatency,
+							mode_lib->vba.FCLKChangeLatency, mode_lib->vba.UrgLatency[i],
+							mode_lib->vba.SREnterPlusExitTime);
+
+					myPipe.Dppclk = mode_lib->vba.RequiredDPPCLK[i][j][k];
+					myPipe.Dispclk = mode_lib->vba.RequiredDISPCLK[i][j];
+					myPipe.PixelClock = mode_lib->vba.PixelClock[k];
+					myPipe.DCFClkDeepSleep = mode_lib->vba.ProjectedDCFCLKDeepSleep[i][j];
+					myPipe.DPPPerSurface = mode_lib->vba.NoOfDPP[i][j][k];
+					myPipe.ScalerEnabled = mode_lib->vba.ScalerEnabled[k];
+					myPipe.SourceRotation = mode_lib->vba.SourceRotation[k];
+					myPipe.BlockWidth256BytesY = mode_lib->vba.Read256BlockWidthY[k];
+					myPipe.BlockHeight256BytesY = mode_lib->vba.Read256BlockHeightY[k];
+					myPipe.BlockWidth256BytesC = mode_lib->vba.Read256BlockWidthC[k];
+					myPipe.BlockHeight256BytesC = mode_lib->vba.Read256BlockHeightC[k];
+					myPipe.InterlaceEnable = mode_lib->vba.Interlace[k];
+					myPipe.NumberOfCursors = mode_lib->vba.NumberOfCursors[k];
+					myPipe.VBlank = mode_lib->vba.VTotal[k] - mode_lib->vba.VActive[k];
+					myPipe.HTotal = mode_lib->vba.HTotal[k];
+					myPipe.HActive = mode_lib->vba.HActive[k];
+					myPipe.DCCEnable = mode_lib->vba.DCCEnable[k];
+					myPipe.ODMMode = mode_lib->vba.ODMCombineEnablePerState[i][k];
+					myPipe.SourcePixelFormat = mode_lib->vba.SourcePixelFormat[k];
+					myPipe.BytePerPixelY = mode_lib->vba.BytePerPixelY[k];
+					myPipe.BytePerPixelC = mode_lib->vba.BytePerPixelC[k];
+					myPipe.ProgressiveToInterlaceUnitInOPP =
+							mode_lib->vba.ProgressiveToInterlaceUnitInOPP;
+
+					mode_lib->vba.NoTimeForPrefetch[i][j][k] =
+						dml32_CalculatePrefetchSchedule(
+							HostVMInefficiencyFactor,
+							&myPipe,
+							mode_lib->vba.DSCDelayPerState[i][k],
+							mode_lib->vba.DPPCLKDelaySubtotal +
+								mode_lib->vba.DPPCLKDelayCNVCFormater,
+							mode_lib->vba.DPPCLKDelaySCL,
+							mode_lib->vba.DPPCLKDelaySCLLBOnly,
+							mode_lib->vba.DPPCLKDelayCNVCCursor,
+							mode_lib->vba.DISPCLKDelaySubtotal,
+							mode_lib->vba.SwathWidthYThisState[k] /
+								mode_lib->vba.HRatio[k],
+							mode_lib->vba.OutputFormat[k],
+							mode_lib->vba.MaxInterDCNTileRepeaters,
+							dml_min(mode_lib->vba.MaxVStartup,
+									mode_lib->vba.MaximumVStartup[i][j][k]),
+							mode_lib->vba.MaximumVStartup[i][j][k],
+							mode_lib->vba.GPUVMMaxPageTableLevels,
+							mode_lib->vba.GPUVMEnable, mode_lib->vba.HostVMEnable,
+							mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+							mode_lib->vba.HostVMMinPageSize,
+							mode_lib->vba.DynamicMetadataEnable[k],
+							mode_lib->vba.DynamicMetadataVMEnabled,
+							mode_lib->vba.DynamicMetadataLinesBeforeActiveRequired[k],
+							mode_lib->vba.DynamicMetadataTransmittedBytes[k],
+							mode_lib->vba.UrgLatency[i],
+							mode_lib->vba.ExtraLatency,
+							mode_lib->vba.TimeCalc,
+							mode_lib->vba.PDEAndMetaPTEBytesPerFrame[i][j][k],
+							mode_lib->vba.MetaRowBytes[i][j][k],
+							mode_lib->vba.DPTEBytesPerRow[i][j][k],
+							mode_lib->vba.PrefetchLinesY[i][j][k],
+							mode_lib->vba.SwathWidthYThisState[k],
+							mode_lib->vba.PrefillY[k],
+							mode_lib->vba.MaxNumSwY[k],
+							mode_lib->vba.PrefetchLinesC[i][j][k],
+							mode_lib->vba.SwathWidthCThisState[k],
+							mode_lib->vba.PrefillC[k],
+							mode_lib->vba.MaxNumSwC[k],
+							mode_lib->vba.swath_width_luma_ub_this_state[k],
+							mode_lib->vba.swath_width_chroma_ub_this_state[k],
+							mode_lib->vba.SwathHeightYThisState[k],
+							mode_lib->vba.SwathHeightCThisState[k], mode_lib->vba.TWait,
+
+							/* Output */
+							&DSTXAfterScaler[k],
+							&DSTYAfterScaler[k],
+							&mode_lib->vba.LineTimesForPrefetch[k],
+							&mode_lib->vba.PrefetchBW[k],
+							&mode_lib->vba.LinesForMetaPTE[k],
+							&mode_lib->vba.LinesForMetaAndDPTERow[k],
+							&mode_lib->vba.VRatioPreY[i][j][k],
+							&mode_lib->vba.VRatioPreC[i][j][k],
+							&mode_lib->vba.RequiredPrefetchPixelDataBWLuma[0][0][k],
+							&mode_lib->vba.RequiredPrefetchPixelDataBWChroma[0][0][k],
+							&mode_lib->vba.NoTimeForDynamicMetadata[i][j][k],
+							&mode_lib->vba.Tno_bw[k],
+							&mode_lib->vba.prefetch_vmrow_bw[k],
+							&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single[0],         // double *Tdmdl_vm
+							&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single[1],         // double *Tdmdl
+							&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single[2],         // double *TSetup
+							&dummy_integer,         							    // unsigned int   *VUpdateOffsetPix
+							&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single[3],         // unsigned int   *VUpdateWidthPix
+							&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single[4]);        // unsigned int   *VReadyOffsetPix
+				}
+
+				for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+					dml32_CalculateUrgentBurstFactor(
+							mode_lib->vba.UsesMALLForPStateChange[k],
+							mode_lib->vba.swath_width_luma_ub_this_state[k],
+							mode_lib->vba.swath_width_chroma_ub_this_state[k],
+							mode_lib->vba.SwathHeightYThisState[k],
+							mode_lib->vba.SwathHeightCThisState[k],
+							mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k],
+							mode_lib->vba.UrgLatency[i], mode_lib->vba.CursorBufferSize,
+							mode_lib->vba.CursorWidth[k][0], mode_lib->vba.CursorBPP[k][0],
+							mode_lib->vba.VRatioPreY[i][j][k],
+							mode_lib->vba.VRatioPreC[i][j][k],
+							mode_lib->vba.BytePerPixelInDETY[k],
+							mode_lib->vba.BytePerPixelInDETC[k],
+							mode_lib->vba.DETBufferSizeYThisState[k],
+							mode_lib->vba.DETBufferSizeCThisState[k],
+							/* Output */
+							&mode_lib->vba.UrgentBurstFactorCursorPre[k],
+							&mode_lib->vba.UrgentBurstFactorLumaPre[k],
+							&mode_lib->vba.UrgentBurstFactorChromaPre[k],
+							&mode_lib->vba.NotUrgentLatencyHidingPre[k]);
+				}
+
+				{
+					double dummy_single[2];
+					dml32_CalculatePrefetchBandwithSupport(
+							mode_lib->vba.NumberOfActiveSurfaces,
+							mode_lib->vba.ReturnBWPerState[i][j],
+							mode_lib->vba.NotUrgentLatencyHidingPre,
+							mode_lib->vba.ReadBandwidthLuma,
+							mode_lib->vba.ReadBandwidthChroma,
+							mode_lib->vba.RequiredPrefetchPixelDataBWLuma[0][0],
+							mode_lib->vba.RequiredPrefetchPixelDataBWChroma[0][0],
+							mode_lib->vba.cursor_bw,
+							mode_lib->vba.meta_row_bandwidth_this_state,
+							mode_lib->vba.dpte_row_bandwidth_this_state,
+							mode_lib->vba.cursor_bw_pre,
+							mode_lib->vba.prefetch_vmrow_bw,
+							mode_lib->vba.NoOfDPPThisState,
+							mode_lib->vba.UrgentBurstFactorLuma,
+							mode_lib->vba.UrgentBurstFactorChroma,
+							mode_lib->vba.UrgentBurstFactorCursor,
+							mode_lib->vba.UrgentBurstFactorLumaPre,
+							mode_lib->vba.UrgentBurstFactorChromaPre,
+							mode_lib->vba.UrgentBurstFactorCursorPre,
+
+							/* output */
+							&dummy_single[0],   // Single  *PrefetchBandwidth
+							&dummy_single[1],   // Single  *FractionOfUrgentBandwidth
+							&mode_lib->vba.PrefetchSupported[i][j]);
+				}
+
+				for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+					if (mode_lib->vba.LineTimesForPrefetch[k]
+							< 2.0 || mode_lib->vba.LinesForMetaPTE[k] >= 32.0
+							|| mode_lib->vba.LinesForMetaAndDPTERow[k] >= 16.0
+							|| mode_lib->vba.NoTimeForPrefetch[i][j][k] == true) {
+						mode_lib->vba.PrefetchSupported[i][j] = false;
+					}
+				}
+
+				mode_lib->vba.DynamicMetadataSupported[i][j] = true;
+				for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+					if (mode_lib->vba.NoTimeForDynamicMetadata[i][j][k] == true)
+						mode_lib->vba.DynamicMetadataSupported[i][j] = false;
+				}
+
+				mode_lib->vba.VRatioInPrefetchSupported[i][j] = true;
+				for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+					if (mode_lib->vba.VRatioPreY[i][j][k] > __DML_MAX_VRATIO_PRE__
+							|| mode_lib->vba.VRatioPreC[i][j][k] > __DML_MAX_VRATIO_PRE__
+							|| mode_lib->vba.NoTimeForPrefetch[i][j][k] == true) {
+						mode_lib->vba.VRatioInPrefetchSupported[i][j] = false;
+					}
+				}
+				mode_lib->vba.AnyLinesForVMOrRowTooLarge = false;
+				for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+					if (mode_lib->vba.LinesForMetaAndDPTERow[k] >= 16
+							|| mode_lib->vba.LinesForMetaPTE[k] >= 32) {
+						mode_lib->vba.AnyLinesForVMOrRowTooLarge = true;
+					}
+				}
+
+				if (mode_lib->vba.PrefetchSupported[i][j] == true
+						&& mode_lib->vba.VRatioInPrefetchSupported[i][j] == true) {
+					mode_lib->vba.BandwidthAvailableForImmediateFlip =
+							dml32_CalculateBandwidthAvailableForImmediateFlip(
+							mode_lib->vba.NumberOfActiveSurfaces,
+							mode_lib->vba.ReturnBWPerState[i][j],
+							mode_lib->vba.ReadBandwidthLuma,
+							mode_lib->vba.ReadBandwidthChroma,
+							mode_lib->vba.RequiredPrefetchPixelDataBWLuma[0][0],
+							mode_lib->vba.RequiredPrefetchPixelDataBWChroma[0][0],
+							mode_lib->vba.cursor_bw,
+							mode_lib->vba.cursor_bw_pre,
+							mode_lib->vba.NoOfDPPThisState,
+							mode_lib->vba.UrgentBurstFactorLuma,
+							mode_lib->vba.UrgentBurstFactorChroma,
+							mode_lib->vba.UrgentBurstFactorCursor,
+							mode_lib->vba.UrgentBurstFactorLumaPre,
+							mode_lib->vba.UrgentBurstFactorChromaPre,
+							mode_lib->vba.UrgentBurstFactorCursorPre);
+
+					mode_lib->vba.TotImmediateFlipBytes = 0.0;
+					for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+						if (!(mode_lib->vba.ImmediateFlipRequirement[k] ==
+								dm_immediate_flip_not_required)) {
+							mode_lib->vba.TotImmediateFlipBytes =
+									mode_lib->vba.TotImmediateFlipBytes
+								+ mode_lib->vba.NoOfDPP[i][j][k]
+								* mode_lib->vba.PDEAndMetaPTEBytesPerFrame[i][j][k]
+								+ mode_lib->vba.MetaRowBytes[i][j][k];
+							if (mode_lib->vba.use_one_row_for_frame_flip[i][j][k]) {
+								mode_lib->vba.TotImmediateFlipBytes =
+									mode_lib->vba.TotImmediateFlipBytes + 2
+								* mode_lib->vba.DPTEBytesPerRow[i][j][k];
+							} else {
+								mode_lib->vba.TotImmediateFlipBytes =
+									mode_lib->vba.TotImmediateFlipBytes
+								+ mode_lib->vba.DPTEBytesPerRow[i][j][k];
+							}
+						}
+					}
+
+					for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+						dml32_CalculateFlipSchedule(HostVMInefficiencyFactor,
+							mode_lib->vba.ExtraLatency,
+							mode_lib->vba.UrgLatency[i],
+							mode_lib->vba.GPUVMMaxPageTableLevels,
+							mode_lib->vba.HostVMEnable,
+							mode_lib->vba.HostVMMaxNonCachedPageTableLevels,
+							mode_lib->vba.GPUVMEnable,
+							mode_lib->vba.HostVMMinPageSize,
+							mode_lib->vba.PDEAndMetaPTEBytesPerFrame[i][j][k],
+							mode_lib->vba.MetaRowBytes[i][j][k],
+							mode_lib->vba.DPTEBytesPerRow[i][j][k],
+							mode_lib->vba.BandwidthAvailableForImmediateFlip,
+							mode_lib->vba.TotImmediateFlipBytes,
+							mode_lib->vba.SourcePixelFormat[k],
+							(mode_lib->vba.HTotal[k] / mode_lib->vba.PixelClock[k]),
+							mode_lib->vba.VRatio[k],
+							mode_lib->vba.VRatioChroma[k],
+							mode_lib->vba.Tno_bw[k],
+								mode_lib->vba.DCCEnable[k],
+							mode_lib->vba.dpte_row_height[k],
+							mode_lib->vba.meta_row_height[k],
+							mode_lib->vba.dpte_row_height_chroma[k],
+							mode_lib->vba.meta_row_height_chroma[k],
+							mode_lib->vba.use_one_row_for_frame_flip[i][j][k], // 24
+
+							/* Output */
+							&mode_lib->vba.DestinationLinesToRequestVMInImmediateFlip[k],
+							&mode_lib->vba.DestinationLinesToRequestRowInImmediateFlip[k],
+							&mode_lib->vba.final_flip_bw[k],
+							&mode_lib->vba.ImmediateFlipSupportedForPipe[k]);
+					}
+
+					{
+						double dummy_single[2];
+						dml32_CalculateImmediateFlipBandwithSupport(mode_lib->vba.NumberOfActiveSurfaces,
+								mode_lib->vba.ReturnBWPerState[i][j],
+								mode_lib->vba.ImmediateFlipRequirement,
+								mode_lib->vba.final_flip_bw,
+								mode_lib->vba.ReadBandwidthLuma,
+								mode_lib->vba.ReadBandwidthChroma,
+								mode_lib->vba.RequiredPrefetchPixelDataBWLuma[0][0],
+								mode_lib->vba.RequiredPrefetchPixelDataBWChroma[0][0],
+								mode_lib->vba.cursor_bw,
+								mode_lib->vba.meta_row_bandwidth_this_state,
+								mode_lib->vba.dpte_row_bandwidth_this_state,
+								mode_lib->vba.cursor_bw_pre,
+								mode_lib->vba.prefetch_vmrow_bw,
+								mode_lib->vba.DPPPerPlane,
+								mode_lib->vba.UrgentBurstFactorLuma,
+								mode_lib->vba.UrgentBurstFactorChroma,
+								mode_lib->vba.UrgentBurstFactorCursor,
+								mode_lib->vba.UrgentBurstFactorLumaPre,
+								mode_lib->vba.UrgentBurstFactorChromaPre,
+								mode_lib->vba.UrgentBurstFactorCursorPre,
+
+								/* output */
+								&dummy_single[0], //  Single  *TotalBandwidth
+								&dummy_single[1], //  Single  *FractionOfUrgentBandwidth
+								&mode_lib->vba.ImmediateFlipSupportedForState[i][j]); // Boolean *ImmediateFlipBandwidthSupport
+					}
+
+					for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+						if (!(mode_lib->vba.ImmediateFlipRequirement[k]
+								== dm_immediate_flip_not_required)
+								&& (mode_lib->vba.ImmediateFlipSupportedForPipe[k]
+										== false))
+							mode_lib->vba.ImmediateFlipSupportedForState[i][j] = false;
+					}
+				} else { // if prefetch not supported, assume iflip not supported
+					mode_lib->vba.ImmediateFlipSupportedForState[i][j] = false;
+				}
+
+				if (mode_lib->vba.MaxVStartup <= __DML_VBA_MIN_VSTARTUP__
+						|| mode_lib->vba.AnyLinesForVMOrRowTooLarge == false) {
+					mode_lib->vba.NextMaxVStartup = mode_lib->vba.MaxMaxVStartup[i][j];
+					NextPrefetchModeState = NextPrefetchModeState + 1;
+				} else {
+					mode_lib->vba.NextMaxVStartup = mode_lib->vba.NextMaxVStartup - 1;
+				}
+			} while (!((mode_lib->vba.PrefetchSupported[i][j] == true
+					&& mode_lib->vba.DynamicMetadataSupported[i][j] == true
+					&& mode_lib->vba.VRatioInPrefetchSupported[i][j] == true &&
+					// consider flip support okay when there is no hostvm and the
+					// user doesn't require an iflip OR the flip bw is ok
+					// If there is hostvm, DCN needs to support iflip for invalidation
+					((mode_lib->vba.HostVMEnable == false
+							&& !mode_lib->vba.ImmediateFlipRequiredFinal)
+							|| mode_lib->vba.ImmediateFlipSupportedForState[i][j] == true))
+					|| (mode_lib->vba.NextMaxVStartup == mode_lib->vba.MaxMaxVStartup[i][j]
+							&& NextPrefetchModeState > mode_lib->vba.MaxPrefetchMode)));
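+			/*
+			 * The loop above walks MaxVStartup down from MaxMaxVStartup and,
+			 * once vstartup is exhausted (or no VM/row transfer is large
+			 * enough to benefit from more vstartup), resets it and advances to
+			 * the next prefetch mode, stopping when prefetch, dynamic metadata,
+			 * VRatio and flip requirements are all met or the options run out.
+			 */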
+
+			for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k) {
+				mode_lib->vba.use_one_row_for_frame_this_state[k] =
+						mode_lib->vba.use_one_row_for_frame[i][j][k];
+			}
+
+
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.UrgentLatency = mode_lib->vba.UrgLatency[i];
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.ExtraLatency = mode_lib->vba.ExtraLatency;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.WritebackLatency = mode_lib->vba.WritebackLatency;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.DRAMClockChangeLatency = mode_lib->vba.DRAMClockChangeLatency;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.FCLKChangeLatency = mode_lib->vba.FCLKChangeLatency;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.SRExitTime = mode_lib->vba.SRExitTime;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.SREnterPlusExitTime = mode_lib->vba.SREnterPlusExitTime;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.SRExitZ8Time = mode_lib->vba.SRExitZ8Time;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.SREnterPlusExitZ8Time = mode_lib->vba.SREnterPlusExitZ8Time;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.USRRetrainingLatency = mode_lib->vba.USRRetrainingLatency;
+			v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters.SMNLatency = mode_lib->vba.SMNLatency;
+
+			{
+				unsigned int dummy_integer[4];
+				dml32_CalculateWatermarksMALLUseAndDRAMSpeedChangeSupport(
+						mode_lib->vba.USRRetrainingRequiredFinal,
+						mode_lib->vba.UsesMALLForPStateChange,
+						mode_lib->vba.PrefetchModePerState[i][j],
+						mode_lib->vba.NumberOfActiveSurfaces,
+						mode_lib->vba.MaxLineBufferLines,
+						mode_lib->vba.LineBufferSizeFinal,
+						mode_lib->vba.WritebackInterfaceBufferSize,
+						mode_lib->vba.DCFCLKState[i][j],
+						mode_lib->vba.ReturnBWPerState[i][j],
+						mode_lib->vba.SynchronizeTimingsFinal,
+						mode_lib->vba.SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+						mode_lib->vba.DRRDisplay,
+						mode_lib->vba.dpte_group_bytes,
+						mode_lib->vba.meta_row_height,
+						mode_lib->vba.meta_row_height_chroma,
+						v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.mSOCParameters,
+						mode_lib->vba.WritebackChunkSize,
+						mode_lib->vba.SOCCLKPerState[i],
+						mode_lib->vba.ProjectedDCFCLKDeepSleep[i][j],
+						mode_lib->vba.DETBufferSizeYThisState,
+						mode_lib->vba.DETBufferSizeCThisState,
+						mode_lib->vba.SwathHeightYThisState,
+						mode_lib->vba.SwathHeightCThisState,
+						mode_lib->vba.LBBitPerPixel,
+						mode_lib->vba.SwathWidthYThisState, // 24
+						mode_lib->vba.SwathWidthCThisState,
+						mode_lib->vba.HRatio,
+						mode_lib->vba.HRatioChroma,
+						mode_lib->vba.vtaps,
+						mode_lib->vba.VTAPsChroma,
+						mode_lib->vba.VRatio,
+						mode_lib->vba.VRatioChroma,
+						mode_lib->vba.HTotal,
+						mode_lib->vba.VTotal,
+						mode_lib->vba.VActive,
+						mode_lib->vba.PixelClock,
+						mode_lib->vba.BlendingAndTiming,
+						mode_lib->vba.NoOfDPPThisState,
+						mode_lib->vba.BytePerPixelInDETY,
+						mode_lib->vba.BytePerPixelInDETC,
+						DSTXAfterScaler,
+						DSTYAfterScaler,
+						mode_lib->vba.WritebackEnable,
+						mode_lib->vba.WritebackPixelFormat,
+						mode_lib->vba.WritebackDestinationWidth,
+						mode_lib->vba.WritebackDestinationHeight,
+						mode_lib->vba.WritebackSourceHeight,
+						mode_lib->vba.UnboundedRequestEnabledThisState,
+						mode_lib->vba.CompressedBufferSizeInkByteThisState,
+
+						/* Output */
+						&mode_lib->vba.Watermark, // Store the values in vba
+						&mode_lib->vba.DRAMClockChangeSupport[i][j],
+						&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single2[0], // double *MaxActiveDRAMClockChangeLatencySupported
+						&dummy_integer[0], // Long SubViewportLinesNeededInMALL[]
+						&mode_lib->vba.FCLKChangeSupport[i][j],
+						&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.dummy_single2[1], // double *MinActiveFCLKChangeLatencySupported
+						&mode_lib->vba.USRRetrainingSupport[i][j],
+						mode_lib->vba.ActiveDRAMClockChangeLatencyMargin);
+			}
+		}
+	} // End of Prefetch Check
+
+	/*Cursor Support Check*/
+	mode_lib->vba.CursorSupport = true;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.CursorWidth[k][0] > 0.0) {
+			if (mode_lib->vba.CursorBPP[k][0] == 64 && mode_lib->vba.Cursor64BppSupport == false)
+				mode_lib->vba.CursorSupport = false;
+		}
+	}
+
+	/*Valid Pitch Check*/
+	mode_lib->vba.PitchSupport = true;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		mode_lib->vba.AlignedYPitch[k] = dml_ceil(
+				dml_max(mode_lib->vba.PitchY[k], mode_lib->vba.SurfaceWidthY[k]),
+				mode_lib->vba.MacroTileWidthY[k]);
+		if (mode_lib->vba.DCCEnable[k] == true) {
+			mode_lib->vba.AlignedDCCMetaPitchY[k] = dml_ceil(
+					dml_max(mode_lib->vba.DCCMetaPitchY[k], mode_lib->vba.SurfaceWidthY[k]),
+					64.0 * mode_lib->vba.Read256BlockWidthY[k]);
+		} else {
+			mode_lib->vba.AlignedDCCMetaPitchY[k] = mode_lib->vba.DCCMetaPitchY[k];
+		}
+		if (mode_lib->vba.SourcePixelFormat[k] != dm_444_64 && mode_lib->vba.SourcePixelFormat[k] != dm_444_32
+				&& mode_lib->vba.SourcePixelFormat[k] != dm_444_16
+				&& mode_lib->vba.SourcePixelFormat[k] != dm_mono_16
+				&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe
+				&& mode_lib->vba.SourcePixelFormat[k] != dm_mono_8) {
+			mode_lib->vba.AlignedCPitch[k] = dml_ceil(
+					dml_max(mode_lib->vba.PitchC[k], mode_lib->vba.SurfaceWidthC[k]),
+					mode_lib->vba.MacroTileWidthC[k]);
+			if (mode_lib->vba.DCCEnable[k] == true) {
+				mode_lib->vba.AlignedDCCMetaPitchC[k] = dml_ceil(
+						dml_max(mode_lib->vba.DCCMetaPitchC[k],
+								mode_lib->vba.SurfaceWidthC[k]),
+						64.0 * mode_lib->vba.Read256BlockWidthC[k]);
+			} else {
+				mode_lib->vba.AlignedDCCMetaPitchC[k] = mode_lib->vba.DCCMetaPitchC[k];
+			}
+		} else {
+			mode_lib->vba.AlignedCPitch[k] = mode_lib->vba.PitchC[k];
+			mode_lib->vba.AlignedDCCMetaPitchC[k] = mode_lib->vba.DCCMetaPitchC[k];
+		}
+		if (mode_lib->vba.AlignedYPitch[k] > mode_lib->vba.PitchY[k]
+				|| mode_lib->vba.AlignedCPitch[k] > mode_lib->vba.PitchC[k]
+				|| mode_lib->vba.AlignedDCCMetaPitchY[k] > mode_lib->vba.DCCMetaPitchY[k]
+				|| mode_lib->vba.AlignedDCCMetaPitchC[k] > mode_lib->vba.DCCMetaPitchC[k]) {
+			mode_lib->vba.PitchSupport = false;
+		}
+	}
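+
+	/*
+	 * Example of the pitch rule above (illustrative values): a 1366-wide
+	 * dm_444_32 surface on 64 KB tiling has MacroTileWidthY = 128, so the
+	 * provided PitchY must itself be a multiple of 128 and at least
+	 * dml_ceil(1366, 128) = 1408; anything smaller (or unaligned) makes
+	 * AlignedYPitch exceed PitchY and clears PitchSupport.
+	 */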
+
+	mode_lib->vba.ViewportExceedsSurface = false;
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.ViewportWidth[k] > mode_lib->vba.SurfaceWidthY[k]
+				|| mode_lib->vba.ViewportHeight[k] > mode_lib->vba.SurfaceHeightY[k]) {
+			mode_lib->vba.ViewportExceedsSurface = true;
+			if (mode_lib->vba.SourcePixelFormat[k] != dm_444_64
+					&& mode_lib->vba.SourcePixelFormat[k] != dm_444_32
+					&& mode_lib->vba.SourcePixelFormat[k] != dm_444_16
+					&& mode_lib->vba.SourcePixelFormat[k] != dm_444_8
+					&& mode_lib->vba.SourcePixelFormat[k] != dm_rgbe) {
+				if (mode_lib->vba.ViewportWidthChroma[k] > mode_lib->vba.SurfaceWidthC[k]
+						|| mode_lib->vba.ViewportHeightChroma[k]
+								> mode_lib->vba.SurfaceHeightC[k]) {
+					mode_lib->vba.ViewportExceedsSurface = true;
+				}
+			}
+		}
+	}
+
+	/*Mode Support, Voltage State and SOC Configuration*/
+	for (i = v->soc.num_states - 1; i >= 0; i--) {
+		for (j = 0; j < 2; j++) {
+			if (mode_lib->vba.ScaleRatioAndTapsSupport == true
+				&& mode_lib->vba.SourceFormatPixelAndScanSupport == true
+				&& mode_lib->vba.ViewportSizeSupport[i][j] == true
+				&& !mode_lib->vba.LinkRateDoesNotMatchDPVersion
+				&& !mode_lib->vba.LinkRateForMultistreamNotIndicated
+				&& !mode_lib->vba.BPPForMultistreamNotIndicated
+				&& !mode_lib->vba.MultistreamWithHDMIOreDP
+				&& !mode_lib->vba.ExceededMultistreamSlots[i]
+				&& !mode_lib->vba.MSOOrODMSplitWithNonDPLink
+				&& !mode_lib->vba.NotEnoughLanesForMSO
+				&& mode_lib->vba.LinkCapacitySupport[i] == true && !mode_lib->vba.P2IWith420
+				&& !mode_lib->vba.DSCOnlyIfNecessaryWithBPP
+				&& !mode_lib->vba.DSC422NativeNotSupported
+				&& !mode_lib->vba.MPCCombineMethodIncompatible
+				&& mode_lib->vba.ODMCombine2To1SupportCheckOK[i] == true
+				&& mode_lib->vba.ODMCombine4To1SupportCheckOK[i] == true
+				&& mode_lib->vba.NotEnoughDSCUnits[i] == false
+				&& !mode_lib->vba.NotEnoughDSCSlices[i]
+				&& !mode_lib->vba.ImmediateFlipOrHostVMAndPStateWithMALLFullFrameOrPhantomPipe
+				&& !mode_lib->vba.InvalidCombinationOfMALLUseForPStateAndStaticScreen
+				&& mode_lib->vba.DSCCLKRequiredMoreThanSupported[i] == false
+				&& mode_lib->vba.PixelsPerLinePerDSCUnitSupport[i]
+				&& mode_lib->vba.DTBCLKRequiredMoreThanSupported[i] == false
+				&& !mode_lib->vba.InvalidCombinationOfMALLUseForPState
+				&& !mode_lib->vba.ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified
+				&& mode_lib->vba.ROBSupport[i][j] == true
+				&& mode_lib->vba.DISPCLK_DPPCLK_Support[i][j] == true
+				&& mode_lib->vba.TotalAvailablePipesSupport[i][j] == true
+				&& mode_lib->vba.NumberOfOTGSupport == true
+				&& mode_lib->vba.NumberOfHDMIFRLSupport == true
+				&& mode_lib->vba.EnoughWritebackUnits == true
+				&& mode_lib->vba.WritebackLatencySupport == true
+				&& mode_lib->vba.WritebackScaleRatioAndTapsSupport == true
+				&& mode_lib->vba.CursorSupport == true && mode_lib->vba.PitchSupport == true
+				&& mode_lib->vba.ViewportExceedsSurface == false
+				&& mode_lib->vba.PrefetchSupported[i][j] == true
+				&& mode_lib->vba.VActiveBandwithSupport[i][j] == true
+				&& mode_lib->vba.DynamicMetadataSupported[i][j] == true
+				&& mode_lib->vba.TotalVerticalActiveBandwidthSupport[i][j] == true
+				&& mode_lib->vba.VRatioInPrefetchSupported[i][j] == true
+				&& mode_lib->vba.PTEBufferSizeNotExceeded[i][j] == true
+				&& mode_lib->vba.DCCMetaBufferSizeNotExceeded[i][j] == true
+				&& mode_lib->vba.NonsupportedDSCInputBPC == false
+				&& !mode_lib->vba.ExceededMALLSize
+				&& ((mode_lib->vba.HostVMEnable == false
+				&& !mode_lib->vba.ImmediateFlipRequiredFinal)
+				|| mode_lib->vba.ImmediateFlipSupportedForState[i][j])
+				&& (!mode_lib->vba.DRAMClockChangeRequirementFinal
+				|| i == v->soc.num_states - 1
+				|| mode_lib->vba.DRAMClockChangeSupport[i][j] != dm_dram_clock_change_unsupported)
+				&& (!mode_lib->vba.FCLKChangeRequirementFinal || i == v->soc.num_states - 1
+				|| mode_lib->vba.FCLKChangeSupport[i][j] != dm_fclock_change_unsupported)
+				&& (!mode_lib->vba.USRRetrainingRequiredFinal
+				|| mode_lib->vba.USRRetrainingSupport[i][j])) {
+				mode_lib->vba.ModeSupport[i][j] = true;
+			} else {
+				mode_lib->vba.ModeSupport[i][j] = false;
+			}
+		}
+	}
+
+	MaximumMPCCombine = 0;
+
+	for (i = v->soc.num_states; i >= 0; i--) {
+		if (i == v->soc.num_states || mode_lib->vba.ModeSupport[i][0] == true ||
+				mode_lib->vba.ModeSupport[i][1] == true) {
+			mode_lib->vba.VoltageLevel = i;
+			mode_lib->vba.ModeIsSupported = mode_lib->vba.ModeSupport[i][0] == true
+					|| mode_lib->vba.ModeSupport[i][1] == true;
+
+			if ((mode_lib->vba.ModeSupport[i][0] == false && mode_lib->vba.ModeSupport[i][1] == true)
+				|| MPCCombineMethodAsPossible
+				|| (MPCCombineMethodAsNeededForPStateChangeAndVoltage
+				&& mode_lib->vba.DRAMClockChangeRequirementFinal
+				&& (((mode_lib->vba.DRAMClockChangeSupport[i][1] == dm_dram_clock_change_vactive
+				|| mode_lib->vba.DRAMClockChangeSupport[i][1] ==
+						dm_dram_clock_change_vactive_w_mall_full_frame
+				|| mode_lib->vba.DRAMClockChangeSupport[i][1] ==
+						dm_dram_clock_change_vactive_w_mall_sub_vp)
+				&& !(mode_lib->vba.DRAMClockChangeSupport[i][0] == dm_dram_clock_change_vactive
+				|| mode_lib->vba.DRAMClockChangeSupport[i][0] ==
+						dm_dram_clock_change_vactive_w_mall_full_frame
+				|| mode_lib->vba.DRAMClockChangeSupport[i][0] ==
+						dm_dram_clock_change_vactive_w_mall_sub_vp))
+				|| ((mode_lib->vba.DRAMClockChangeSupport[i][1] == dm_dram_clock_change_vblank
+				|| mode_lib->vba.DRAMClockChangeSupport[i][1] ==
+						dm_dram_clock_change_vblank_w_mall_full_frame
+				|| mode_lib->vba.DRAMClockChangeSupport[i][1] ==
+						dm_dram_clock_change_vblank_w_mall_sub_vp)
+				&& mode_lib->vba.DRAMClockChangeSupport[i][0] == dm_dram_clock_change_unsupported)))
+				|| (MPCCombineMethodAsNeededForPStateChangeAndVoltage &&
+				mode_lib->vba.FCLKChangeRequirementFinal
+				&& ((mode_lib->vba.FCLKChangeSupport[i][1] == dm_fclock_change_vactive
+				&& mode_lib->vba.FCLKChangeSupport[i][0] != dm_fclock_change_vactive)
+				|| (mode_lib->vba.FCLKChangeSupport[i][1] == dm_fclock_change_vblank
+				&& mode_lib->vba.FCLKChangeSupport[i][0] == dm_fclock_change_unsupported)))) {
+				MaximumMPCCombine = 1;
+			} else {
+				MaximumMPCCombine = 0;
+			}
+		}
+	}
+
+	mode_lib->vba.ImmediateFlipSupport =
+			mode_lib->vba.ImmediateFlipSupportedForState[mode_lib->vba.VoltageLevel][MaximumMPCCombine];
+	mode_lib->vba.UnboundedRequestEnabled =
+			mode_lib->vba.UnboundedRequestEnabledAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine];
+	mode_lib->vba.CompressedBufferSizeInkByte =
+			mode_lib->vba.CompressedBufferSizeInkByteAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine]; // Not used, informational
+
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		mode_lib->vba.MPCCombineEnable[k] =
+				mode_lib->vba.MPCCombine[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.DPPPerPlane[k] = mode_lib->vba.NoOfDPP[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.SwathHeightY[k] =
+				mode_lib->vba.SwathHeightYAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.SwathHeightC[k] =
+				mode_lib->vba.SwathHeightCAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.DETBufferSizeInKByte[k] =
+			mode_lib->vba.DETBufferSizeInKByteAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.DETBufferSizeY[k] =
+				mode_lib->vba.DETBufferSizeYAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.DETBufferSizeC[k] =
+				mode_lib->vba.DETBufferSizeCAllStates[mode_lib->vba.VoltageLevel][MaximumMPCCombine][k];
+		mode_lib->vba.OutputType[k] = mode_lib->vba.OutputTypePerState[mode_lib->vba.VoltageLevel][k];
+		mode_lib->vba.OutputRate[k] = mode_lib->vba.OutputRatePerState[mode_lib->vba.VoltageLevel][k];
+	}
+
+	mode_lib->vba.DCFCLK = mode_lib->vba.DCFCLKState[mode_lib->vba.VoltageLevel][MaximumMPCCombine];
+	mode_lib->vba.DRAMSpeed = mode_lib->vba.DRAMSpeedPerState[mode_lib->vba.VoltageLevel];
+	mode_lib->vba.FabricClock = mode_lib->vba.FabricClockPerState[mode_lib->vba.VoltageLevel];
+	mode_lib->vba.SOCCLK = mode_lib->vba.SOCCLKPerState[mode_lib->vba.VoltageLevel];
+	mode_lib->vba.ReturnBW = mode_lib->vba.ReturnBWPerState[mode_lib->vba.VoltageLevel][MaximumMPCCombine];
+	mode_lib->vba.DISPCLK = mode_lib->vba.RequiredDISPCLK[mode_lib->vba.VoltageLevel][MaximumMPCCombine];
+	mode_lib->vba.maxMpcComb = MaximumMPCCombine;
+
+	for (k = 0; k <= mode_lib->vba.NumberOfActiveSurfaces - 1; k++) {
+		if (mode_lib->vba.BlendingAndTiming[k] == k) {
+			mode_lib->vba.ODMCombineEnabled[k] =
+					mode_lib->vba.ODMCombineEnablePerState[mode_lib->vba.VoltageLevel][k];
+		} else {
+			mode_lib->vba.ODMCombineEnabled[k] = dm_odm_combine_mode_disabled;
+		}
+
+		mode_lib->vba.DSCEnabled[k] = mode_lib->vba.RequiresDSC[mode_lib->vba.VoltageLevel][k];
+		mode_lib->vba.FECEnable[k] = mode_lib->vba.RequiresFEC[mode_lib->vba.VoltageLevel][k];
+		mode_lib->vba.OutputBpp[k] = mode_lib->vba.OutputBppPerState[mode_lib->vba.VoltageLevel][k];
+	}
+
+	mode_lib->vba.UrgentWatermark = mode_lib->vba.Watermark.UrgentWatermark;
+	mode_lib->vba.StutterEnterPlusExitWatermark = mode_lib->vba.Watermark.StutterEnterPlusExitWatermark;
+	mode_lib->vba.StutterExitWatermark = mode_lib->vba.Watermark.StutterExitWatermark;
+	mode_lib->vba.WritebackDRAMClockChangeWatermark = mode_lib->vba.Watermark.WritebackDRAMClockChangeWatermark;
+	mode_lib->vba.DRAMClockChangeWatermark = mode_lib->vba.Watermark.DRAMClockChangeWatermark;
+	mode_lib->vba.UrgentLatency = mode_lib->vba.UrgLatency[mode_lib->vba.VoltageLevel];
+	mode_lib->vba.DCFCLKDeepSleep = mode_lib->vba.ProjectedDCFCLKDeepSleep[mode_lib->vba.VoltageLevel][MaximumMPCCombine];
+
+	/* VBA emits an Error type and Error Msg output here, but that is not necessary for DML-C */
+} // ModeSupportAndSystemConfigurationFull
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
new file mode 100644
index 000000000000..c62e0991358b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.h
@@ -0,0 +1,57 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DML32_DISPLAY_MODE_VBA_H__
+#define __DML32_DISPLAY_MODE_VBA_H__
+
+#include "../display_mode_enums.h"
+
+// Enable verbose debug messages
+//#define __DML_VBA_DEBUG__
+// For DML-C changes that have not been propagated to VBA yet
+//#define __DML_VBA_ALLOW_DELTA__
+
+// Move these to IP parameters/constants
+// The vstartup value at which DML starts checking whether the mode can be supported
+#define __DML_VBA_MIN_VSTARTUP__    9
+
+// Delay in DCFCLK from ARB to DET (first number is ARB to SDPIF, second is SDPIF to DET)
+#define __DML_ARB_TO_RET_DELAY__    (7 + 95)
+
+// Fudge factor for the minimum DCFCLK calculation
+#define __DML_MIN_DCFCLK_FACTOR__   1.15
+
+// Prefetch schedule max vratio
+#define __DML_MAX_VRATIO_PRE__ 4.0
+
+#define BPP_INVALID 0
+#define BPP_BLENDED_PIPE 0xffffffff
+
+struct display_mode_lib;
+
+void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_lib);
+void dml32_recalculate(struct display_mode_lib *mode_lib);
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
new file mode 100644
index 000000000000..6509a84eeb64
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
@@ -0,0 +1,6254 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#include "display_mode_vba_util_32.h"
+#include "../dml_inline_defs.h"
+#include "display_mode_vba_32.h"
+#include "../display_mode_lib.h"
+
+unsigned int dml32_dscceComputeDelay(
+		unsigned int bpc,
+		double BPP,
+		unsigned int sliceWidth,
+		unsigned int numSlices,
+		enum output_format_class pixelFormat,
+		enum output_encoder_class Output)
+{
+	// valid bpc         = source bits per component in the set of {8, 10, 12}
+	// valid bpp         = increments of 1/16 of a bit
+	//                     min = 6/7/8 in N420/N422/444, respectively
+	//                     max = such that compression is 1:1
+	// valid sliceWidth  = number of pixels per slice line,
+	//                     must be less than or equal to 5184/numSlices (or 4096/numSlices in 420 mode)
+	// valid numSlices   = number of slices in the horizontal direction per DSC engine in the set of {1, 2, 3, 4}
+	// valid pixelFormat = pixel/color format in the set of {:N444_RGB, :S422, :N422, :N420}
+
+	// fixed value
+	unsigned int rcModelSize = 8192;
+
+	// N422/N420 operate at 2 pixels per clock
+	unsigned int pixelsPerClock, lstall, D, initalXmitDelay, w, s, ix, wx, p, l0, a, ax, L,
+	Delay, pixels;
+
+	if (pixelFormat == dm_420)
+		pixelsPerClock = 2;
+	else if (pixelFormat == dm_n422)
+		pixelsPerClock = 2;
+	// all other modes operate at 1 pixel per clock
+	else
+		pixelsPerClock = 1;
+
+	//initial transmit delay as per PPS
+	initalXmitDelay = dml_round(rcModelSize / 2.0 / BPP / pixelsPerClock);
+
+	//compute ssm delay
+	if (bpc == 8)
+		D = 81;
+	else if (bpc == 10)
+		D = 89;
+	else
+		D = 113;
+
+	//divide by pixel per cycle to compute slice width as seen by DSC
+	w = sliceWidth / pixelsPerClock;
+
+	//422 mode has an additional cycle of delay
+	if (pixelFormat == dm_420 || pixelFormat == dm_444 || pixelFormat == dm_n422)
+		s = 0;
+	else
+		s = 1;
+
+	//main calculation for the dscce
+	ix = initalXmitDelay + 45;
+	wx = (w + 2) / 3;
+	p = 3 * wx - w;
+	l0 = ix / w;
+	a = ix + p * l0;
+	ax = (a + 2) / 3 + D + 6 + 1;
+	L = (ax + wx - 1) / wx;
+	if ((ix % w) == 0 && p != 0)
+		lstall = 1;
+	else
+		lstall = 0;
+	Delay = L * wx * (numSlices - 1) + ax + s + lstall + 22;
+
+	//dsc processes 3 pixel containers per cycle and a container can contain 1 or 2 pixels
+	pixels = Delay * 3 * pixelsPerClock;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: bpc: %d\n", __func__, bpc);
+	dml_print("DML::%s: BPP: %f\n", __func__, BPP);
+	dml_print("DML::%s: sliceWidth: %d\n", __func__, sliceWidth);
+	dml_print("DML::%s: numSlices: %d\n", __func__, numSlices);
+	dml_print("DML::%s: pixelFormat: %d\n", __func__, pixelFormat);
+	dml_print("DML::%s: Output: %d\n", __func__, Output);
+	dml_print("DML::%s: pixels: %d\n", __func__, pixels);
+#endif
+
+	return pixels;
+}
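+
+/*
+ * Rough worked example of the calculation above (example values only, not
+ * taken from any real timing): with bpc = 8, BPP = 8.0, sliceWidth = 960,
+ * numSlices = 4 and a dm_444 pixel format (1 pixel per clock):
+ *   initalXmitDelay = round(8192 / 2 / 8 / 1)       = 512
+ *   D = 81, w = 960, s = 0
+ *   ix = 512 + 45                                   = 557
+ *   wx = (960 + 2) / 3                              = 320
+ *   p = 3 * 320 - 960                               = 0
+ *   l0 = 557 / 960                                  = 0
+ *   a = 557, ax = (557 + 2) / 3 + 81 + 6 + 1        = 274
+ *   L = (274 + 320 - 1) / 320                       = 1, lstall = 0
+ *   Delay = 1 * 320 * (4 - 1) + 274 + 0 + 0 + 22    = 1256
+ *   pixels = 1256 * 3 * 1                           = 3768
+ */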
+
+unsigned int dml32_dscComputeDelay(enum output_format_class pixelFormat, enum output_encoder_class Output)
+{
+	unsigned int Delay = 0;
+
+	if (pixelFormat == dm_420) {
+		//   sfr
+		Delay = Delay + 2;
+		//   dsccif
+		Delay = Delay + 0;
+		//   dscc - input deserializer
+		Delay = Delay + 3;
+		//   dscc gets pixels every other cycle
+		Delay = Delay + 2;
+		//   dscc - input cdc fifo
+		Delay = Delay + 12;
+		//   dscc gets pixels every other cycle
+		Delay = Delay + 13;
+		//   dscc - cdc uncertainty
+		Delay = Delay + 2;
+		//   dscc - output cdc fifo
+		Delay = Delay + 7;
+		//   dscc gets pixels every other cycle
+		Delay = Delay + 3;
+		//   dscc - cdc uncertainty
+		Delay = Delay + 2;
+		//   dscc - output serializer
+		Delay = Delay + 1;
+		//   sft
+		Delay = Delay + 1;
+	} else if (pixelFormat == dm_n422 || (pixelFormat != dm_444)) {
+		//   sfr
+		Delay = Delay + 2;
+		//   dsccif
+		Delay = Delay + 1;
+		//   dscc - input deserializer
+		Delay = Delay + 5;
+		//  dscc - input cdc fifo
+		Delay = Delay + 25;
+		//   dscc - cdc uncertainty
+		Delay = Delay + 2;
+		//   dscc - output cdc fifo
+		Delay = Delay + 10;
+		//   dscc - cdc uncertainty
+		Delay = Delay + 2;
+		//   dscc - output serializer
+		Delay = Delay + 1;
+		//   sft
+		Delay = Delay + 1;
+	} else {
+		//   sfr
+		Delay = Delay + 2;
+		//   dsccif
+		Delay = Delay + 0;
+		//   dscc - input deserializer
+		Delay = Delay + 3;
+		//   dscc - input cdc fifo
+		Delay = Delay + 12;
+		//   dscc - cdc uncertainty
+		Delay = Delay + 2;
+		//   dscc - output cdc fifo
+		Delay = Delay + 7;
+		//   dscc - output serializer
+		Delay = Delay + 1;
+		//   dscc - cdc uncertainty
+		Delay = Delay + 2;
+		//   sft
+		Delay = Delay + 1;
+	}
+
+	return Delay;
+}
+
+
+bool IsVertical(enum dm_rotation_angle Scan)
+{
+	bool is_vert = false;
+
+	if (Scan == dm_rotation_90 || Scan == dm_rotation_90m || Scan == dm_rotation_270 || Scan == dm_rotation_270m)
+		is_vert = true;
+	else
+		is_vert = false;
+	return is_vert;
+}
+
+void dml32_CalculateSinglePipeDPPCLKAndSCLThroughput(
+		double HRatio,
+		double HRatioChroma,
+		double VRatio,
+		double VRatioChroma,
+		double MaxDCHUBToPSCLThroughput,
+		double MaxPSCLToLBThroughput,
+		double PixelClock,
+		enum source_format_class SourcePixelFormat,
+		unsigned int HTaps,
+		unsigned int HTapsChroma,
+		unsigned int VTaps,
+		unsigned int VTapsChroma,
+
+		/* output */
+		double *PSCL_THROUGHPUT,
+		double *PSCL_THROUGHPUT_CHROMA,
+		double *DPPCLKUsingSingleDPP)
+{
+	double DPPCLKUsingSingleDPPLuma;
+	double DPPCLKUsingSingleDPPChroma;
+
+	if (HRatio > 1) {
+		*PSCL_THROUGHPUT = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput * HRatio /
+				dml_ceil((double) HTaps / 6.0, 1.0));
+	} else {
+		*PSCL_THROUGHPUT = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput);
+	}
+
+	DPPCLKUsingSingleDPPLuma = PixelClock * dml_max3(VTaps / 6 * dml_min(1, HRatio), HRatio * VRatio /
+			*PSCL_THROUGHPUT, 1);
+
+	if ((HTaps > 6 || VTaps > 6) && DPPCLKUsingSingleDPPLuma < 2 * PixelClock)
+		DPPCLKUsingSingleDPPLuma = 2 * PixelClock;
+
+	if ((SourcePixelFormat != dm_420_8 && SourcePixelFormat != dm_420_10 && SourcePixelFormat != dm_420_12 &&
+			SourcePixelFormat != dm_rgbe_alpha)) {
+		*PSCL_THROUGHPUT_CHROMA = 0;
+		*DPPCLKUsingSingleDPP = DPPCLKUsingSingleDPPLuma;
+	} else {
+		if (HRatioChroma > 1) {
+			*PSCL_THROUGHPUT_CHROMA = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput *
+					HRatioChroma / dml_ceil((double) HTapsChroma / 6.0, 1.0));
+		} else {
+			*PSCL_THROUGHPUT_CHROMA = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput);
+		}
+		DPPCLKUsingSingleDPPChroma = PixelClock * dml_max3(VTapsChroma / 6 * dml_min(1, HRatioChroma),
+				HRatioChroma * VRatioChroma / *PSCL_THROUGHPUT_CHROMA, 1);
+		if ((HTapsChroma > 6 || VTapsChroma > 6) && DPPCLKUsingSingleDPPChroma < 2 * PixelClock)
+			DPPCLKUsingSingleDPPChroma = 2 * PixelClock;
+		*DPPCLKUsingSingleDPP = dml_max(DPPCLKUsingSingleDPPLuma, DPPCLKUsingSingleDPPChroma);
+	}
+}
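+
+/*
+ * Rough worked example for the formula above (values are illustrative only):
+ * with HRatio = 3.0, VRatio = 3.0, HTaps = 8, VTaps = 6,
+ * MaxDCHUBToPSCLThroughput = 4, MaxPSCLToLBThroughput = 2 and a 300 MHz pixel
+ * clock on a luma-only format:
+ *   PSCL_THROUGHPUT = min(4, 2 * 3.0 / ceil(8 / 6.0)) = min(4, 3) = 3
+ *   DPPCLKUsingSingleDPP = 300 * max3(6 / 6 * min(1, 3.0),
+ *                                     3.0 * 3.0 / 3, 1) = 300 * 3 = 900 MHz
+ * Since 900 MHz is already >= 2 * PixelClock (600 MHz), the >6-tap bump to
+ * 2 * PixelClock does not apply.
+ */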
+
+void dml32_CalculateBytePerPixelAndBlockSizes(
+		enum source_format_class SourcePixelFormat,
+		enum dm_swizzle_mode SurfaceTiling,
+
+		/* Output */
+		unsigned int *BytePerPixelY,
+		unsigned int *BytePerPixelC,
+		double  *BytePerPixelDETY,
+		double  *BytePerPixelDETC,
+		unsigned int *BlockHeight256BytesY,
+		unsigned int *BlockHeight256BytesC,
+		unsigned int *BlockWidth256BytesY,
+		unsigned int *BlockWidth256BytesC,
+		unsigned int *MacroTileHeightY,
+		unsigned int *MacroTileHeightC,
+		unsigned int *MacroTileWidthY,
+		unsigned int *MacroTileWidthC)
+{
+	if (SourcePixelFormat == dm_444_64) {
+		*BytePerPixelDETY = 8;
+		*BytePerPixelDETC = 0;
+		*BytePerPixelY = 8;
+		*BytePerPixelC = 0;
+	} else if (SourcePixelFormat == dm_444_32 || SourcePixelFormat == dm_rgbe) {
+		*BytePerPixelDETY = 4;
+		*BytePerPixelDETC = 0;
+		*BytePerPixelY = 4;
+		*BytePerPixelC = 0;
+	} else if (SourcePixelFormat == dm_444_16 || SourcePixelFormat == dm_mono_16) {
+		*BytePerPixelDETY = 2;
+		*BytePerPixelDETC = 0;
+		*BytePerPixelY = 2;
+		*BytePerPixelC = 0;
+	} else if (SourcePixelFormat == dm_444_8) {
+		*BytePerPixelDETY = 1;
+		*BytePerPixelDETC = 0;
+		*BytePerPixelY = 1;
+		*BytePerPixelC = 0;
+	} else if (SourcePixelFormat == dm_rgbe_alpha) {
+		*BytePerPixelDETY = 4;
+		*BytePerPixelDETC = 1;
+		*BytePerPixelY = 4;
+		*BytePerPixelC = 1;
+	} else if (SourcePixelFormat == dm_420_8) {
+		*BytePerPixelDETY = 1;
+		*BytePerPixelDETC = 2;
+		*BytePerPixelY = 1;
+		*BytePerPixelC = 2;
+	} else if (SourcePixelFormat == dm_420_12) {
+		*BytePerPixelDETY = 2;
+		*BytePerPixelDETC = 4;
+		*BytePerPixelY = 2;
+		*BytePerPixelC = 4;
+	} else {
+		*BytePerPixelDETY = 4.0 / 3;
+		*BytePerPixelDETC = 8.0 / 3;
+		*BytePerPixelY = 2;
+		*BytePerPixelC = 4;
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: SourcePixelFormat = %d\n", __func__, SourcePixelFormat);
+	dml_print("DML::%s: BytePerPixelDETY = %f\n", __func__, *BytePerPixelDETY);
+	dml_print("DML::%s: BytePerPixelDETC = %f\n", __func__, *BytePerPixelDETC);
+	dml_print("DML::%s: BytePerPixelY    = %d\n", __func__, *BytePerPixelY);
+	dml_print("DML::%s: BytePerPixelC    = %d\n", __func__, *BytePerPixelC);
+#endif
+	if ((SourcePixelFormat == dm_444_64 || SourcePixelFormat == dm_444_32
+			|| SourcePixelFormat == dm_444_16
+			|| SourcePixelFormat == dm_444_8
+			|| SourcePixelFormat == dm_mono_16
+			|| SourcePixelFormat == dm_mono_8
+			|| SourcePixelFormat == dm_rgbe)) {
+		if (SurfaceTiling == dm_sw_linear)
+			*BlockHeight256BytesY = 1;
+		else if (SourcePixelFormat == dm_444_64)
+			*BlockHeight256BytesY = 4;
+		else if (SourcePixelFormat == dm_444_8)
+			*BlockHeight256BytesY = 16;
+		else
+			*BlockHeight256BytesY = 8;
+
+		*BlockWidth256BytesY = 256U / *BytePerPixelY / *BlockHeight256BytesY;
+		*BlockHeight256BytesC = 0;
+		*BlockWidth256BytesC = 0;
+	} else {
+		if (SurfaceTiling == dm_sw_linear) {
+			*BlockHeight256BytesY = 1;
+			*BlockHeight256BytesC = 1;
+		} else if (SourcePixelFormat == dm_rgbe_alpha) {
+			*BlockHeight256BytesY = 8;
+			*BlockHeight256BytesC = 16;
+		} else if (SourcePixelFormat == dm_420_8) {
+			*BlockHeight256BytesY = 16;
+			*BlockHeight256BytesC = 8;
+		} else {
+			*BlockHeight256BytesY = 8;
+			*BlockHeight256BytesC = 8;
+		}
+		*BlockWidth256BytesY = 256U / *BytePerPixelY / *BlockHeight256BytesY;
+		*BlockWidth256BytesC = 256U / *BytePerPixelC / *BlockHeight256BytesC;
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: BlockWidth256BytesY  = %d\n", __func__, *BlockWidth256BytesY);
+	dml_print("DML::%s: BlockHeight256BytesY = %d\n", __func__, *BlockHeight256BytesY);
+	dml_print("DML::%s: BlockWidth256BytesC  = %d\n", __func__, *BlockWidth256BytesC);
+	dml_print("DML::%s: BlockHeight256BytesC = %d\n", __func__, *BlockHeight256BytesC);
+#endif
+
+	if (SurfaceTiling == dm_sw_linear) {
+		*MacroTileHeightY = *BlockHeight256BytesY;
+		*MacroTileWidthY = 256 / *BytePerPixelY / *MacroTileHeightY;
+		*MacroTileHeightC = *BlockHeight256BytesC;
+		if (*MacroTileHeightC == 0)
+			*MacroTileWidthC = 0;
+		else
+			*MacroTileWidthC = 256 / *BytePerPixelC / *MacroTileHeightC;
+	} else if (SurfaceTiling == dm_sw_64kb_d || SurfaceTiling == dm_sw_64kb_d_t ||
+			SurfaceTiling == dm_sw_64kb_d_x || SurfaceTiling == dm_sw_64kb_r_x) {
+		*MacroTileHeightY = 16 * *BlockHeight256BytesY;
+		*MacroTileWidthY = 65536 / *BytePerPixelY / *MacroTileHeightY;
+		*MacroTileHeightC = 16 * *BlockHeight256BytesC;
+		if (*MacroTileHeightC == 0)
+			*MacroTileWidthC = 0;
+		else
+			*MacroTileWidthC = 65536 / *BytePerPixelC / *MacroTileHeightC;
+	} else {
+		*MacroTileHeightY = 32 * *BlockHeight256BytesY;
+		*MacroTileWidthY = 65536 * 4 / *BytePerPixelY / *MacroTileHeightY;
+		*MacroTileHeightC = 32 * *BlockHeight256BytesC;
+		if (*MacroTileHeightC == 0)
+			*MacroTileWidthC = 0;
+		else
+			*MacroTileWidthC = 65536 * 4 / *BytePerPixelC / *MacroTileHeightC;
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: MacroTileWidthY  = %d\n", __func__, *MacroTileWidthY);
+	dml_print("DML::%s: MacroTileHeightY = %d\n", __func__, *MacroTileHeightY);
+	dml_print("DML::%s: MacroTileWidthC  = %d\n", __func__, *MacroTileWidthC);
+	dml_print("DML::%s: MacroTileHeightC = %d\n", __func__, *MacroTileHeightC);
+#endif
+} // CalculateBytePerPixelAndBlockSizes
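+
+/*
+ * Example of the sizing above (illustrative only): for dm_444_32 with
+ * dm_sw_64kb_r_x tiling, BytePerPixelY = 4 and the luma 256-byte block is
+ * 8 lines tall, so BlockWidth256BytesY = 256 / 4 / 8 = 8.  The 64 KB macro
+ * tile is then MacroTileHeightY = 16 * 8 = 128 lines and
+ * MacroTileWidthY = 65536 / 4 / 128 = 128 pixels; the chroma sizes stay 0
+ * for a luma-only format.
+ */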
+
+void dml32_CalculatedoublePipeDPPCLKAndSCLThroughput(
+		double HRatio,
+		double HRatioChroma,
+		double VRatio,
+		double VRatioChroma,
+		double MaxDCHUBToPSCLThroughput,
+		double MaxPSCLToLBThroughput,
+		double PixelClock,
+		enum source_format_class SourcePixelFormat,
+		unsigned int HTaps,
+		unsigned int HTapsChroma,
+		unsigned int VTaps,
+		unsigned int VTapsChroma,
+
+		/* output */
+		double *PSCL_THROUGHPUT,
+		double *PSCL_THROUGHPUT_CHROMA,
+		double *DPPCLKUsingdoubleDPP)
+{
+	double DPPCLKUsingdoubleDPPLuma;
+	double DPPCLKUsingdoubleDPPChroma;
+
+	if (HRatio > 1) {
+		*PSCL_THROUGHPUT = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput * HRatio /
+				dml_ceil((double) HTaps / 6.0, 1.0));
+	} else {
+		*PSCL_THROUGHPUT = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput);
+	}
+
+	DPPCLKUsingdoubleDPPLuma = PixelClock * dml_max3(VTaps / 6 * dml_min(1, HRatio), HRatio * VRatio /
+			*PSCL_THROUGHPUT, 1);
+
+	if ((HTaps > 6 || VTaps > 6) && DPPCLKUsingdoubleDPPLuma < 2 * PixelClock)
+		DPPCLKUsingdoubleDPPLuma = 2 * PixelClock;
+
+	if ((SourcePixelFormat != dm_420_8 && SourcePixelFormat != dm_420_10 && SourcePixelFormat != dm_420_12 &&
+			SourcePixelFormat != dm_rgbe_alpha)) {
+		*PSCL_THROUGHPUT_CHROMA = 0;
+		*DPPCLKUsingdoubleDPP = DPPCLKUsingdoubleDPPLuma;
+	} else {
+		if (HRatioChroma > 1) {
+			*PSCL_THROUGHPUT_CHROMA = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput *
+					HRatioChroma / dml_ceil((double) HTapsChroma / 6.0, 1.0));
+		} else {
+			*PSCL_THROUGHPUT_CHROMA = dml_min(MaxDCHUBToPSCLThroughput, MaxPSCLToLBThroughput);
+		}
+		DPPCLKUsingdoubleDPPChroma = PixelClock * dml_max3(VTapsChroma / 6 * dml_min(1, HRatioChroma),
+				HRatioChroma * VRatioChroma / *PSCL_THROUGHPUT_CHROMA, 1);
+		if ((HTapsChroma > 6 || VTapsChroma > 6) && DPPCLKUsingdoubleDPPChroma < 2 * PixelClock)
+			DPPCLKUsingdoubleDPPChroma = 2 * PixelClock;
+		*DPPCLKUsingdoubleDPP = dml_max(DPPCLKUsingdoubleDPPLuma, DPPCLKUsingdoubleDPPChroma);
+	}
+}
+
+void dml32_CalculateSwathAndDETConfiguration(
+		unsigned int DETSizeOverride[],
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		unsigned int ConfigReturnBufferSizeInKByte,
+		unsigned int MaxTotalDETInKByte,
+		unsigned int MinCompressedBufferSizeInKByte,
+		double ForceSingleDPP,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int nomDETInKByte,
+		enum unbounded_requesting_policy UseUnboundedRequestingFinal,
+		unsigned int CompressedBufferSegmentSizeInkByteFinal,
+		enum output_encoder_class Output[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double MaximumSwathWidthLuma[],
+		double MaximumSwathWidthChroma[],
+		enum dm_rotation_angle SourceRotation[],
+		bool ViewportStationary[],
+		enum source_format_class SourcePixelFormat[],
+		enum dm_swizzle_mode SurfaceTiling[],
+		unsigned int ViewportWidth[],
+		unsigned int ViewportHeight[],
+		unsigned int ViewportXStart[],
+		unsigned int ViewportYStart[],
+		unsigned int ViewportXStartC[],
+		unsigned int ViewportYStartC[],
+		unsigned int SurfaceWidthY[],
+		unsigned int SurfaceWidthC[],
+		unsigned int SurfaceHeightY[],
+		unsigned int SurfaceHeightC[],
+		unsigned int Read256BytesBlockHeightY[],
+		unsigned int Read256BytesBlockHeightC[],
+		unsigned int Read256BytesBlockWidthY[],
+		unsigned int Read256BytesBlockWidthC[],
+		enum odm_combine_mode ODMMode[],
+		unsigned int BlendingAndTiming[],
+		unsigned int BytePerPixY[],
+		unsigned int BytePerPixC[],
+		double BytePerPixDETY[],
+		double BytePerPixDETC[],
+		unsigned int HActive[],
+		double HRatio[],
+		double HRatioChroma[],
+		unsigned int DPPPerSurface[],
+
+		/* Output */
+		unsigned int swath_width_luma_ub[],
+		unsigned int swath_width_chroma_ub[],
+		double SwathWidth[],
+		double SwathWidthChroma[],
+		unsigned int SwathHeightY[],
+		unsigned int SwathHeightC[],
+		unsigned int DETBufferSizeInKByte[],
+		unsigned int DETBufferSizeY[],
+		unsigned int DETBufferSizeC[],
+		bool *UnboundedRequestEnabled,
+		unsigned int *CompressedBufferSizeInkByte,
+		bool ViewportSizeSupportPerSurface[],
+		bool *ViewportSizeSupport)
+{
+	unsigned int MaximumSwathHeightY[DC__NUM_DPP__MAX];
+	unsigned int MaximumSwathHeightC[DC__NUM_DPP__MAX];
+	unsigned int RoundedUpMaxSwathSizeBytesY[DC__NUM_DPP__MAX];
+	unsigned int RoundedUpMaxSwathSizeBytesC[DC__NUM_DPP__MAX];
+	unsigned int RoundedUpSwathSizeBytesY;
+	unsigned int RoundedUpSwathSizeBytesC;
+	double SwathWidthdoubleDPP[DC__NUM_DPP__MAX];
+	double SwathWidthdoubleDPPChroma[DC__NUM_DPP__MAX];
+	unsigned int k;
+	unsigned int TotalActiveDPP = 0;
+	bool NoChromaSurfaces = true;
+	unsigned int DETBufferSizeInKByteForSwathCalculation;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: ForceSingleDPP = %d\n", __func__, ForceSingleDPP);
+#endif
+	dml32_CalculateSwathWidth(ForceSingleDPP,
+			NumberOfActiveSurfaces,
+			SourcePixelFormat,
+			SourceRotation,
+			ViewportStationary,
+			ViewportWidth,
+			ViewportHeight,
+			ViewportXStart,
+			ViewportYStart,
+			ViewportXStartC,
+			ViewportYStartC,
+			SurfaceWidthY,
+			SurfaceWidthC,
+			SurfaceHeightY,
+			SurfaceHeightC,
+			ODMMode,
+			BytePerPixY,
+			BytePerPixC,
+			Read256BytesBlockHeightY,
+			Read256BytesBlockHeightC,
+			Read256BytesBlockWidthY,
+			Read256BytesBlockWidthC,
+			BlendingAndTiming,
+			HActive,
+			HRatio,
+			DPPPerSurface,
+
+			/* Output */
+			SwathWidthdoubleDPP,
+			SwathWidthdoubleDPPChroma,
+			SwathWidth,
+			SwathWidthChroma,
+			MaximumSwathHeightY,
+			MaximumSwathHeightC,
+			swath_width_luma_ub,
+			swath_width_chroma_ub);
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		RoundedUpMaxSwathSizeBytesY[k] = swath_width_luma_ub[k] * BytePerPixDETY[k] * MaximumSwathHeightY[k];
+		RoundedUpMaxSwathSizeBytesC[k] = swath_width_chroma_ub[k] * BytePerPixDETC[k] * MaximumSwathHeightC[k];
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%0d DPPPerSurface = %d\n", __func__, k, DPPPerSurface[k]);
+		dml_print("DML::%s: k=%0d swath_width_luma_ub = %d\n", __func__, k, swath_width_luma_ub[k]);
+		dml_print("DML::%s: k=%0d BytePerPixDETY = %f\n", __func__, k, BytePerPixDETY[k]);
+		dml_print("DML::%s: k=%0d MaximumSwathHeightY = %d\n", __func__, k, MaximumSwathHeightY[k]);
+		dml_print("DML::%s: k=%0d RoundedUpMaxSwathSizeBytesY = %d\n", __func__, k,
+				RoundedUpMaxSwathSizeBytesY[k]);
+		dml_print("DML::%s: k=%0d swath_width_chroma_ub = %d\n", __func__, k, swath_width_chroma_ub[k]);
+		dml_print("DML::%s: k=%0d BytePerPixDETC = %f\n", __func__, k, BytePerPixDETC[k]);
+		dml_print("DML::%s: k=%0d MaximumSwathHeightC = %d\n", __func__, k, MaximumSwathHeightC[k]);
+		dml_print("DML::%s: k=%0d RoundedUpMaxSwathSizeBytesC = %d\n", __func__, k,
+				RoundedUpMaxSwathSizeBytesC[k]);
+#endif
+
+		if (SourcePixelFormat[k] == dm_420_10) {
+			RoundedUpMaxSwathSizeBytesY[k] = dml_ceil((unsigned int) RoundedUpMaxSwathSizeBytesY[k], 256);
+			RoundedUpMaxSwathSizeBytesC[k] = dml_ceil((unsigned int) RoundedUpMaxSwathSizeBytesC[k], 256);
+		}
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		TotalActiveDPP = TotalActiveDPP + (ForceSingleDPP ? 1 : DPPPerSurface[k]);
+		if (SourcePixelFormat[k] == dm_420_8 || SourcePixelFormat[k] == dm_420_10 ||
+				SourcePixelFormat[k] == dm_420_12 || SourcePixelFormat[k] == dm_rgbe_alpha) {
+			NoChromaSurfaces = false;
+		}
+	}
+
+	*UnboundedRequestEnabled = dml32_UnboundedRequest(UseUnboundedRequestingFinal, TotalActiveDPP,
+			NoChromaSurfaces, Output[0]);
+
+	dml32_CalculateDETBufferSize(DETSizeOverride,
+			UseMALLForPStateChange,
+			ForceSingleDPP,
+			NumberOfActiveSurfaces,
+			*UnboundedRequestEnabled,
+			nomDETInKByte,
+			MaxTotalDETInKByte,
+			ConfigReturnBufferSizeInKByte,
+			MinCompressedBufferSizeInKByte,
+			CompressedBufferSegmentSizeInkByteFinal,
+			SourcePixelFormat,
+			ReadBandwidthLuma,
+			ReadBandwidthChroma,
+			RoundedUpMaxSwathSizeBytesY,
+			RoundedUpMaxSwathSizeBytesC,
+			DPPPerSurface,
+
+			/* Output */
+			DETBufferSizeInKByte,    // per hubp pipe
+			CompressedBufferSizeInkByte);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: TotalActiveDPP = %d\n", __func__, TotalActiveDPP);
+	dml_print("DML::%s: nomDETInKByte = %d\n", __func__, nomDETInKByte);
+	dml_print("DML::%s: ConfigReturnBufferSizeInKByte = %d\n", __func__, ConfigReturnBufferSizeInKByte);
+	dml_print("DML::%s: UseUnboundedRequestingFinal = %d\n", __func__, UseUnboundedRequestingFinal);
+	dml_print("DML::%s: UnboundedRequestEnabled = %d\n", __func__, *UnboundedRequestEnabled);
+	dml_print("DML::%s: CompressedBufferSizeInkByte = %d\n", __func__, *CompressedBufferSizeInkByte);
+#endif
+
+	*ViewportSizeSupport = true;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+
+		DETBufferSizeInKByteForSwathCalculation = (UseMALLForPStateChange[k] ==
+				dm_use_mall_pstate_change_phantom_pipe ? 1024 : DETBufferSizeInKByte[k]);
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%0d DETBufferSizeInKByteForSwathCalculation = %d\n", __func__, k,
+				DETBufferSizeInKByteForSwathCalculation);
+#endif
+
+		if (RoundedUpMaxSwathSizeBytesY[k] + RoundedUpMaxSwathSizeBytesC[k] <=
+				DETBufferSizeInKByteForSwathCalculation * 1024 / 2) {
+			SwathHeightY[k] = MaximumSwathHeightY[k];
+			SwathHeightC[k] = MaximumSwathHeightC[k];
+			RoundedUpSwathSizeBytesY = RoundedUpMaxSwathSizeBytesY[k];
+			RoundedUpSwathSizeBytesC = RoundedUpMaxSwathSizeBytesC[k];
+		} else if (RoundedUpMaxSwathSizeBytesY[k] >= 1.5 * RoundedUpMaxSwathSizeBytesC[k] &&
+				RoundedUpMaxSwathSizeBytesY[k] / 2 + RoundedUpMaxSwathSizeBytesC[k] <=
+				DETBufferSizeInKByteForSwathCalculation * 1024 / 2) {
+			SwathHeightY[k] = MaximumSwathHeightY[k] / 2;
+			SwathHeightC[k] = MaximumSwathHeightC[k];
+			RoundedUpSwathSizeBytesY = RoundedUpMaxSwathSizeBytesY[k] / 2;
+			RoundedUpSwathSizeBytesC = RoundedUpMaxSwathSizeBytesC[k];
+		} else if (RoundedUpMaxSwathSizeBytesY[k] < 1.5 * RoundedUpMaxSwathSizeBytesC[k] &&
+				RoundedUpMaxSwathSizeBytesY[k] + RoundedUpMaxSwathSizeBytesC[k] / 2 <=
+				DETBufferSizeInKByteForSwathCalculation * 1024 / 2) {
+			SwathHeightY[k] = MaximumSwathHeightY[k];
+			SwathHeightC[k] = MaximumSwathHeightC[k] / 2;
+			RoundedUpSwathSizeBytesY = RoundedUpMaxSwathSizeBytesY[k];
+			RoundedUpSwathSizeBytesC = RoundedUpMaxSwathSizeBytesC[k] / 2;
+		} else {
+			SwathHeightY[k] = MaximumSwathHeightY[k] / 2;
+			SwathHeightC[k] = MaximumSwathHeightC[k] / 2;
+			RoundedUpSwathSizeBytesY = RoundedUpMaxSwathSizeBytesY[k] / 2;
+			RoundedUpSwathSizeBytesC = RoundedUpMaxSwathSizeBytesC[k] / 2;
+		}
+
+		if ((RoundedUpMaxSwathSizeBytesY[k] / 2 + RoundedUpMaxSwathSizeBytesC[k] / 2 >
+				DETBufferSizeInKByteForSwathCalculation * 1024 / 2)
+				|| SwathWidth[k] > MaximumSwathWidthLuma[k] || (SwathHeightC[k] > 0 &&
+						SwathWidthChroma[k] > MaximumSwathWidthChroma[k])) {
+			*ViewportSizeSupport = false;
+			ViewportSizeSupportPerSurface[k] = false;
+		} else {
+			ViewportSizeSupportPerSurface[k] = true;
+		}
+
+		if (SwathHeightC[k] == 0) {
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d All DET for plane0\n", __func__, k);
+#endif
+			DETBufferSizeY[k] = DETBufferSizeInKByte[k] * 1024;
+			DETBufferSizeC[k] = 0;
+		} else if (RoundedUpSwathSizeBytesY <= 1.5 * RoundedUpSwathSizeBytesC) {
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d Half DET for plane0, half for plane1\n", __func__, k);
+#endif
+			DETBufferSizeY[k] = DETBufferSizeInKByte[k] * 1024 / 2;
+			DETBufferSizeC[k] = DETBufferSizeInKByte[k] * 1024 / 2;
+		} else {
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d 2/3 DET for plane0, 1/3 for plane1\n", __func__, k);
+#endif
+			DETBufferSizeY[k] = dml_floor(DETBufferSizeInKByte[k] * 1024 * 2 / 3, 1024);
+			DETBufferSizeC[k] = DETBufferSizeInKByte[k] * 1024 - DETBufferSizeY[k];
+		}
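+
+		/*
+		 * Example (illustrative numbers): when the luma swath is more than
+		 * 1.5x the chroma swath, plane0 gets 2/3 of the per-pipe DET; for a
+		 * 384 KB DET that is DETBufferSizeY = floor(384 * 1024 * 2 / 3, 1024)
+		 * = 262144 bytes and DETBufferSizeC = 393216 - 262144 = 131072 bytes.
+		 */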
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%0d SwathHeightY = %d\n", __func__, k, SwathHeightY[k]);
+		dml_print("DML::%s: k=%0d SwathHeightC = %d\n", __func__, k, SwathHeightC[k]);
+		dml_print("DML::%s: k=%0d RoundedUpMaxSwathSizeBytesY = %d\n", __func__,
+				k, RoundedUpMaxSwathSizeBytesY[k]);
+		dml_print("DML::%s: k=%0d RoundedUpMaxSwathSizeBytesC = %d\n", __func__,
+				k, RoundedUpMaxSwathSizeBytesC[k]);
+		dml_print("DML::%s: k=%0d RoundedUpSwathSizeBytesY = %d\n", __func__, k, RoundedUpSwathSizeBytesY);
+		dml_print("DML::%s: k=%0d RoundedUpSwathSizeBytesC = %d\n", __func__, k, RoundedUpSwathSizeBytesC);
+		dml_print("DML::%s: k=%0d DETBufferSizeInKByte = %d\n", __func__, k, DETBufferSizeInKByte[k]);
+		dml_print("DML::%s: k=%0d DETBufferSizeY = %d\n", __func__, k, DETBufferSizeY[k]);
+		dml_print("DML::%s: k=%0d DETBufferSizeC = %d\n", __func__, k, DETBufferSizeC[k]);
+		dml_print("DML::%s: k=%0d ViewportSizeSupportPerSurface = %d\n", __func__, k,
+				ViewportSizeSupportPerSurface[k]);
+#endif
+
+	}
+} // CalculateSwathAndDETConfiguration
+
+void dml32_CalculateSwathWidth(
+		bool				ForceSingleDPP,
+		unsigned int			NumberOfActiveSurfaces,
+		enum source_format_class	SourcePixelFormat[],
+		enum dm_rotation_angle		SourceRotation[],
+		bool				ViewportStationary[],
+		unsigned int			ViewportWidth[],
+		unsigned int			ViewportHeight[],
+		unsigned int			ViewportXStart[],
+		unsigned int			ViewportYStart[],
+		unsigned int			ViewportXStartC[],
+		unsigned int			ViewportYStartC[],
+		unsigned int			SurfaceWidthY[],
+		unsigned int			SurfaceWidthC[],
+		unsigned int			SurfaceHeightY[],
+		unsigned int			SurfaceHeightC[],
+		enum odm_combine_mode		ODMMode[],
+		unsigned int			BytePerPixY[],
+		unsigned int			BytePerPixC[],
+		unsigned int			Read256BytesBlockHeightY[],
+		unsigned int			Read256BytesBlockHeightC[],
+		unsigned int			Read256BytesBlockWidthY[],
+		unsigned int			Read256BytesBlockWidthC[],
+		unsigned int			BlendingAndTiming[],
+		unsigned int			HActive[],
+		double				HRatio[],
+		unsigned int			DPPPerSurface[],
+
+		/* Output */
+		double				SwathWidthdoubleDPPY[],
+		double				SwathWidthdoubleDPPC[],
+		double				SwathWidthY[], // per-pipe
+		double				SwathWidthC[], // per-pipe
+		unsigned int			MaximumSwathHeightY[],
+		unsigned int			MaximumSwathHeightC[],
+		unsigned int			swath_width_luma_ub[], // per-pipe
+		unsigned int			swath_width_chroma_ub[]) // per-pipe
+{
+	unsigned int k, j;
+	enum odm_combine_mode MainSurfaceODMMode;
+
+	unsigned int surface_width_ub_l;
+	unsigned int surface_height_ub_l;
+	unsigned int surface_width_ub_c;
+	unsigned int surface_height_ub_c;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: ForceSingleDPP = %d\n", __func__, ForceSingleDPP);
+	dml_print("DML::%s: NumberOfActiveSurfaces = %d\n", __func__, NumberOfActiveSurfaces);
+#endif
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (!IsVertical(SourceRotation[k]))
+			SwathWidthdoubleDPPY[k] = ViewportWidth[k];
+		else
+			SwathWidthdoubleDPPY[k] = ViewportHeight[k];
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d ViewportWidth=%d\n", __func__, k, ViewportWidth[k]);
+		dml_print("DML::%s: k=%d ViewportHeight=%d\n", __func__, k, ViewportHeight[k]);
+#endif
+
+		MainSurfaceODMMode = ODMMode[k];
+		for (j = 0; j < NumberOfActiveSurfaces; ++j) {
+			if (BlendingAndTiming[k] == j)
+				MainSurfaceODMMode = ODMMode[j];
+		}
+
+		if (ForceSingleDPP) {
+			SwathWidthY[k] = SwathWidthdoubleDPPY[k];
+		} else {
+			if (MainSurfaceODMMode == dm_odm_combine_mode_4to1) {
+				SwathWidthY[k] = dml_min(SwathWidthdoubleDPPY[k],
+						dml_round(HActive[k] / 4.0 * HRatio[k]));
+			} else if (MainSurfaceODMMode == dm_odm_combine_mode_2to1) {
+				SwathWidthY[k] = dml_min(SwathWidthdoubleDPPY[k],
+						dml_round(HActive[k] / 2.0 * HRatio[k]));
+			} else if (DPPPerSurface[k] == 2) {
+				SwathWidthY[k] = SwathWidthdoubleDPPY[k] / 2;
+			} else {
+				SwathWidthY[k] = SwathWidthdoubleDPPY[k];
+			}
+		}
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d HActive=%d\n", __func__, k, HActive[k]);
+		dml_print("DML::%s: k=%d HRatio=%f\n", __func__, k, HRatio[k]);
+		dml_print("DML::%s: k=%d MainSurfaceODMMode=%d\n", __func__, k, MainSurfaceODMMode);
+		dml_print("DML::%s: k=%d SwathWidthdoubleDPPY=%d\n", __func__, k, SwathWidthdoubleDPPY[k]);
+		dml_print("DML::%s: k=%d SwathWidthY=%d\n", __func__, k, SwathWidthY[k]);
+#endif
+
+		if (SourcePixelFormat[k] == dm_420_8 || SourcePixelFormat[k] == dm_420_10 ||
+				SourcePixelFormat[k] == dm_420_12) {
+			SwathWidthC[k] = SwathWidthY[k] / 2;
+			SwathWidthdoubleDPPC[k] = SwathWidthdoubleDPPY[k] / 2;
+		} else {
+			SwathWidthC[k] = SwathWidthY[k];
+			SwathWidthdoubleDPPC[k] = SwathWidthdoubleDPPY[k];
+		}
+
+		if (ForceSingleDPP == true) {
+			SwathWidthY[k] = SwathWidthdoubleDPPY[k];
+			SwathWidthC[k] = SwathWidthdoubleDPPC[k];
+		}
+
+		surface_width_ub_l  = dml_ceil(SurfaceWidthY[k], Read256BytesBlockWidthY[k]);
+		surface_height_ub_l = dml_ceil(SurfaceHeightY[k], Read256BytesBlockHeightY[k]);
+		surface_width_ub_c  = dml_ceil(SurfaceWidthC[k], Read256BytesBlockWidthC[k]);
+		surface_height_ub_c = dml_ceil(SurfaceHeightC[k], Read256BytesBlockHeightC[k]);
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d surface_width_ub_l=%0d\n", __func__, k, surface_width_ub_l);
+		dml_print("DML::%s: k=%d surface_height_ub_l=%0d\n", __func__, k, surface_height_ub_l);
+		dml_print("DML::%s: k=%d surface_width_ub_c=%0d\n", __func__, k, surface_width_ub_c);
+		dml_print("DML::%s: k=%d surface_height_ub_c=%0d\n", __func__, k, surface_height_ub_c);
+		dml_print("DML::%s: k=%d Read256BytesBlockWidthY=%0d\n", __func__, k, Read256BytesBlockWidthY[k]);
+		dml_print("DML::%s: k=%d Read256BytesBlockHeightY=%0d\n", __func__, k, Read256BytesBlockHeightY[k]);
+		dml_print("DML::%s: k=%d Read256BytesBlockWidthC=%0d\n", __func__, k, Read256BytesBlockWidthC[k]);
+		dml_print("DML::%s: k=%d Read256BytesBlockHeightC=%0d\n", __func__, k, Read256BytesBlockHeightC[k]);
+		dml_print("DML::%s: k=%d ViewportStationary=%0d\n", __func__, k, ViewportStationary[k]);
+		dml_print("DML::%s: k=%d DPPPerSurface=%0d\n", __func__, k, DPPPerSurface[k]);
+#endif
+
+		if (!IsVertical(SourceRotation[k])) {
+			MaximumSwathHeightY[k] = Read256BytesBlockHeightY[k];
+			MaximumSwathHeightC[k] = Read256BytesBlockHeightC[k];
+			if (ViewportStationary[k] && DPPPerSurface[k] == 1) {
+				swath_width_luma_ub[k] = dml_min(surface_width_ub_l,
+						dml_floor(ViewportXStart[k] +
+								SwathWidthY[k] +
+								Read256BytesBlockWidthY[k] - 1,
+								Read256BytesBlockWidthY[k]) -
+								dml_floor(ViewportXStart[k],
+								Read256BytesBlockWidthY[k]));
+			} else {
+				swath_width_luma_ub[k] = dml_min(surface_width_ub_l,
+						dml_ceil(SwathWidthY[k] - 1,
+								Read256BytesBlockWidthY[k]) +
+								Read256BytesBlockWidthY[k]);
+			}
+			if (BytePerPixC[k] > 0) {
+				if (ViewportStationary[k] && DPPPerSurface[k] == 1) {
+					swath_width_chroma_ub[k] = dml_min(surface_width_ub_c,
+							dml_floor(ViewportXStartC[k] + SwathWidthC[k] +
+									Read256BytesBlockWidthC[k] - 1,
+									Read256BytesBlockWidthC[k]) -
+									dml_floor(ViewportXStartC[k],
+									Read256BytesBlockWidthC[k]));
+				} else {
+					swath_width_chroma_ub[k] = dml_min(surface_width_ub_c,
+							dml_ceil(SwathWidthC[k] - 1,
+								Read256BytesBlockWidthC[k]) +
+								Read256BytesBlockWidthC[k]);
+				}
+			} else {
+				swath_width_chroma_ub[k] = 0;
+			}
+		} else {
+			MaximumSwathHeightY[k] = Read256BytesBlockWidthY[k];
+			MaximumSwathHeightC[k] = Read256BytesBlockWidthC[k];
+
+			if (ViewportStationary[k] && DPPPerSurface[k] == 1) {
+				swath_width_luma_ub[k] = dml_min(surface_height_ub_l, dml_floor(ViewportYStart[k] +
+						SwathWidthY[k] + Read256BytesBlockHeightY[k] - 1,
+						Read256BytesBlockHeightY[k]) -
+						dml_floor(ViewportYStart[k], Read256BytesBlockHeightY[k]));
+			} else {
+				swath_width_luma_ub[k] = dml_min(surface_height_ub_l, dml_ceil(SwathWidthY[k] - 1,
+						Read256BytesBlockHeightY[k]) + Read256BytesBlockHeightY[k]);
+			}
+			if (BytePerPixC[k] > 0) {
+				if (ViewportStationary[k] && DPPPerSurface[k] == 1) {
+					swath_width_chroma_ub[k] = dml_min(surface_height_ub_c,
+							dml_floor(ViewportYStartC[k] + SwathWidthC[k] +
+									Read256BytesBlockHeightC[k] - 1,
+									Read256BytesBlockHeightC[k]) -
+									dml_floor(ViewportYStartC[k],
+											Read256BytesBlockHeightC[k]));
+				} else {
+					swath_width_chroma_ub[k] = dml_min(surface_height_ub_c,
+							dml_ceil(SwathWidthC[k] - 1, Read256BytesBlockHeightC[k]) +
+							Read256BytesBlockHeightC[k]);
+				}
+			} else {
+				swath_width_chroma_ub[k] = 0;
+			}
+		}
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d swath_width_luma_ub=%0d\n", __func__, k, swath_width_luma_ub[k]);
+		dml_print("DML::%s: k=%d swath_width_chroma_ub=%0d\n", __func__, k, swath_width_chroma_ub[k]);
+		dml_print("DML::%s: k=%d MaximumSwathHeightY=%0d\n", __func__, k, MaximumSwathHeightY[k]);
+		dml_print("DML::%s: k=%d MaximumSwathHeightC=%0d\n", __func__, k, MaximumSwathHeightC[k]);
+#endif
+
+	}
+} // CalculateSwathWidth
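+
+/*
+ * Quick example of the ODM handling above (illustrative values): a 3840-wide,
+ * non-rotated viewport with HActive = 3840 and HRatio = 1.0 under
+ * dm_odm_combine_mode_2to1 gives
+ * SwathWidthY = min(3840, round(3840 / 2.0 * 1.0)) = 1920 per pipe, while a
+ * 2-DPP MPC split without ODM would simply use 3840 / 2 = 1920.
+ */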
+
+bool dml32_UnboundedRequest(enum unbounded_requesting_policy UseUnboundedRequestingFinal,
+		unsigned int TotalNumberOfActiveDPP,
+		bool NoChroma,
+		enum output_encoder_class Output)
+{
+	bool ret_val = false;
+
+	ret_val = (UseUnboundedRequestingFinal != dm_unbounded_requesting_disable &&
+			TotalNumberOfActiveDPP == 1 && NoChroma);
+	if (UseUnboundedRequestingFinal == dm_unbounded_requesting_edp_only && Output != dm_edp)
+		ret_val = false;
+	return ret_val;
+}
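+
+/*
+ * In other words (a reading of the checks above): with the default unbounded
+ * requesting policy, a single active DPP with a luma-only surface qualifies;
+ * with the edp_only policy the output must additionally be eDP, and
+ * dm_unbounded_requesting_disable turns the feature off entirely.
+ */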
+
+void dml32_CalculateDETBufferSize(
+		unsigned int DETSizeOverride[],
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		bool ForceSingleDPP,
+		unsigned int NumberOfActiveSurfaces,
+		bool UnboundedRequestEnabled,
+		unsigned int nomDETInKByte,
+		unsigned int MaxTotalDETInKByte,
+		unsigned int ConfigReturnBufferSizeInKByte,
+		unsigned int MinCompressedBufferSizeInKByte,
+		unsigned int CompressedBufferSegmentSizeInkByteFinal,
+		enum source_format_class SourcePixelFormat[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		unsigned int RoundedUpMaxSwathSizeBytesY[],
+		unsigned int RoundedUpMaxSwathSizeBytesC[],
+		unsigned int DPPPerSurface[],
+		/* Output */
+		unsigned int DETBufferSizeInKByte[],
+		unsigned int *CompressedBufferSizeInkByte)
+{
+	unsigned int DETBufferSizePoolInKByte;
+	unsigned int NextDETBufferPieceInKByte;
+	bool DETPieceAssignedToThisSurfaceAlready[DC__NUM_DPP__MAX];
+	bool NextPotentialSurfaceToAssignDETPieceFound;
+	unsigned int NextSurfaceToAssignDETPiece;
+	double TotalBandwidth;
+	double BandwidthOfSurfacesNotAssignedDETPiece;
+	unsigned int max_minDET;
+	unsigned int minDET;
+	unsigned int minDET_pipe;
+	unsigned int j, k;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: ForceSingleDPP = %d\n", __func__, ForceSingleDPP);
+	dml_print("DML::%s: nomDETInKByte = %d\n", __func__, nomDETInKByte);
+	dml_print("DML::%s: NumberOfActiveSurfaces = %d\n", __func__, NumberOfActiveSurfaces);
+	dml_print("DML::%s: UnboundedRequestEnabled = %d\n", __func__, UnboundedRequestEnabled);
+	dml_print("DML::%s: MaxTotalDETInKByte = %d\n", __func__, MaxTotalDETInKByte);
+	dml_print("DML::%s: ConfigReturnBufferSizeInKByte = %d\n", __func__, ConfigReturnBufferSizeInKByte);
+	dml_print("DML::%s: MinCompressedBufferSizeInKByte = %d\n", __func__, MinCompressedBufferSizeInKByte);
+	dml_print("DML::%s: CompressedBufferSegmentSizeInkByteFinal = %d\n", __func__,
+			CompressedBufferSegmentSizeInkByteFinal);
+#endif
+
+	// Note: the default DET size is used if it already fits 2 full swaths
+	if (UnboundedRequestEnabled) {
+		if (DETSizeOverride[0] > 0) {
+			DETBufferSizeInKByte[0] = DETSizeOverride[0];
+		} else {
+			DETBufferSizeInKByte[0] = dml_max(nomDETInKByte, dml_ceil(2.0 *
+					((double) RoundedUpMaxSwathSizeBytesY[0] +
+							(double) RoundedUpMaxSwathSizeBytesC[0]) / 1024.0, 64.0));
+		}
+		*CompressedBufferSizeInkByte = ConfigReturnBufferSizeInKByte - DETBufferSizeInKByte[0];
+	} else {
+		DETBufferSizePoolInKByte = MaxTotalDETInKByte;
+		for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+			DETBufferSizeInKByte[k] = nomDETInKByte;
+			if (SourcePixelFormat[k] == dm_420_8 || SourcePixelFormat[k] == dm_420_10 ||
+					SourcePixelFormat[k] == dm_420_12) {
+				max_minDET = nomDETInKByte - 64;
+			} else {
+				max_minDET = nomDETInKByte;
+			}
+			minDET = 128;
+			minDET_pipe = 0;
+
+			// add DET resource until it can hold 2 full swaths
+			while (minDET <= max_minDET && minDET_pipe == 0) {
+				if (2.0 * ((double) RoundedUpMaxSwathSizeBytesY[k] +
+						(double) RoundedUpMaxSwathSizeBytesC[k]) / 1024.0 <= minDET)
+					minDET_pipe = minDET;
+				minDET = minDET + 64;
+			}
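+
+			/*
+			 * Example (illustrative numbers): with
+			 * RoundedUpMaxSwathSizeBytesY = 65536 and
+			 * RoundedUpMaxSwathSizeBytesC = 32768, two full swaths need
+			 * 2 * 96 KiB = 192 KiB, so the loop above settles on
+			 * minDET_pipe = 192 (assuming max_minDET allows it).
+			 */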
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d minDET        = %d\n", __func__, k, minDET);
+			dml_print("DML::%s: k=%0d max_minDET    = %d\n", __func__, k, max_minDET);
+			dml_print("DML::%s: k=%0d minDET_pipe   = %d\n", __func__, k, minDET_pipe);
+			dml_print("DML::%s: k=%0d RoundedUpMaxSwathSizeBytesY = %d\n", __func__, k,
+					RoundedUpMaxSwathSizeBytesY[k]);
+			dml_print("DML::%s: k=%0d RoundedUpMaxSwathSizeBytesC = %d\n", __func__, k,
+					RoundedUpMaxSwathSizeBytesC[k]);
+#endif
+
+			if (minDET_pipe == 0) {
+				minDET_pipe = dml_max(128, dml_ceil(((double)RoundedUpMaxSwathSizeBytesY[k] +
+						(double)RoundedUpMaxSwathSizeBytesC[k]) / 1024.0, 64));
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: k=%0d minDET_pipe = %d (assume each plane take half DET)\n",
+						__func__, k, minDET_pipe);
+#endif
+			}
+
+			if (UseMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe) {
+				DETBufferSizeInKByte[k] = 0;
+			} else if (DETSizeOverride[k] > 0) {
+				DETBufferSizeInKByte[k] = DETSizeOverride[k];
+				DETBufferSizePoolInKByte = DETBufferSizePoolInKByte -
+						(ForceSingleDPP ? 1 : DPPPerSurface[k]) * DETSizeOverride[k];
+			} else if ((ForceSingleDPP ? 1 : DPPPerSurface[k]) * minDET_pipe <= DETBufferSizePoolInKByte) {
+				DETBufferSizeInKByte[k] = minDET_pipe;
+				DETBufferSizePoolInKByte = DETBufferSizePoolInKByte -
+						(ForceSingleDPP ? 1 : DPPPerSurface[k]) * minDET_pipe;
+			}
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%d DPPPerSurface = %d\n", __func__, k, DPPPerSurface[k]);
+			dml_print("DML::%s: k=%d DETSizeOverride = %d\n", __func__, k, DETSizeOverride[k]);
+			dml_print("DML::%s: k=%d DETBufferSizeInKByte = %d\n", __func__, k, DETBufferSizeInKByte[k]);
+			dml_print("DML::%s: DETBufferSizePoolInKByte = %d\n", __func__, DETBufferSizePoolInKByte);
+#endif
+		}
+
+		TotalBandwidth = 0;
+		for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+			if (UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe)
+				TotalBandwidth = TotalBandwidth + ReadBandwidthLuma[k] + ReadBandwidthChroma[k];
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: --- Before bandwidth adjustment ---\n", __func__);
+		for (k = 0; k < NumberOfActiveSurfaces; ++k)
+			dml_print("DML::%s: k=%d DETBufferSizeInKByte   = %d\n", __func__, k, DETBufferSizeInKByte[k]);
+		dml_print("DML::%s: --- DET allocation with bandwidth ---\n", __func__);
+		dml_print("DML::%s: TotalBandwidth = %f\n", __func__, TotalBandwidth);
+#endif
+		BandwidthOfSurfacesNotAssignedDETPiece = TotalBandwidth;
+		for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+
+			if (UseMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe) {
+				DETPieceAssignedToThisSurfaceAlready[k] = true;
+			} else if (DETSizeOverride[k] > 0 || (((double) (ForceSingleDPP ? 1 : DPPPerSurface[k]) *
+					(double) DETBufferSizeInKByte[k] / (double) MaxTotalDETInKByte) >=
+					((ReadBandwidthLuma[k] + ReadBandwidthChroma[k]) / TotalBandwidth))) {
+				DETPieceAssignedToThisSurfaceAlready[k] = true;
+				BandwidthOfSurfacesNotAssignedDETPiece = BandwidthOfSurfacesNotAssignedDETPiece -
+						ReadBandwidthLuma[k] - ReadBandwidthChroma[k];
+			} else {
+				DETPieceAssignedToThisSurfaceAlready[k] = false;
+			}
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%d DETPieceAssignedToThisSurfaceAlready = %d\n", __func__, k,
+					DETPieceAssignedToThisSurfaceAlready[k]);
+			dml_print("DML::%s: k=%d BandwidthOfSurfacesNotAssignedDETPiece = %f\n", __func__, k,
+					BandwidthOfSurfacesNotAssignedDETPiece);
+#endif
+		}
+
+		for (j = 0; j < NumberOfActiveSurfaces; ++j) {
+			NextPotentialSurfaceToAssignDETPieceFound = false;
+			NextSurfaceToAssignDETPiece = 0;
+
+			for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: j=%d k=%d, ReadBandwidthLuma[k] = %f\n", __func__, j, k,
+						ReadBandwidthLuma[k]);
+				dml_print("DML::%s: j=%d k=%d, ReadBandwidthChroma[k] = %f\n", __func__, j, k,
+						ReadBandwidthChroma[k]);
+				dml_print("DML::%s: j=%d k=%d, ReadBandwidthLuma[Next] = %f\n", __func__, j, k,
+						ReadBandwidthLuma[NextSurfaceToAssignDETPiece]);
+				dml_print("DML::%s: j=%d k=%d, ReadBandwidthChroma[Next] = %f\n", __func__, j, k,
+						ReadBandwidthChroma[NextSurfaceToAssignDETPiece]);
+				dml_print("DML::%s: j=%d k=%d, NextSurfaceToAssignDETPiece = %d\n", __func__, j, k,
+						NextSurfaceToAssignDETPiece);
+#endif
+				if (!DETPieceAssignedToThisSurfaceAlready[k] &&
+						(!NextPotentialSurfaceToAssignDETPieceFound ||
+						ReadBandwidthLuma[k] + ReadBandwidthChroma[k] <
+						ReadBandwidthLuma[NextSurfaceToAssignDETPiece] +
+						ReadBandwidthChroma[NextSurfaceToAssignDETPiece])) {
+					NextSurfaceToAssignDETPiece = k;
+					NextPotentialSurfaceToAssignDETPieceFound = true;
+				}
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: j=%d k=%d, DETPieceAssignedToThisSurfaceAlready = %d\n",
+						__func__, j, k, DETPieceAssignedToThisSurfaceAlready[k]);
+				dml_print("DML::%s: j=%d k=%d, NextPotentialSurfaceToAssignDETPieceFound = %d\n",
+						__func__, j, k, NextPotentialSurfaceToAssignDETPieceFound);
+#endif
+			}
+
+			if (NextPotentialSurfaceToAssignDETPieceFound) {
+				// Note: the commented-out calculation below shows the banker's rounding
+				// behavior in VBA and the fact that the DET buffer size can vary due to
+				// precision issues
+				//
+				//double tmp1 =  ((double) DETBufferSizePoolInKByte *
+				// (ReadBandwidthLuma[NextSurfaceToAssignDETPiece] +
+				// ReadBandwidthChroma[NextSurfaceToAssignDETPiece]) /
+				// BandwidthOfSurfacesNotAssignedDETPiece /
+				// ((ForceSingleDPP ? 1 : DPPPerSurface[NextSurfaceToAssignDETPiece]) * 64.0));
+				//double tmp2 =  dml_round((double) DETBufferSizePoolInKByte *
+				// (ReadBandwidthLuma[NextSurfaceToAssignDETPiece] +
+				// ReadBandwidthChroma[NextSurfaceToAssignDETPiece]) /
+				// BandwidthOfSurfacesNotAssignedDETPiece /
+				// ((ForceSingleDPP ? 1 : DPPPerSurface[NextSurfaceToAssignDETPiece]) * 64.0));
+				//
+				//dml_print("DML::%s: j=%d, tmp1 = %f\n", __func__, j, tmp1);
+				//dml_print("DML::%s: j=%d, tmp2 = %f\n", __func__, j, tmp2);
+
+				NextDETBufferPieceInKByte = dml_min(
+					dml_round((double) DETBufferSizePoolInKByte *
+						(ReadBandwidthLuma[NextSurfaceToAssignDETPiece] +
+						ReadBandwidthChroma[NextSurfaceToAssignDETPiece]) /
+						BandwidthOfSurfacesNotAssignedDETPiece /
+						((ForceSingleDPP ? 1 :
+								DPPPerSurface[NextSurfaceToAssignDETPiece]) * 64.0)) *
+						(ForceSingleDPP ? 1 :
+								DPPPerSurface[NextSurfaceToAssignDETPiece]) * 64.0,
+						dml_floor((double) DETBufferSizePoolInKByte,
+						(ForceSingleDPP ? 1 :
+								DPPPerSurface[NextSurfaceToAssignDETPiece]) * 64.0));
+
+				// Above calculation can assign the entire DET buffer allocation to a single pipe.
+				// We should limit the per-pipe DET size to the nominal / max per pipe.
+				if (NextDETBufferPieceInKByte > nomDETInKByte *
+						(ForceSingleDPP ? 1 : DPPPerSurface[NextSurfaceToAssignDETPiece])) {
+					if (DETBufferSizeInKByte[NextSurfaceToAssignDETPiece] < nomDETInKByte *
+							(ForceSingleDPP ? 1 :
+									DPPPerSurface[NextSurfaceToAssignDETPiece])) {
+						NextDETBufferPieceInKByte = nomDETInKByte *
+								(ForceSingleDPP ? 1 :
+									DPPPerSurface[NextSurfaceToAssignDETPiece]) -
+								DETBufferSizeInKByte[NextSurfaceToAssignDETPiece];
+					} else {
+						// Case where DETBufferSizeInKByte[NextSurfaceToAssignDETPiece]
+						// already has the max per-pipe value
+						NextDETBufferPieceInKByte = 0;
+					}
+				}
+
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: j=%0d, DETBufferSizePoolInKByte = %d\n", __func__, j,
+					DETBufferSizePoolInKByte);
+				dml_print("DML::%s: j=%0d, NextSurfaceToAssignDETPiece = %d\n", __func__, j,
+					NextSurfaceToAssignDETPiece);
+				dml_print("DML::%s: j=%0d, ReadBandwidthLuma[%0d] = %f\n", __func__, j,
+					NextSurfaceToAssignDETPiece, ReadBandwidthLuma[NextSurfaceToAssignDETPiece]);
+				dml_print("DML::%s: j=%0d, ReadBandwidthChroma[%0d] = %f\n", __func__, j,
+					NextSurfaceToAssignDETPiece, ReadBandwidthChroma[NextSurfaceToAssignDETPiece]);
+				dml_print("DML::%s: j=%0d, BandwidthOfSurfacesNotAssignedDETPiece = %f\n",
+					__func__, j, BandwidthOfSurfacesNotAssignedDETPiece);
+				dml_print("DML::%s: j=%0d, NextDETBufferPieceInKByte = %d\n", __func__, j,
+					NextDETBufferPieceInKByte);
+				dml_print("DML::%s: j=%0d, DETBufferSizeInKByte[%0d] increases from %0d ",
+					__func__, j, NextSurfaceToAssignDETPiece,
+					DETBufferSizeInKByte[NextSurfaceToAssignDETPiece]);
+#endif
+
+				DETBufferSizeInKByte[NextSurfaceToAssignDETPiece] =
+						DETBufferSizeInKByte[NextSurfaceToAssignDETPiece]
+						+ NextDETBufferPieceInKByte
+						/ (ForceSingleDPP ? 1 : DPPPerSurface[NextSurfaceToAssignDETPiece]);
+#ifdef __DML_VBA_DEBUG__
+				dml_print("to %0d\n", DETBufferSizeInKByte[NextSurfaceToAssignDETPiece]);
+#endif
+
+				DETBufferSizePoolInKByte = DETBufferSizePoolInKByte - NextDETBufferPieceInKByte;
+				DETPieceAssignedToThisSurfaceAlready[NextSurfaceToAssignDETPiece] = true;
+				BandwidthOfSurfacesNotAssignedDETPiece = BandwidthOfSurfacesNotAssignedDETPiece -
+						(ReadBandwidthLuma[NextSurfaceToAssignDETPiece] +
+								ReadBandwidthChroma[NextSurfaceToAssignDETPiece]);
+			}
+		}
+		*CompressedBufferSizeInkByte = MinCompressedBufferSizeInKByte;
+	}
+	*CompressedBufferSizeInkByte = *CompressedBufferSizeInkByte * CompressedBufferSegmentSizeInkByteFinal / 64;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: --- After bandwidth adjustment ---\n", __func__);
+	dml_print("DML::%s: CompressedBufferSizeInkByte = %d\n", __func__, *CompressedBufferSizeInkByte);
+	for (uint k = 0; k < NumberOfActiveSurfaces; ++k) {
+		dml_print("DML::%s: k=%d DETBufferSizeInKByte = %d (TotalReadBandWidth=%f)\n",
+				__func__, k, DETBufferSizeInKByte[k], ReadBandwidthLuma[k] + ReadBandwidthChroma[k]);
+	}
+#endif
+} // CalculateDETBufferSize
+
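+/*
+ * Select the ODM combine mode for a surface: stay uncombined when possible and
+ * fall back to 2:1 or 4:1 combine when the required DISPCLK exceeds the state
+ * DISPCLK, when DSC needs the active width split across more DSC units, or when
+ * the combine policy forces it, provided enough DPPs are free.  4:1 is only
+ * considered for outputs other than HDMI/DP/eDP, and 2:1 is not used for HDMI.
+ */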
+void dml32_CalculateODMMode(
+		unsigned int MaximumPixelsPerLinePerDSCUnit,
+		unsigned int HActive,
+		enum output_encoder_class Output,
+		enum odm_combine_policy ODMUse,
+		double StateDispclk,
+		double MaxDispclk,
+		bool DSCEnable,
+		unsigned int TotalNumberOfActiveDPP,
+		unsigned int MaxNumDPP,
+		double PixelClock,
+		double DISPCLKDPPCLKDSCCLKDownSpreading,
+		double DISPCLKRampingMargin,
+		double DISPCLKDPPCLKVCOSpeed,
+
+		/* Output */
+		bool *TotalAvailablePipesSupport,
+		unsigned int *NumberOfDPP,
+		enum odm_combine_mode *ODMMode,
+		double *RequiredDISPCLKPerSurface)
+{
+
+	double SurfaceRequiredDISPCLKWithoutODMCombine;
+	double SurfaceRequiredDISPCLKWithODMCombineTwoToOne;
+	double SurfaceRequiredDISPCLKWithODMCombineFourToOne;
+
+	SurfaceRequiredDISPCLKWithoutODMCombine = dml32_CalculateRequiredDispclk(dm_odm_combine_mode_disabled,
+			PixelClock, DISPCLKDPPCLKDSCCLKDownSpreading, DISPCLKRampingMargin, DISPCLKDPPCLKVCOSpeed,
+			MaxDispclk);
+	SurfaceRequiredDISPCLKWithODMCombineTwoToOne = dml32_CalculateRequiredDispclk(dm_odm_combine_mode_2to1,
+			PixelClock, DISPCLKDPPCLKDSCCLKDownSpreading, DISPCLKRampingMargin, DISPCLKDPPCLKVCOSpeed,
+			MaxDispclk);
+	SurfaceRequiredDISPCLKWithODMCombineFourToOne = dml32_CalculateRequiredDispclk(dm_odm_combine_mode_4to1,
+			PixelClock, DISPCLKDPPCLKDSCCLKDownSpreading, DISPCLKRampingMargin, DISPCLKDPPCLKVCOSpeed,
+			MaxDispclk);
+	*TotalAvailablePipesSupport = true;
+	*ODMMode = dm_odm_combine_mode_disabled; // initialize as disabled
+
+	if (ODMUse == dm_odm_combine_policy_none)
+		*ODMMode = dm_odm_combine_mode_disabled;
+
+	*RequiredDISPCLKPerSurface = SurfaceRequiredDISPCLKWithoutODMCombine;
+	*NumberOfDPP = 0;
+
+	// FIXME: check the ODMUse == "" condition - does it mean bypass,
+	// or does Gabriel mean something like "don't care"?
+	// (ODMUse == "" || ODMUse == "CombineAsNeeded")
+
+	if (!(Output == dm_hdmi || Output == dm_dp || Output == dm_edp) && (ODMUse == dm_odm_combine_policy_4to1 ||
+			((SurfaceRequiredDISPCLKWithODMCombineTwoToOne > StateDispclk ||
+					(DSCEnable && (HActive > 2 * MaximumPixelsPerLinePerDSCUnit)))))) {
+		if (TotalNumberOfActiveDPP + 4 <= MaxNumDPP) {
+			*ODMMode = dm_odm_combine_mode_4to1;
+			*RequiredDISPCLKPerSurface = SurfaceRequiredDISPCLKWithODMCombineFourToOne;
+			*NumberOfDPP = 4;
+		} else {
+			*TotalAvailablePipesSupport = false;
+		}
+	} else if (Output != dm_hdmi && (ODMUse == dm_odm_combine_policy_2to1 ||
+			(((SurfaceRequiredDISPCLKWithoutODMCombine > StateDispclk &&
+					SurfaceRequiredDISPCLKWithODMCombineTwoToOne <= StateDispclk) ||
+					(DSCEnable && (HActive > MaximumPixelsPerLinePerDSCUnit)))))) {
+		if (TotalNumberOfActiveDPP + 2 <= MaxNumDPP) {
+			*ODMMode = dm_odm_combine_mode_2to1;
+			*RequiredDISPCLKPerSurface = SurfaceRequiredDISPCLKWithODMCombineTwoToOne;
+			*NumberOfDPP = 2;
+		} else {
+			*TotalAvailablePipesSupport = false;
+		}
+	} else {
+		if (TotalNumberOfActiveDPP + 1 <= MaxNumDPP)
+			*NumberOfDPP = 1;
+		else
+			*TotalAvailablePipesSupport = false;
+	}
+}
+
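+/*
+ * Required DISPCLK for a given ODM mode: the pixel clock divided by the ODM
+ * factor, with downspread and ramping margin applied and rounded up to DFS
+ * granularity.  If the ramping margin would push the result above MaxDispclk,
+ * the clock is clamped to MaxDispclk; if even the value without margin exceeds
+ * MaxDispclk, that value is returned unchanged.
+ */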
+double dml32_CalculateRequiredDispclk(
+		enum odm_combine_mode ODMMode,
+		double PixelClock,
+		double DISPCLKDPPCLKDSCCLKDownSpreading,
+		double DISPCLKRampingMargin,
+		double DISPCLKDPPCLKVCOSpeed,
+		double MaxDispclk)
+{
+	double RequiredDispclk = 0.;
+	double PixelClockAfterODM;
+	double DISPCLKWithRampingRoundedToDFSGranularity;
+	double DISPCLKWithoutRampingRoundedToDFSGranularity;
+	double MaxDispclkRoundedDownToDFSGranularity;
+
+	if (ODMMode == dm_odm_combine_mode_4to1)
+		PixelClockAfterODM = PixelClock / 4;
+	else if (ODMMode == dm_odm_combine_mode_2to1)
+		PixelClockAfterODM = PixelClock / 2;
+	else
+		PixelClockAfterODM = PixelClock;
+
+
+	DISPCLKWithRampingRoundedToDFSGranularity = dml32_RoundToDFSGranularity(
+			PixelClockAfterODM * (1 + DISPCLKDPPCLKDSCCLKDownSpreading / 100)
+					* (1 + DISPCLKRampingMargin / 100), 1, DISPCLKDPPCLKVCOSpeed);
+
+	DISPCLKWithoutRampingRoundedToDFSGranularity = dml32_RoundToDFSGranularity(
+			PixelClockAfterODM * (1 + DISPCLKDPPCLKDSCCLKDownSpreading / 100), 1, DISPCLKDPPCLKVCOSpeed);
+
+	MaxDispclkRoundedDownToDFSGranularity = dml32_RoundToDFSGranularity(MaxDispclk, 0, DISPCLKDPPCLKVCOSpeed);
+
+	if (DISPCLKWithoutRampingRoundedToDFSGranularity > MaxDispclkRoundedDownToDFSGranularity)
+		RequiredDispclk = DISPCLKWithoutRampingRoundedToDFSGranularity;
+	else if (DISPCLKWithRampingRoundedToDFSGranularity > MaxDispclkRoundedDownToDFSGranularity)
+		RequiredDispclk = MaxDispclkRoundedDownToDFSGranularity;
+	else
+		RequiredDispclk = DISPCLKWithRampingRoundedToDFSGranularity;
+
+	return RequiredDispclk;
+}
+
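+/*
+ * Round a clock to a frequency the DFS can generate, i.e. (4 * VCOSpeed) / N for
+ * an integer divider N: round_up selects the nearest achievable frequency at or
+ * above Clock, otherwise the nearest one at or below.
+ */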
+double dml32_RoundToDFSGranularity(double Clock, bool round_up, double VCOSpeed)
+{
+	if (Clock <= 0.0)
+		return 0.0;
+
+	if (round_up)
+		return VCOSpeed * 4.0 / dml_floor(VCOSpeed * 4.0 / Clock, 1.0);
+	else
+		return VCOSpeed * 4.0 / dml_ceil(VCOSpeed * 4.0 / Clock, 1.0);
+}
+
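+/*
+ * Pick the output link configuration for the main surface.  HDMI bpp follows
+ * directly from the (capped) PHY clock; for DP/eDP/DP2.0 the link rates the PHY
+ * clock supports (HBR/HBR2/HBR3 or UHBR10/13.5/20) are tried from lowest to
+ * highest until a valid output bpp is found, enabling DSC (and FEC where the
+ * link requires it) when DSC is allowed and the uncompressed format does not
+ * fit.
+ */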
+void dml32_CalculateOutputLink(
+		double PHYCLKPerState,
+		double PHYCLKD18PerState,
+		double PHYCLKD32PerState,
+		double Downspreading,
+		bool IsMainSurfaceUsingTheIndicatedTiming,
+		enum output_encoder_class Output,
+		enum output_format_class OutputFormat,
+		unsigned int HTotal,
+		unsigned int HActive,
+		double PixelClockBackEnd,
+		double ForcedOutputLinkBPP,
+		unsigned int DSCInputBitPerComponent,
+		unsigned int NumberOfDSCSlices,
+		double AudioSampleRate,
+		unsigned int AudioSampleLayout,
+		enum odm_combine_mode ODMModeNoDSC,
+		enum odm_combine_mode ODMModeDSC,
+		bool DSCEnable,
+		unsigned int OutputLinkDPLanes,
+		enum dm_output_link_dp_rate OutputLinkDPRate,
+
+		/* Output */
+		bool *RequiresDSC,
+		double *RequiresFEC,
+		double  *OutBpp,
+		enum dm_output_type *OutputType,
+		enum dm_output_rate *OutputRate,
+		unsigned int *RequiredSlots)
+{
+	bool LinkDSCEnable;
+	unsigned int dummy;
+	*RequiresDSC = false;
+	*RequiresFEC = false;
+	*OutBpp = 0;
+	*OutputType = dm_output_type_unknown;
+	*OutputRate = dm_output_rate_unknown;
+
+	if (IsMainSurfaceUsingTheIndicatedTiming) {
+		if (Output == dm_hdmi) {
+			*RequiresDSC = false;
+			*RequiresFEC = false;
+			*OutBpp = dml32_TruncToValidBPP(dml_min(600, PHYCLKPerState) * 10, 3, HTotal, HActive,
+					PixelClockBackEnd, ForcedOutputLinkBPP, false, Output, OutputFormat,
+					DSCInputBitPerComponent, NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+					ODMModeNoDSC, ODMModeDSC, &dummy);
+			//OutputTypeAndRate = "HDMI";
+			*OutputType = dm_output_type_hdmi;
+
+		} else if (Output == dm_dp || Output == dm_dp2p0 || Output == dm_edp) {
+			if (DSCEnable == true) {
+				*RequiresDSC = true;
+				LinkDSCEnable = true;
+				if (Output == dm_dp || Output == dm_dp2p0)
+					*RequiresFEC = true;
+				else
+					*RequiresFEC = false;
+			} else {
+				*RequiresDSC = false;
+				LinkDSCEnable = false;
+				if (Output == dm_dp2p0)
+					*RequiresFEC = true;
+				else
+					*RequiresFEC = false;
+			}
+			if (Output == dm_dp2p0) {
+				*OutBpp = 0;
+				if ((OutputLinkDPRate == dm_dp_rate_na || OutputLinkDPRate == dm_dp_rate_uhbr10) &&
+						PHYCLKD32PerState >= 10000 / 32) {
+					*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 10000,
+							OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+							ForcedOutputLinkBPP, LinkDSCEnable, Output, OutputFormat,
+							DSCInputBitPerComponent, NumberOfDSCSlices, AudioSampleRate,
+							AudioSampleLayout, ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					if (*OutBpp == 0 && PHYCLKD32PerState < 13500 / 32 && DSCEnable == true &&
+							ForcedOutputLinkBPP == 0) {
+						*RequiresDSC = true;
+						LinkDSCEnable = true;
+						*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 10000,
+								OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+								ForcedOutputLinkBPP, LinkDSCEnable, Output,
+								OutputFormat, DSCInputBitPerComponent,
+								NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+								ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					}
+					//OutputTypeAndRate = Output & " UHBR10";
+					*OutputType = dm_output_type_dp2p0;
+					*OutputRate = dm_output_rate_dp_rate_uhbr10;
+				}
+				if ((OutputLinkDPRate == dm_dp_rate_na || OutputLinkDPRate == dm_dp_rate_uhbr13p5) &&
+						*OutBpp == 0 && PHYCLKD32PerState >= 13500 / 32) {
+					*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 13500,
+							OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+							ForcedOutputLinkBPP, LinkDSCEnable, Output, OutputFormat,
+							DSCInputBitPerComponent, NumberOfDSCSlices, AudioSampleRate,
+							AudioSampleLayout, ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+
+					if (*OutBpp == 0 && PHYCLKD32PerState < 20000 / 32 && DSCEnable == true &&
+							ForcedOutputLinkBPP == 0) {
+						*RequiresDSC = true;
+						LinkDSCEnable = true;
+						*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 13500,
+								OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+								ForcedOutputLinkBPP, LinkDSCEnable, Output,
+								OutputFormat, DSCInputBitPerComponent,
+								NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+								ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					}
+					//OutputTypeAndRate = Output & " UHBR13p5";
+					*OutputType = dm_output_type_dp2p0;
+					*OutputRate = dm_output_rate_dp_rate_uhbr13p5;
+				}
+				if ((OutputLinkDPRate == dm_dp_rate_na || OutputLinkDPRate == dm_dp_rate_uhbr20) &&
+						*OutBpp == 0 && PHYCLKD32PerState >= 20000 / 32) {
+					*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 20000,
+							OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+							ForcedOutputLinkBPP, LinkDSCEnable, Output, OutputFormat,
+							DSCInputBitPerComponent, NumberOfDSCSlices, AudioSampleRate,
+							AudioSampleLayout, ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					if (*OutBpp == 0 && DSCEnable == true && ForcedOutputLinkBPP == 0) {
+						*RequiresDSC = true;
+						LinkDSCEnable = true;
+						*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 20000,
+								OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+								ForcedOutputLinkBPP, LinkDSCEnable, Output,
+								OutputFormat, DSCInputBitPerComponent,
+								NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+								ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					}
+					//OutputTypeAndRate = Output & " UHBR20";
+					*OutputType = dm_output_type_dp2p0;
+					*OutputRate = dm_output_rate_dp_rate_uhbr20;
+				}
+			} else {
+				*OutBpp = 0;
+				if ((OutputLinkDPRate == dm_dp_rate_na || OutputLinkDPRate == dm_dp_rate_hbr) &&
+						PHYCLKPerState >= 270) {
+					*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 2700,
+							OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+							ForcedOutputLinkBPP, LinkDSCEnable, Output, OutputFormat,
+							DSCInputBitPerComponent, NumberOfDSCSlices, AudioSampleRate,
+							AudioSampleLayout, ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					if (*OutBpp == 0 && PHYCLKPerState < 540 && DSCEnable == true &&
+							ForcedOutputLinkBPP == 0) {
+						*RequiresDSC = true;
+						LinkDSCEnable = true;
+						if (Output == dm_dp)
+							*RequiresFEC = true;
+						*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 2700,
+								OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+								ForcedOutputLinkBPP, LinkDSCEnable, Output,
+								OutputFormat, DSCInputBitPerComponent,
+								NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+								ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					}
+					//OutputTypeAndRate = Output & " HBR";
+					*OutputType = (Output == dm_dp) ? dm_output_type_dp : dm_output_type_edp;
+					*OutputRate = dm_output_rate_dp_rate_hbr;
+				}
+				if ((OutputLinkDPRate == dm_dp_rate_na || OutputLinkDPRate == dm_dp_rate_hbr2) &&
+						*OutBpp == 0 && PHYCLKPerState >= 540) {
+					*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 5400,
+							OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+							ForcedOutputLinkBPP, LinkDSCEnable, Output, OutputFormat,
+							DSCInputBitPerComponent, NumberOfDSCSlices, AudioSampleRate,
+							AudioSampleLayout, ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+
+					if (*OutBpp == 0 && PHYCLKPerState < 810 && DSCEnable == true &&
+							ForcedOutputLinkBPP == 0) {
+						*RequiresDSC = true;
+						LinkDSCEnable = true;
+						if (Output == dm_dp)
+							*RequiresFEC = true;
+
+						*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 5400,
+								OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+								ForcedOutputLinkBPP, LinkDSCEnable, Output,
+								OutputFormat, DSCInputBitPerComponent,
+								NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+								ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					}
+					//OutputTypeAndRate = Output & " HBR2";
+					*OutputType = (Output == dm_dp) ? dm_output_type_dp : dm_output_type_edp;
+					*OutputRate = dm_output_rate_dp_rate_hbr2;
+				}
+				if ((OutputLinkDPRate == dm_dp_rate_na || OutputLinkDPRate == dm_dp_rate_hbr3) && *OutBpp == 0 && PHYCLKPerState >= 810) {
+					*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 8100,
+							OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+							ForcedOutputLinkBPP, LinkDSCEnable, Output,
+							OutputFormat, DSCInputBitPerComponent, NumberOfDSCSlices,
+							AudioSampleRate, AudioSampleLayout, ODMModeNoDSC, ODMModeDSC,
+							RequiredSlots);
+
+					if (*OutBpp == 0 && DSCEnable == true && ForcedOutputLinkBPP == 0) {
+						*RequiresDSC = true;
+						LinkDSCEnable = true;
+						if (Output == dm_dp)
+							*RequiresFEC = true;
+
+						*OutBpp = dml32_TruncToValidBPP((1 - Downspreading / 100) * 8100,
+								OutputLinkDPLanes, HTotal, HActive, PixelClockBackEnd,
+								ForcedOutputLinkBPP, LinkDSCEnable, Output,
+								OutputFormat, DSCInputBitPerComponent,
+								NumberOfDSCSlices, AudioSampleRate, AudioSampleLayout,
+								ODMModeNoDSC, ODMModeDSC, RequiredSlots);
+					}
+					//OutputTypeAndRate = Output & " HBR3";
+					*OutputType = (Output == dm_dp) ? dm_output_type_dp : dm_output_type_edp;
+					*OutputRate = dm_output_rate_dp_rate_hbr3;
+				}
+			}
+		}
+	}
+}
+
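+/*
+ * Derive the per-surface DPPCLKs and the global DPPCLK: apply downspread to each
+ * surface's single-DPP requirement divided across its DPPs, take the maximum as
+ * the global clock (rounded up to DFS granularity), then snap each surface's
+ * DPPCLK up to a multiple of GlobalDPPCLK / 255.
+ */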
+void dml32_CalculateDPPCLK(
+		unsigned int NumberOfActiveSurfaces,
+		double DISPCLKDPPCLKDSCCLKDownSpreading,
+		double DISPCLKDPPCLKVCOSpeed,
+		double DPPCLKUsingSingleDPP[],
+		unsigned int DPPPerSurface[],
+
+		/* output */
+		double *GlobalDPPCLK,
+		double Dppclk[])
+{
+	unsigned int k;
+	*GlobalDPPCLK = 0;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		Dppclk[k] = DPPCLKUsingSingleDPP[k] / DPPPerSurface[k] * (1 + DISPCLKDPPCLKDSCCLKDownSpreading / 100);
+		*GlobalDPPCLK = dml_max(*GlobalDPPCLK, Dppclk[k]);
+	}
+	*GlobalDPPCLK = dml32_RoundToDFSGranularity(*GlobalDPPCLK, 1, DISPCLKDPPCLKVCOSpeed);
+	for (k = 0; k < NumberOfActiveSurfaces; ++k)
+		Dppclk[k] = *GlobalDPPCLK / 255 * dml_ceil(Dppclk[k] * 255.0 / *GlobalDPPCLK, 1.0);
+}
+
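+/*
+ * Truncate the desired output bpp to a value the link can carry: compute the
+ * maximum bpp the link bandwidth supports for the given format and ODM mode,
+ * then pick one of the standard uncompressed bpp steps that fits, or, with DSC,
+ * a 1/16th-bpp value between the format's minimum and maximum DSC bpp.  Returns
+ * BPP_INVALID when nothing fits or a forced bpp is not achievable.
+ */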
+double dml32_TruncToValidBPP(
+		double LinkBitRate,
+		unsigned int Lanes,
+		unsigned int HTotal,
+		unsigned int HActive,
+		double PixelClock,
+		double DesiredBPP,
+		bool DSCEnable,
+		enum output_encoder_class Output,
+		enum output_format_class Format,
+		unsigned int DSCInputBitPerComponent,
+		unsigned int DSCSlices,
+		unsigned int AudioRate,
+		unsigned int AudioLayout,
+		enum odm_combine_mode ODMModeNoDSC,
+		enum odm_combine_mode ODMModeDSC,
+		/* Output */
+		unsigned int *RequiredSlots)
+{
+	double    MaxLinkBPP;
+	unsigned int   MinDSCBPP;
+	double    MaxDSCBPP;
+	unsigned int   NonDSCBPP0;
+	unsigned int   NonDSCBPP1;
+	unsigned int   NonDSCBPP2;
+	unsigned int   NonDSCBPP3;
+
+	if (Format == dm_420) {
+		NonDSCBPP0 = 12;
+		NonDSCBPP1 = 15;
+		NonDSCBPP2 = 18;
+		MinDSCBPP = 6;
+		MaxDSCBPP = 1.5 * DSCInputBitPerComponent - 1.0 / 16;
+	} else if (Format == dm_444) {
+		NonDSCBPP0 = 18;
+		NonDSCBPP1 = 24;
+		NonDSCBPP2 = 30;
+		NonDSCBPP3 = 36;
+		MinDSCBPP = 8;
+		MaxDSCBPP = 3 * DSCInputBitPerComponent - 1.0 / 16;
+	} else {
+		if (Output == dm_hdmi) {
+			NonDSCBPP0 = 24;
+			NonDSCBPP1 = 24;
+			NonDSCBPP2 = 24;
+		} else {
+			NonDSCBPP0 = 16;
+			NonDSCBPP1 = 20;
+			NonDSCBPP2 = 24;
+		}
+		if (Format == dm_n422) {
+			MinDSCBPP = 7;
+			MaxDSCBPP = 2 * DSCInputBitPerComponent - 1.0 / 16.0;
+		} else {
+			MinDSCBPP = 8;
+			MaxDSCBPP = 3 * DSCInputBitPerComponent - 1.0 / 16.0;
+		}
+	}
+	if (Output == dm_dp2p0) {
+		MaxLinkBPP = LinkBitRate * Lanes / PixelClock * 128 / 132 * 383 / 384 * 65536 / 65540;
+	} else if (DSCEnable && Output == dm_dp) {
+		MaxLinkBPP = LinkBitRate / 10 * 8 * Lanes / PixelClock * (1 - 2.4 / 100);
+	} else {
+		MaxLinkBPP = LinkBitRate / 10 * 8 * Lanes / PixelClock;
+	}
+
+	if (DSCEnable) {
+		if (ODMModeDSC == dm_odm_combine_mode_4to1)
+			MaxLinkBPP = dml_min(MaxLinkBPP, 16);
+		else if (ODMModeDSC == dm_odm_combine_mode_2to1)
+			MaxLinkBPP = dml_min(MaxLinkBPP, 32);
+		else if (ODMModeDSC == dm_odm_split_mode_1to2)
+			MaxLinkBPP = 2 * MaxLinkBPP;
+	} else {
+		if (ODMModeNoDSC == dm_odm_combine_mode_4to1)
+			MaxLinkBPP = dml_min(MaxLinkBPP, 16);
+		else if (ODMModeNoDSC == dm_odm_combine_mode_2to1)
+			MaxLinkBPP = dml_min(MaxLinkBPP, 32);
+		else if (ODMModeNoDSC == dm_odm_split_mode_1to2)
+			MaxLinkBPP = 2 * MaxLinkBPP;
+	}
+
+	if (DesiredBPP == 0) {
+		if (DSCEnable) {
+			if (MaxLinkBPP < MinDSCBPP)
+				return BPP_INVALID;
+			else if (MaxLinkBPP >= MaxDSCBPP)
+				return MaxDSCBPP;
+			else
+				return dml_floor(16.0 * MaxLinkBPP, 1.0) / 16.0;
+		} else {
+			if (MaxLinkBPP >= NonDSCBPP3)
+				return NonDSCBPP3;
+			else if (MaxLinkBPP >= NonDSCBPP2)
+				return NonDSCBPP2;
+			else if (MaxLinkBPP >= NonDSCBPP1)
+				return NonDSCBPP1;
+			else if (MaxLinkBPP >= NonDSCBPP0)
+				return 16.0;
+			else
+				return BPP_INVALID;
+		}
+	} else {
+		if (!((DSCEnable == false && (DesiredBPP == NonDSCBPP2 || DesiredBPP == NonDSCBPP1 ||
+				DesiredBPP == NonDSCBPP0 || DesiredBPP == NonDSCBPP3)) ||
+				(DSCEnable && DesiredBPP >= MinDSCBPP && DesiredBPP <= MaxDSCBPP)))
+			return BPP_INVALID;
+		else
+			return DesiredBPP;
+	}
+
+	*RequiredSlots = dml_ceil(DesiredBPP / MaxLinkBPP * 64, 1);
+
+	return BPP_INVALID;
+} // TruncToValidBPP
+
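+/*
+ * Minimum DTBCLK for the stream: without DSC, a quarter of the pixel clock
+ * scaled by OutputBpp / 24 and floored at 25MHz; with DSC, the largest of the
+ * pixel word rate, the average tribyte rate and the HActive tribyte rate, each
+ * divided by four and floored at 25MHz, plus a 0.2% margin.
+ */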
+double dml32_RequiredDTBCLK(
+		bool              DSCEnable,
+		double               PixelClock,
+		enum output_format_class  OutputFormat,
+		double               OutputBpp,
+		unsigned int              DSCSlices,
+		unsigned int                 HTotal,
+		unsigned int                 HActive,
+		unsigned int              AudioRate,
+		unsigned int              AudioLayout)
+{
+	double PixelWordRate = PixelClock /  (OutputFormat == dm_444 ? 1 : 2);
+	double HCActive = dml_ceil(DSCSlices * dml_ceil(OutputBpp *
+			dml_ceil(HActive / DSCSlices, 1) / 8.0, 1) / 3.0, 1);
+	double HCBlank = 64 + 32 *
+			dml_ceil(AudioRate *  (AudioLayout == 1 ? 1 : 0.25) * HTotal / (PixelClock * 1000), 1);
+	double AverageTribyteRate = PixelWordRate * (HCActive + HCBlank) / HTotal;
+	double HActiveTribyteRate = PixelWordRate * HCActive / HActive;
+
+	if (DSCEnable != true)
+		return dml_max(PixelClock / 4.0 * OutputBpp / 24.0, 25.0);
+
+	return dml_max4(PixelWordRate / 4.0, AverageTribyteRate / 4.0, HActiveTribyteRate / 4.0, 25.0) * 1.002;
+}
+
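+/*
+ * DSC delay (in pixels) the pipeline must absorb: the DSCC and DSC unit delays,
+ * multiplied by the number of ODM segments, extended over the horizontal
+ * blanking interval and scaled from the back-end to the front-end pixel rate.
+ * Zero when DSC is disabled or the output bpp is not known.
+ */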
+unsigned int dml32_DSCDelayRequirement(bool DSCEnabled,
+		enum odm_combine_mode ODMMode,
+		unsigned int DSCInputBitPerComponent,
+		double OutputBpp,
+		unsigned int HActive,
+		unsigned int HTotal,
+		unsigned int NumberOfDSCSlices,
+		enum output_format_class  OutputFormat,
+		enum output_encoder_class Output,
+		double PixelClock,
+		double PixelClockBackEnd)
+{
+	unsigned int DSCDelayRequirement_val;
+
+	if (DSCEnabled == true && OutputBpp != 0) {
+		if (ODMMode == dm_odm_combine_mode_4to1) {
+			DSCDelayRequirement_val = 4 * (dml32_dscceComputeDelay(DSCInputBitPerComponent, OutputBpp,
+					dml_ceil(HActive / NumberOfDSCSlices, 1), NumberOfDSCSlices / 4,
+					OutputFormat, Output) + dml32_dscComputeDelay(OutputFormat, Output));
+		} else if (ODMMode == dm_odm_combine_mode_2to1) {
+			DSCDelayRequirement_val = 2 * (dml32_dscceComputeDelay(DSCInputBitPerComponent, OutputBpp,
+					dml_ceil(HActive / NumberOfDSCSlices, 1), NumberOfDSCSlices / 2,
+					OutputFormat, Output) + dml32_dscComputeDelay(OutputFormat, Output));
+		} else {
+			DSCDelayRequirement_val = dml32_dscceComputeDelay(DSCInputBitPerComponent, OutputBpp,
+					dml_ceil(HActive / NumberOfDSCSlices, 1), NumberOfDSCSlices,
+					OutputFormat, Output) + dml32_dscComputeDelay(OutputFormat, Output);
+		}
+
+		DSCDelayRequirement_val = DSCDelayRequirement_val + (HTotal - HActive) *
+				dml_ceil(DSCDelayRequirement_val / HActive, 1);
+
+		DSCDelayRequirement_val = DSCDelayRequirement_val * PixelClock / PixelClockBackEnd;
+
+	} else {
+		DSCDelayRequirement_val = 0;
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DSCEnabled              = %d\n", __func__, DSCEnabled);
+	dml_print("DML::%s: OutputBpp               = %f\n", __func__, OutputBpp);
+	dml_print("DML::%s: HActive                 = %d\n", __func__, HActive);
+	dml_print("DML::%s: OutputFormat            = %d\n", __func__, OutputFormat);
+	dml_print("DML::%s: DSCInputBitPerComponent = %d\n", __func__, DSCInputBitPerComponent);
+	dml_print("DML::%s: NumberOfDSCSlices       = %d\n", __func__, NumberOfDSCSlices);
+	dml_print("DML::%s: DSCDelayRequirement_val = %d\n", __func__, DSCDelayRequirement_val);
+#endif
+
+	return DSCDelayRequirement_val;
+}
+
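+/*
+ * Estimate how many bytes each surface occupies in MALL (luma and chroma plus
+ * DCC metadata), clamping to the viewport extents when the viewport is
+ * stationary, and flag the case where the surfaces using MALL for static screen
+ * exceed the MALL allocation for DCN.
+ */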
+void dml32_CalculateSurfaceSizeInMall(
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int MALLAllocatedForDCN,
+		enum dm_use_mall_for_static_screen_mode UseMALLForStaticScreen[],
+		bool DCCEnable[],
+		bool ViewportStationary[],
+		unsigned int ViewportXStartY[],
+		unsigned int ViewportYStartY[],
+		unsigned int ViewportXStartC[],
+		unsigned int ViewportYStartC[],
+		unsigned int ViewportWidthY[],
+		unsigned int ViewportHeightY[],
+		unsigned int BytesPerPixelY[],
+		unsigned int ViewportWidthC[],
+		unsigned int ViewportHeightC[],
+		unsigned int BytesPerPixelC[],
+		unsigned int SurfaceWidthY[],
+		unsigned int SurfaceWidthC[],
+		unsigned int SurfaceHeightY[],
+		unsigned int SurfaceHeightC[],
+		unsigned int Read256BytesBlockWidthY[],
+		unsigned int Read256BytesBlockWidthC[],
+		unsigned int Read256BytesBlockHeightY[],
+		unsigned int Read256BytesBlockHeightC[],
+		unsigned int ReadBlockWidthY[],
+		unsigned int ReadBlockWidthC[],
+		unsigned int ReadBlockHeightY[],
+		unsigned int ReadBlockHeightC[],
+
+		/* Output */
+		unsigned int    SurfaceSizeInMALL[],
+		bool *ExceededMALLSize)
+{
+	unsigned int TotalSurfaceSizeInMALL  = 0;
+	unsigned int k;
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (ViewportStationary[k]) {
+			SurfaceSizeInMALL[k] = dml_min(dml_ceil(SurfaceWidthY[k], ReadBlockWidthY[k]),
+					dml_floor(ViewportXStartY[k] + ViewportWidthY[k] + ReadBlockWidthY[k] - 1,
+						ReadBlockWidthY[k]) - dml_floor(ViewportXStartY[k],
+						ReadBlockWidthY[k])) * dml_min(dml_ceil(SurfaceHeightY[k],
+						ReadBlockHeightY[k]), dml_floor(ViewportYStartY[k] +
+						ViewportHeightY[k] + ReadBlockHeightY[k] - 1, ReadBlockHeightY[k]) -
+						dml_floor(ViewportYStartY[k], ReadBlockHeightY[k])) * BytesPerPixelY[k];
+
+			if (ReadBlockWidthC[k] > 0) {
+				SurfaceSizeInMALL[k] = SurfaceSizeInMALL[k] +
+						dml_min(dml_ceil(SurfaceWidthC[k], ReadBlockWidthC[k]),
+							dml_floor(ViewportXStartC[k] + ViewportWidthC[k] +
+							ReadBlockWidthC[k] - 1, ReadBlockWidthC[k]) -
+							dml_floor(ViewportXStartC[k], ReadBlockWidthC[k])) *
+							dml_min(dml_ceil(SurfaceHeightC[k], ReadBlockHeightC[k]),
+							dml_floor(ViewportYStartC[k] + ViewportHeightC[k] +
+							ReadBlockHeightC[k] - 1, ReadBlockHeightC[k]) -
+							dml_floor(ViewportYStartC[k], ReadBlockHeightC[k])) *
+							BytesPerPixelC[k];
+			}
+			if (DCCEnable[k] == true) {
+				SurfaceSizeInMALL[k] = SurfaceSizeInMALL[k] +
+						dml_min(dml_ceil(SurfaceWidthY[k], 8 * Read256BytesBlockWidthY[k]),
+							dml_floor(ViewportXStartY[k] + ViewportWidthY[k] + 8 *
+							Read256BytesBlockWidthY[k] - 1, 8 * Read256BytesBlockWidthY[k])
+							- dml_floor(ViewportXStartY[k], 8 * Read256BytesBlockWidthY[k]))
+							* dml_min(dml_ceil(SurfaceHeightY[k], 8 *
+							Read256BytesBlockHeightY[k]), dml_floor(ViewportYStartY[k] +
+							ViewportHeightY[k] + 8 * Read256BytesBlockHeightY[k] - 1, 8 *
+							Read256BytesBlockHeightY[k]) - dml_floor(ViewportYStartY[k], 8
+							* Read256BytesBlockHeightY[k])) * BytesPerPixelY[k] / 256;
+				if (Read256BytesBlockWidthC[k] > 0) {
+					SurfaceSizeInMALL[k] = SurfaceSizeInMALL[k] +
+							dml_min(dml_ceil(SurfaceWidthC[k], 8 *
+								Read256BytesBlockWidthC[k]),
+								dml_floor(ViewportXStartC[k] + ViewportWidthC[k] + 8
+								* Read256BytesBlockWidthC[k] - 1, 8 *
+								Read256BytesBlockWidthC[k]) -
+								dml_floor(ViewportXStartC[k], 8 *
+								Read256BytesBlockWidthC[k])) *
+								dml_min(dml_ceil(SurfaceHeightC[k], 8 *
+								Read256BytesBlockHeightC[k]),
+								dml_floor(ViewportYStartC[k] + ViewportHeightC[k] +
+								8 * Read256BytesBlockHeightC[k] - 1, 8 *
+								Read256BytesBlockHeightC[k]) -
+								dml_floor(ViewportYStartC[k], 8 *
+								Read256BytesBlockHeightC[k])) *
+								BytesPerPixelC[k] / 256;
+				}
+			}
+		} else {
+			SurfaceSizeInMALL[k] = dml_ceil(dml_min(SurfaceWidthY[k], ViewportWidthY[k] +
+					ReadBlockWidthY[k] - 1), ReadBlockWidthY[k]) *
+					dml_ceil(dml_min(SurfaceHeightY[k], ViewportHeightY[k] +
+							ReadBlockHeightY[k] - 1), ReadBlockHeightY[k]) *
+							BytesPerPixelY[k];
+			if (ReadBlockWidthC[k] > 0) {
+				SurfaceSizeInMALL[k] = SurfaceSizeInMALL[k] +
+						dml_ceil(dml_min(SurfaceWidthC[k], ViewportWidthC[k] +
+								ReadBlockWidthC[k] - 1), ReadBlockWidthC[k]) *
+						dml_ceil(dml_min(SurfaceHeightC[k], ViewportHeightC[k] +
+								ReadBlockHeightC[k] - 1), ReadBlockHeightC[k]) *
+								BytesPerPixelC[k];
+			}
+			if (DCCEnable[k] == true) {
+				SurfaceSizeInMALL[k] = SurfaceSizeInMALL[k] +
+						dml_ceil(dml_min(SurfaceWidthY[k], ViewportWidthY[k] + 8 *
+								Read256BytesBlockWidthY[k] - 1), 8 *
+								Read256BytesBlockWidthY[k]) *
+						dml_ceil(dml_min(SurfaceHeightY[k], ViewportHeightY[k] + 8 *
+								Read256BytesBlockHeightY[k] - 1), 8 *
+								Read256BytesBlockHeightY[k]) * BytesPerPixelY[k] / 256;
+
+				if (Read256BytesBlockWidthC[k] > 0) {
+					SurfaceSizeInMALL[k] = SurfaceSizeInMALL[k] +
+							dml_ceil(dml_min(SurfaceWidthC[k], ViewportWidthC[k] + 8 *
+									Read256BytesBlockWidthC[k] - 1), 8 *
+									Read256BytesBlockWidthC[k]) *
+							dml_ceil(dml_min(SurfaceHeightC[k], ViewportHeightC[k] + 8 *
+									Read256BytesBlockHeightC[k] - 1), 8 *
+									Read256BytesBlockHeightC[k]) *
+									BytesPerPixelC[k] / 256;
+				}
+			}
+		}
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (UseMALLForStaticScreen[k] == dm_use_mall_static_screen_enable)
+			TotalSurfaceSizeInMALL = TotalSurfaceSizeInMALL + SurfaceSizeInMALL[k];
+	}
+	*ExceededMALLSize = (TotalSurfaceSizeInMALL > MALLAllocatedForDCN * 1024 * 1024);
+} // CalculateSurfaceSizeInMall
+
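+/*
+ * Per-surface GPUVM and DCC row calculations: derive PTE and metadata row
+ * geometry, frame-level PDE/PTE byte counts and prefetch source lines for luma
+ * and chroma, decide when a single PTE row should cover the whole frame (MALL
+ * static screen, subviewport and phantom pipe cases), and check that the PTE
+ * and DCC metadata buffers are not exceeded.
+ */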
+void dml32_CalculateVMRowAndSwath(
+		unsigned int NumberOfActiveSurfaces,
+		DmlPipe myPipe[],
+		unsigned int SurfaceSizeInMALL[],
+		unsigned int PTEBufferSizeInRequestsLuma,
+		unsigned int PTEBufferSizeInRequestsChroma,
+		unsigned int DCCMetaBufferSizeBytes,
+		enum dm_use_mall_for_static_screen_mode UseMALLForStaticScreen[],
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		unsigned int MALLAllocatedForDCN,
+		double SwathWidthY[],
+		double SwathWidthC[],
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		unsigned int GPUVMMaxPageTableLevels,
+		unsigned int GPUVMMinPageSizeKBytes[],
+		unsigned int HostVMMinPageSize,
+
+		/* Output */
+		bool PTEBufferSizeNotExceeded[],
+		bool DCCMetaBufferSizeNotExceeded[],
+		unsigned int dpte_row_width_luma_ub[],
+		unsigned int dpte_row_width_chroma_ub[],
+		unsigned int dpte_row_height_luma[],
+		unsigned int dpte_row_height_chroma[],
+		unsigned int dpte_row_height_linear_luma[],     // VBA_DELTA
+		unsigned int dpte_row_height_linear_chroma[],   // VBA_DELTA
+		unsigned int meta_req_width[],
+		unsigned int meta_req_width_chroma[],
+		unsigned int meta_req_height[],
+		unsigned int meta_req_height_chroma[],
+		unsigned int meta_row_width[],
+		unsigned int meta_row_width_chroma[],
+		unsigned int meta_row_height[],
+		unsigned int meta_row_height_chroma[],
+		unsigned int vm_group_bytes[],
+		unsigned int dpte_group_bytes[],
+		unsigned int PixelPTEReqWidthY[],
+		unsigned int PixelPTEReqHeightY[],
+		unsigned int PTERequestSizeY[],
+		unsigned int PixelPTEReqWidthC[],
+		unsigned int PixelPTEReqHeightC[],
+		unsigned int PTERequestSizeC[],
+		unsigned int dpde0_bytes_per_frame_ub_l[],
+		unsigned int meta_pte_bytes_per_frame_ub_l[],
+		unsigned int dpde0_bytes_per_frame_ub_c[],
+		unsigned int meta_pte_bytes_per_frame_ub_c[],
+		double PrefetchSourceLinesY[],
+		double PrefetchSourceLinesC[],
+		double VInitPreFillY[],
+		double VInitPreFillC[],
+		unsigned int MaxNumSwathY[],
+		unsigned int MaxNumSwathC[],
+		double meta_row_bw[],
+		double dpte_row_bw[],
+		double PixelPTEBytesPerRow[],
+		double PDEAndMetaPTEBytesFrame[],
+		double MetaRowByte[],
+		bool use_one_row_for_frame[],
+		bool use_one_row_for_frame_flip[],
+		bool UsesMALLForStaticScreen[],
+		bool PTE_BUFFER_MODE[],
+		unsigned int BIGK_FRAGMENT_SIZE[])
+{
+	unsigned int k;
+	unsigned int PTEBufferSizeInRequestsForLuma[DC__NUM_DPP__MAX];
+	unsigned int PTEBufferSizeInRequestsForChroma[DC__NUM_DPP__MAX];
+	unsigned int PDEAndMetaPTEBytesFrameY;
+	unsigned int PDEAndMetaPTEBytesFrameC;
+	unsigned int MetaRowByteY[DC__NUM_DPP__MAX];
+	unsigned int MetaRowByteC[DC__NUM_DPP__MAX];
+	unsigned int PixelPTEBytesPerRowY[DC__NUM_DPP__MAX];
+	unsigned int PixelPTEBytesPerRowC[DC__NUM_DPP__MAX];
+	unsigned int PixelPTEBytesPerRowY_one_row_per_frame[DC__NUM_DPP__MAX];
+	unsigned int PixelPTEBytesPerRowC_one_row_per_frame[DC__NUM_DPP__MAX];
+	unsigned int dpte_row_width_luma_ub_one_row_per_frame[DC__NUM_DPP__MAX];
+	unsigned int dpte_row_height_luma_one_row_per_frame[DC__NUM_DPP__MAX];
+	unsigned int dpte_row_width_chroma_ub_one_row_per_frame[DC__NUM_DPP__MAX];
+	unsigned int dpte_row_height_chroma_one_row_per_frame[DC__NUM_DPP__MAX];
+	bool one_row_per_frame_fits_in_buffer[DC__NUM_DPP__MAX];
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (HostVMEnable == true) {
+			vm_group_bytes[k] = 512;
+			dpte_group_bytes[k] = 512;
+		} else if (GPUVMEnable == true) {
+			vm_group_bytes[k] = 2048;
+			if (GPUVMMinPageSizeKBytes[k] >= 64 && IsVertical(myPipe[k].SourceRotation))
+				dpte_group_bytes[k] = 512;
+			else
+				dpte_group_bytes[k] = 2048;
+		} else {
+			vm_group_bytes[k] = 0;
+			dpte_group_bytes[k] = 0;
+		}
+
+		if (myPipe[k].SourcePixelFormat == dm_420_8 || myPipe[k].SourcePixelFormat == dm_420_10 ||
+				myPipe[k].SourcePixelFormat == dm_420_12 ||
+				myPipe[k].SourcePixelFormat == dm_rgbe_alpha) {
+			if ((myPipe[k].SourcePixelFormat == dm_420_10 || myPipe[k].SourcePixelFormat == dm_420_12) &&
+					!IsVertical(myPipe[k].SourceRotation)) {
+				PTEBufferSizeInRequestsForLuma[k] =
+						(PTEBufferSizeInRequestsLuma + PTEBufferSizeInRequestsChroma) / 2;
+				PTEBufferSizeInRequestsForChroma[k] = PTEBufferSizeInRequestsForLuma[k];
+			} else {
+				PTEBufferSizeInRequestsForLuma[k] = PTEBufferSizeInRequestsLuma;
+				PTEBufferSizeInRequestsForChroma[k] = PTEBufferSizeInRequestsChroma;
+			}
+
+			PDEAndMetaPTEBytesFrameC = dml32_CalculateVMAndRowBytes(
+					myPipe[k].ViewportStationary,
+					myPipe[k].DCCEnable,
+					myPipe[k].DPPPerSurface,
+					myPipe[k].BlockHeight256BytesC,
+					myPipe[k].BlockWidth256BytesC,
+					myPipe[k].SourcePixelFormat,
+					myPipe[k].SurfaceTiling,
+					myPipe[k].BytePerPixelC,
+					myPipe[k].SourceRotation,
+					SwathWidthC[k],
+					myPipe[k].ViewportHeightChroma,
+					myPipe[k].ViewportXStartC,
+					myPipe[k].ViewportYStartC,
+					GPUVMEnable,
+					HostVMEnable,
+					HostVMMaxNonCachedPageTableLevels,
+					GPUVMMaxPageTableLevels,
+					GPUVMMinPageSizeKBytes[k],
+					HostVMMinPageSize,
+					PTEBufferSizeInRequestsForChroma[k],
+					myPipe[k].PitchC,
+					myPipe[k].DCCMetaPitchC,
+					myPipe[k].BlockWidthC,
+					myPipe[k].BlockHeightC,
+
+					/* Output */
+					&MetaRowByteC[k],
+					&PixelPTEBytesPerRowC[k],
+					&dpte_row_width_chroma_ub[k],
+					&dpte_row_height_chroma[k],
+					&dpte_row_height_linear_chroma[k],
+					&PixelPTEBytesPerRowC_one_row_per_frame[k],
+					&dpte_row_width_chroma_ub_one_row_per_frame[k],
+					&dpte_row_height_chroma_one_row_per_frame[k],
+					&meta_req_width_chroma[k],
+					&meta_req_height_chroma[k],
+					&meta_row_width_chroma[k],
+					&meta_row_height_chroma[k],
+					&PixelPTEReqWidthC[k],
+					&PixelPTEReqHeightC[k],
+					&PTERequestSizeC[k],
+					&dpde0_bytes_per_frame_ub_c[k],
+					&meta_pte_bytes_per_frame_ub_c[k]);
+
+			PrefetchSourceLinesC[k] = dml32_CalculatePrefetchSourceLines(
+					myPipe[k].VRatioChroma,
+					myPipe[k].VTapsChroma,
+					myPipe[k].InterlaceEnable,
+					myPipe[k].ProgressiveToInterlaceUnitInOPP,
+					myPipe[k].SwathHeightC,
+					myPipe[k].SourceRotation,
+					myPipe[k].ViewportStationary,
+					SwathWidthC[k],
+					myPipe[k].ViewportHeightChroma,
+					myPipe[k].ViewportXStartC,
+					myPipe[k].ViewportYStartC,
+
+					/* Output */
+					&VInitPreFillC[k],
+					&MaxNumSwathC[k]);
+		} else {
+			PTEBufferSizeInRequestsForLuma[k] = PTEBufferSizeInRequestsLuma + PTEBufferSizeInRequestsChroma;
+			PTEBufferSizeInRequestsForChroma[k] = 0;
+			PixelPTEBytesPerRowC[k] = 0;
+			PDEAndMetaPTEBytesFrameC = 0;
+			MetaRowByteC[k] = 0;
+			MaxNumSwathC[k] = 0;
+			PrefetchSourceLinesC[k] = 0;
+			dpte_row_height_chroma_one_row_per_frame[k] = 0;
+			dpte_row_width_chroma_ub_one_row_per_frame[k] = 0;
+			PixelPTEBytesPerRowC_one_row_per_frame[k] = 0;
+		}
+
+		PDEAndMetaPTEBytesFrameY = dml32_CalculateVMAndRowBytes(
+				myPipe[k].ViewportStationary,
+				myPipe[k].DCCEnable,
+				myPipe[k].DPPPerSurface,
+				myPipe[k].BlockHeight256BytesY,
+				myPipe[k].BlockWidth256BytesY,
+				myPipe[k].SourcePixelFormat,
+				myPipe[k].SurfaceTiling,
+				myPipe[k].BytePerPixelY,
+				myPipe[k].SourceRotation,
+				SwathWidthY[k],
+				myPipe[k].ViewportHeight,
+				myPipe[k].ViewportXStart,
+				myPipe[k].ViewportYStart,
+				GPUVMEnable,
+				HostVMEnable,
+				HostVMMaxNonCachedPageTableLevels,
+				GPUVMMaxPageTableLevels,
+				GPUVMMinPageSizeKBytes[k],
+				HostVMMinPageSize,
+				PTEBufferSizeInRequestsForLuma[k],
+				myPipe[k].PitchY,
+				myPipe[k].DCCMetaPitchY,
+				myPipe[k].BlockWidthY,
+				myPipe[k].BlockHeightY,
+
+				/* Output */
+				&MetaRowByteY[k],
+				&PixelPTEBytesPerRowY[k],
+				&dpte_row_width_luma_ub[k],
+				&dpte_row_height_luma[k],
+				&dpte_row_height_linear_luma[k],
+				&PixelPTEBytesPerRowY_one_row_per_frame[k],
+				&dpte_row_width_luma_ub_one_row_per_frame[k],
+				&dpte_row_height_luma_one_row_per_frame[k],
+				&meta_req_width[k],
+				&meta_req_height[k],
+				&meta_row_width[k],
+				&meta_row_height[k],
+				&PixelPTEReqWidthY[k],
+				&PixelPTEReqHeightY[k],
+				&PTERequestSizeY[k],
+				&dpde0_bytes_per_frame_ub_l[k],
+				&meta_pte_bytes_per_frame_ub_l[k]);
+
+		PrefetchSourceLinesY[k] = dml32_CalculatePrefetchSourceLines(
+				myPipe[k].VRatio,
+				myPipe[k].VTaps,
+				myPipe[k].InterlaceEnable,
+				myPipe[k].ProgressiveToInterlaceUnitInOPP,
+				myPipe[k].SwathHeightY,
+				myPipe[k].SourceRotation,
+				myPipe[k].ViewportStationary,
+				SwathWidthY[k],
+				myPipe[k].ViewportHeight,
+				myPipe[k].ViewportXStart,
+				myPipe[k].ViewportYStart,
+
+				/* Output */
+				&VInitPreFillY[k],
+				&MaxNumSwathY[k]);
+
+		PDEAndMetaPTEBytesFrame[k] = PDEAndMetaPTEBytesFrameY + PDEAndMetaPTEBytesFrameC;
+		MetaRowByte[k] = MetaRowByteY[k] + MetaRowByteC[k];
+
+		if (PixelPTEBytesPerRowY[k] <= 64 * PTEBufferSizeInRequestsForLuma[k] &&
+				PixelPTEBytesPerRowC[k] <= 64 * PTEBufferSizeInRequestsForChroma[k]) {
+			PTEBufferSizeNotExceeded[k] = true;
+		} else {
+			PTEBufferSizeNotExceeded[k] = false;
+		}
+
+		one_row_per_frame_fits_in_buffer[k] = (PixelPTEBytesPerRowY_one_row_per_frame[k] <= 64 * 2 *
+			PTEBufferSizeInRequestsForLuma[k] &&
+			PixelPTEBytesPerRowC_one_row_per_frame[k] <= 64 * 2 * PTEBufferSizeInRequestsForChroma[k]);
+	}
+
+	dml32_CalculateMALLUseForStaticScreen(
+			NumberOfActiveSurfaces,
+			MALLAllocatedForDCN,
+			UseMALLForStaticScreen,   // mode
+			SurfaceSizeInMALL,
+			one_row_per_frame_fits_in_buffer,
+			/* Output */
+			UsesMALLForStaticScreen); // boolean
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		PTE_BUFFER_MODE[k] = myPipe[k].FORCE_ONE_ROW_FOR_FRAME || UsesMALLForStaticScreen[k] ||
+				(UseMALLForPStateChange[k] == dm_use_mall_pstate_change_sub_viewport) ||
+				(UseMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe) ||
+				(GPUVMMinPageSizeKBytes[k] > 64);
+		BIGK_FRAGMENT_SIZE[k] = dml_log2(GPUVMMinPageSizeKBytes[k] * 1024) - 12;
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, SurfaceSizeInMALL = %d\n",  __func__, k, SurfaceSizeInMALL[k]);
+		dml_print("DML::%s: k=%d, UsesMALLForStaticScreen = %d\n",  __func__, k, UsesMALLForStaticScreen[k]);
+#endif
+		use_one_row_for_frame[k] = myPipe[k].FORCE_ONE_ROW_FOR_FRAME || UsesMALLForStaticScreen[k] ||
+				(UseMALLForPStateChange[k] == dm_use_mall_pstate_change_sub_viewport) ||
+				(UseMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe) ||
+				(GPUVMMinPageSizeKBytes[k] > 64 && IsVertical(myPipe[k].SourceRotation));
+
+		use_one_row_for_frame_flip[k] = use_one_row_for_frame[k] &&
+				!(UseMALLForPStateChange[k] == dm_use_mall_pstate_change_full_frame);
+
+		if (use_one_row_for_frame[k]) {
+			dpte_row_height_luma[k] = dpte_row_height_luma_one_row_per_frame[k];
+			dpte_row_width_luma_ub[k] = dpte_row_width_luma_ub_one_row_per_frame[k];
+			PixelPTEBytesPerRowY[k] = PixelPTEBytesPerRowY_one_row_per_frame[k];
+			dpte_row_height_chroma[k] = dpte_row_height_chroma_one_row_per_frame[k];
+			dpte_row_width_chroma_ub[k] = dpte_row_width_chroma_ub_one_row_per_frame[k];
+			PixelPTEBytesPerRowC[k] = PixelPTEBytesPerRowC_one_row_per_frame[k];
+			PTEBufferSizeNotExceeded[k] = one_row_per_frame_fits_in_buffer[k];
+		}
+
+		if (MetaRowByte[k] <= DCCMetaBufferSizeBytes)
+			DCCMetaBufferSizeNotExceeded[k] = true;
+		else
+			DCCMetaBufferSizeNotExceeded[k] = false;
+
+		PixelPTEBytesPerRow[k] = PixelPTEBytesPerRowY[k] + PixelPTEBytesPerRowC[k];
+		if (use_one_row_for_frame[k])
+			PixelPTEBytesPerRow[k] = PixelPTEBytesPerRow[k] / 2;
+
+		dml32_CalculateRowBandwidth(
+				GPUVMEnable,
+				myPipe[k].SourcePixelFormat,
+				myPipe[k].VRatio,
+				myPipe[k].VRatioChroma,
+				myPipe[k].DCCEnable,
+				myPipe[k].HTotal / myPipe[k].PixelClock,
+				MetaRowByteY[k], MetaRowByteC[k],
+				meta_row_height[k],
+				meta_row_height_chroma[k],
+				PixelPTEBytesPerRowY[k],
+				PixelPTEBytesPerRowC[k],
+				dpte_row_height_luma[k],
+				dpte_row_height_chroma[k],
+
+				/* Output */
+				&meta_row_bw[k],
+				&dpte_row_bw[k]);
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, use_one_row_for_frame        = %d\n",  __func__, k, use_one_row_for_frame[k]);
+		dml_print("DML::%s: k=%d, use_one_row_for_frame_flip   = %d\n",
+				__func__, k, use_one_row_for_frame_flip[k]);
+		dml_print("DML::%s: k=%d, UseMALLForPStateChange       = %d\n",
+				__func__, k, UseMALLForPStateChange[k]);
+		dml_print("DML::%s: k=%d, dpte_row_height_luma         = %d\n",  __func__, k, dpte_row_height_luma[k]);
+		dml_print("DML::%s: k=%d, dpte_row_width_luma_ub       = %d\n",
+				__func__, k, dpte_row_width_luma_ub[k]);
+		dml_print("DML::%s: k=%d, PixelPTEBytesPerRowY         = %d\n",  __func__, k, PixelPTEBytesPerRowY[k]);
+		dml_print("DML::%s: k=%d, dpte_row_height_chroma       = %d\n",
+				__func__, k, dpte_row_height_chroma[k]);
+		dml_print("DML::%s: k=%d, dpte_row_width_chroma_ub     = %d\n",
+				__func__, k, dpte_row_width_chroma_ub[k]);
+		dml_print("DML::%s: k=%d, PixelPTEBytesPerRowC         = %d\n",  __func__, k, PixelPTEBytesPerRowC[k]);
+		dml_print("DML::%s: k=%d, PixelPTEBytesPerRow          = %d\n",  __func__, k, PixelPTEBytesPerRow[k]);
+		dml_print("DML::%s: k=%d, PTEBufferSizeNotExceeded     = %d\n",
+				__func__, k, PTEBufferSizeNotExceeded[k]);
+		dml_print("DML::%s: k=%d, PTE_BUFFER_MODE              = %d\n", __func__, k, PTE_BUFFER_MODE[k]);
+		dml_print("DML::%s: k=%d, BIGK_FRAGMENT_SIZE           = %d\n", __func__, k, BIGK_FRAGMENT_SIZE[k]);
+#endif
+	}
+} // CalculateVMRowAndSwath
+
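+/*
+ * Single-plane helper for the above: compute the metadata and PTE request
+ * geometry, the bytes per PTE row (regular, linear and one-row-per-frame
+ * variants) and the frame-level PDE/meta-PTE byte counts, returning the latter.
+ * HostVM dynamic page table levels scale up the VM fetch sizes.
+ */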
+unsigned int dml32_CalculateVMAndRowBytes(
+		bool ViewportStationary,
+		bool DCCEnable,
+		unsigned int NumberOfDPPs,
+		unsigned int BlockHeight256Bytes,
+		unsigned int BlockWidth256Bytes,
+		enum source_format_class SourcePixelFormat,
+		unsigned int SurfaceTiling,
+		unsigned int BytePerPixel,
+		enum dm_rotation_angle SourceRotation,
+		double SwathWidth,
+		unsigned int ViewportHeight,
+		unsigned int    ViewportXStart,
+		unsigned int    ViewportYStart,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		unsigned int GPUVMMaxPageTableLevels,
+		unsigned int GPUVMMinPageSizeKBytes,
+		unsigned int HostVMMinPageSize,
+		unsigned int PTEBufferSizeInRequests,
+		unsigned int Pitch,
+		unsigned int DCCMetaPitch,
+		unsigned int MacroTileWidth,
+		unsigned int MacroTileHeight,
+
+		/* Output */
+		unsigned int *MetaRowByte,
+		unsigned int *PixelPTEBytesPerRow,
+		unsigned int    *dpte_row_width_ub,
+		unsigned int *dpte_row_height,
+		unsigned int *dpte_row_height_linear,
+		unsigned int    *PixelPTEBytesPerRow_one_row_per_frame,
+		unsigned int    *dpte_row_width_ub_one_row_per_frame,
+		unsigned int    *dpte_row_height_one_row_per_frame,
+		unsigned int *MetaRequestWidth,
+		unsigned int *MetaRequestHeight,
+		unsigned int *meta_row_width,
+		unsigned int *meta_row_height,
+		unsigned int *PixelPTEReqWidth,
+		unsigned int *PixelPTEReqHeight,
+		unsigned int *PTERequestSize,
+		unsigned int    *DPDE0BytesFrame,
+		unsigned int    *MetaPTEBytesFrame)
+{
+	unsigned int MPDEBytesFrame;
+	unsigned int DCCMetaSurfaceBytes;
+	unsigned int ExtraDPDEBytesFrame;
+	unsigned int PDEAndMetaPTEBytesFrame;
+	unsigned int HostVMDynamicLevels = 0;
+	unsigned int    MacroTileSizeBytes;
+	unsigned int    vp_height_meta_ub;
+	unsigned int    vp_height_dpte_ub;
+	unsigned int PixelPTEReqWidth_linear = 0; // VBA_DELTA. VBA doesn't calculate this
+
+	if (GPUVMEnable == true && HostVMEnable == true) {
+		if (HostVMMinPageSize < 2048)
+			HostVMDynamicLevels = HostVMMaxNonCachedPageTableLevels;
+		else if (HostVMMinPageSize >= 2048 && HostVMMinPageSize < 1048576)
+			HostVMDynamicLevels = dml_max(0, (int) HostVMMaxNonCachedPageTableLevels - 1);
+		else
+			HostVMDynamicLevels = dml_max(0, (int) HostVMMaxNonCachedPageTableLevels - 2);
+	}
+
+	*MetaRequestHeight = 8 * BlockHeight256Bytes;
+	*MetaRequestWidth = 8 * BlockWidth256Bytes;
+	if (SurfaceTiling == dm_sw_linear) {
+		*meta_row_height = 32;
+		*meta_row_width = dml_floor(ViewportXStart + SwathWidth + *MetaRequestWidth - 1, *MetaRequestWidth)
+				- dml_floor(ViewportXStart, *MetaRequestWidth);
+	} else if (!IsVertical(SourceRotation)) {
+		*meta_row_height = *MetaRequestHeight;
+		if (ViewportStationary && NumberOfDPPs == 1) {
+			*meta_row_width = dml_floor(ViewportXStart + SwathWidth + *MetaRequestWidth - 1,
+					*MetaRequestWidth) - dml_floor(ViewportXStart, *MetaRequestWidth);
+		} else {
+			*meta_row_width = dml_ceil(SwathWidth - 1, *MetaRequestWidth) + *MetaRequestWidth;
+		}
+		*MetaRowByte = *meta_row_width * *MetaRequestHeight * BytePerPixel / 256.0;
+	} else {
+		*meta_row_height = *MetaRequestWidth;
+		if (ViewportStationary && NumberOfDPPs == 1) {
+			*meta_row_width = dml_floor(ViewportYStart + ViewportHeight + *MetaRequestHeight - 1,
+					*MetaRequestHeight) - dml_floor(ViewportYStart, *MetaRequestHeight);
+		} else {
+			*meta_row_width = dml_ceil(SwathWidth - 1, *MetaRequestHeight) + *MetaRequestHeight;
+		}
+		*MetaRowByte = *meta_row_width * *MetaRequestWidth * BytePerPixel / 256.0;
+	}
+
+	if (ViewportStationary && (NumberOfDPPs == 1 || !IsVertical(SourceRotation))) {
+		vp_height_meta_ub = dml_floor(ViewportYStart + ViewportHeight + 64 * BlockHeight256Bytes - 1,
+				64 * BlockHeight256Bytes) - dml_floor(ViewportYStart, 64 * BlockHeight256Bytes);
+	} else if (!IsVertical(SourceRotation)) {
+		vp_height_meta_ub = dml_ceil(ViewportHeight - 1, 64 * BlockHeight256Bytes) + 64 * BlockHeight256Bytes;
+	} else {
+		vp_height_meta_ub = dml_ceil(SwathWidth - 1, 64 * BlockHeight256Bytes) + 64 * BlockHeight256Bytes;
+	}
+
+	DCCMetaSurfaceBytes = DCCMetaPitch * vp_height_meta_ub * BytePerPixel / 256.0;
+
+	if (GPUVMEnable == true) {
+		*MetaPTEBytesFrame = (dml_ceil((double) (DCCMetaSurfaceBytes - 4.0 * 1024.0) /
+				(8 * 4.0 * 1024), 1) + 1) * 64;
+		MPDEBytesFrame = 128 * (GPUVMMaxPageTableLevels - 1);
+	} else {
+		*MetaPTEBytesFrame = 0;
+		MPDEBytesFrame = 0;
+	}
+
+	if (DCCEnable != true) {
+		*MetaPTEBytesFrame = 0;
+		MPDEBytesFrame = 0;
+		*MetaRowByte = 0;
+	}
+
+	MacroTileSizeBytes = MacroTileWidth * BytePerPixel * MacroTileHeight;
+
+	if (GPUVMEnable == true && GPUVMMaxPageTableLevels > 1) {
+		if (ViewportStationary && (NumberOfDPPs == 1 || !IsVertical(SourceRotation))) {
+			vp_height_dpte_ub = dml_floor(ViewportYStart + ViewportHeight +
+					MacroTileHeight - 1, MacroTileHeight) -
+					dml_floor(ViewportYStart, MacroTileHeight);
+		} else if (!IsVertical(SourceRotation)) {
+			vp_height_dpte_ub = dml_ceil(ViewportHeight - 1, MacroTileHeight) + MacroTileHeight;
+		} else {
+			vp_height_dpte_ub = dml_ceil(SwathWidth - 1, MacroTileHeight) + MacroTileHeight;
+		}
+		*DPDE0BytesFrame = 64 * (dml_ceil((Pitch * vp_height_dpte_ub * BytePerPixel - MacroTileSizeBytes) /
+				(8 * 2097152), 1) + 1);
+		ExtraDPDEBytesFrame = 128 * (GPUVMMaxPageTableLevels - 2);
+	} else {
+		*DPDE0BytesFrame = 0;
+		ExtraDPDEBytesFrame = 0;
+		vp_height_dpte_ub = 0;
+	}
+
+	PDEAndMetaPTEBytesFrame = *MetaPTEBytesFrame + MPDEBytesFrame + *DPDE0BytesFrame + ExtraDPDEBytesFrame;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DCCEnable = %d\n", __func__, DCCEnable);
+	dml_print("DML::%s: GPUVMEnable = %d\n", __func__, GPUVMEnable);
+	dml_print("DML::%s: SwModeLinear = %d\n", __func__, SurfaceTiling == dm_sw_linear);
+	dml_print("DML::%s: BytePerPixel = %d\n", __func__, BytePerPixel);
+	dml_print("DML::%s: GPUVMMaxPageTableLevels = %d\n", __func__, GPUVMMaxPageTableLevels);
+	dml_print("DML::%s: BlockHeight256Bytes = %d\n", __func__, BlockHeight256Bytes);
+	dml_print("DML::%s: BlockWidth256Bytes = %d\n", __func__, BlockWidth256Bytes);
+	dml_print("DML::%s: MacroTileHeight = %d\n", __func__, MacroTileHeight);
+	dml_print("DML::%s: MacroTileWidth = %d\n", __func__, MacroTileWidth);
+	dml_print("DML::%s: MetaPTEBytesFrame = %d\n", __func__, *MetaPTEBytesFrame);
+	dml_print("DML::%s: MPDEBytesFrame = %d\n", __func__, MPDEBytesFrame);
+	dml_print("DML::%s: DPDE0BytesFrame = %d\n", __func__, *DPDE0BytesFrame);
+	dml_print("DML::%s: ExtraDPDEBytesFrame= %d\n", __func__, ExtraDPDEBytesFrame);
+	dml_print("DML::%s: PDEAndMetaPTEBytesFrame = %d\n", __func__, PDEAndMetaPTEBytesFrame);
+	dml_print("DML::%s: ViewportHeight = %d\n", __func__, ViewportHeight);
+	dml_print("DML::%s: SwathWidth = %d\n", __func__, SwathWidth);
+	dml_print("DML::%s: vp_height_dpte_ub = %d\n", __func__, vp_height_dpte_ub);
+#endif
+
+	if (HostVMEnable == true)
+		PDEAndMetaPTEBytesFrame = PDEAndMetaPTEBytesFrame * (1 + 8 * HostVMDynamicLevels);
+
+	if (SurfaceTiling == dm_sw_linear) {
+		*PixelPTEReqHeight = 1;
+		*PixelPTEReqWidth = GPUVMMinPageSizeKBytes * 1024 * 8 / BytePerPixel;
+		PixelPTEReqWidth_linear = GPUVMMinPageSizeKBytes * 1024 * 8 / BytePerPixel;
+		*PTERequestSize = 64;
+	} else if (GPUVMMinPageSizeKBytes == 4) {
+		*PixelPTEReqHeight = 16 * BlockHeight256Bytes;
+		*PixelPTEReqWidth = 16 * BlockWidth256Bytes;
+		*PTERequestSize = 128;
+	} else {
+		*PixelPTEReqHeight = MacroTileHeight;
+		*PixelPTEReqWidth = 8 *  1024 * GPUVMMinPageSizeKBytes / (MacroTileHeight * BytePerPixel);
+		*PTERequestSize = 64;
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: GPUVMMinPageSizeKBytes = %d\n", __func__, GPUVMMinPageSizeKBytes);
+	dml_print("DML::%s: PDEAndMetaPTEBytesFrame = %d (after HostVM factor)\n", __func__, PDEAndMetaPTEBytesFrame);
+	dml_print("DML::%s: PixelPTEReqHeight = %d\n", __func__, *PixelPTEReqHeight);
+	dml_print("DML::%s: PixelPTEReqWidth = %d\n", __func__, *PixelPTEReqWidth);
+	dml_print("DML::%s: PixelPTEReqWidth_linear = %d\n", __func__, PixelPTEReqWidth_linear);
+	dml_print("DML::%s: PTERequestSize = %d\n", __func__, *PTERequestSize);
+	dml_print("DML::%s: Pitch = %d\n", __func__, Pitch);
+#endif
+
+	*dpte_row_height_one_row_per_frame = vp_height_dpte_ub;
+	*dpte_row_width_ub_one_row_per_frame = (dml_ceil(((double)Pitch * (double)*dpte_row_height_one_row_per_frame /
+			(double) *PixelPTEReqHeight - 1) / (double) *PixelPTEReqWidth, 1) + 1) *
+					(double) *PixelPTEReqWidth;
+	*PixelPTEBytesPerRow_one_row_per_frame = *dpte_row_width_ub_one_row_per_frame / *PixelPTEReqWidth *
+			*PTERequestSize;
+
+	if (SurfaceTiling == dm_sw_linear) {
+		*dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests *
+				*PixelPTEReqWidth / Pitch), 1));
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: dpte_row_height = %d (1)\n", __func__,
+				PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch);
+		dml_print("DML::%s: dpte_row_height = %f (2)\n", __func__,
+				dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch));
+		dml_print("DML::%s: dpte_row_height = %f (3)\n", __func__,
+				dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
+		dml_print("DML::%s: dpte_row_height = %d (4)\n", __func__,
+				1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests *
+						*PixelPTEReqWidth / Pitch), 1));
+		dml_print("DML::%s: dpte_row_height = %d\n", __func__, *dpte_row_height);
+#endif
+		*dpte_row_width_ub = dml_ceil(((double) Pitch * (double) *dpte_row_height - 1),
+				(double) *PixelPTEReqWidth) + *PixelPTEReqWidth;
+		*PixelPTEBytesPerRow = *dpte_row_width_ub / (double)*PixelPTEReqWidth * (double)*PTERequestSize;
+
+		// VBA_DELTA: VBA doesn't have a programming value for the linear PTE row height.
+		*dpte_row_height_linear = 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests *
+				PixelPTEReqWidth_linear / Pitch), 1);
+		if (*dpte_row_height_linear > 128)
+			*dpte_row_height_linear = 128;
+
+	} else if (!IsVertical(SourceRotation)) {
+		*dpte_row_height = *PixelPTEReqHeight;
+
+		if (GPUVMMinPageSizeKBytes > 64) {
+			*dpte_row_width_ub = (dml_ceil((Pitch * *dpte_row_height / *PixelPTEReqHeight - 1) /
+					*PixelPTEReqWidth, 1) + 1) * *PixelPTEReqWidth;
+		} else if (ViewportStationary && (NumberOfDPPs == 1)) {
+			*dpte_row_width_ub = dml_floor(ViewportXStart + SwathWidth +
+					*PixelPTEReqWidth - 1, *PixelPTEReqWidth) -
+					dml_floor(ViewportXStart, *PixelPTEReqWidth);
+		} else {
+			*dpte_row_width_ub = (dml_ceil((SwathWidth - 1) / *PixelPTEReqWidth, 1) + 1) *
+					*PixelPTEReqWidth;
+		}
+
+		*PixelPTEBytesPerRow = *dpte_row_width_ub / *PixelPTEReqWidth * *PTERequestSize;
+	} else {
+		*dpte_row_height = dml_min(*PixelPTEReqWidth, MacroTileWidth);
+
+		if (ViewportStationary && (NumberOfDPPs == 1)) {
+			*dpte_row_width_ub = dml_floor(ViewportYStart + ViewportHeight + *PixelPTEReqHeight - 1,
+					*PixelPTEReqHeight) - dml_floor(ViewportYStart, *PixelPTEReqHeight);
+		} else {
+			*dpte_row_width_ub = (dml_ceil((SwathWidth - 1) / *PixelPTEReqHeight, 1) + 1)
+					* *PixelPTEReqHeight;
+		}
+
+		*PixelPTEBytesPerRow = *dpte_row_width_ub / *PixelPTEReqHeight * *PTERequestSize;
+	}
+
+	if (GPUVMEnable != true)
+		*PixelPTEBytesPerRow = 0;
+	if (HostVMEnable == true)
+		*PixelPTEBytesPerRow = *PixelPTEBytesPerRow * (1 + 8 * HostVMDynamicLevels);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: GPUVMMinPageSizeKBytes = %d\n", __func__, GPUVMMinPageSizeKBytes);
+	dml_print("DML::%s: dpte_row_height = %d\n", __func__, *dpte_row_height);
+	dml_print("DML::%s: dpte_row_height_linear = %d\n", __func__, *dpte_row_height_linear);
+	dml_print("DML::%s: dpte_row_width_ub = %d\n", __func__, *dpte_row_width_ub);
+	dml_print("DML::%s: PixelPTEBytesPerRow = %d\n", __func__, *PixelPTEBytesPerRow);
+	dml_print("DML::%s: PTEBufferSizeInRequests = %d\n", __func__, PTEBufferSizeInRequests);
+	dml_print("DML::%s: dpte_row_height_one_row_per_frame = %d\n", __func__, *dpte_row_height_one_row_per_frame);
+	dml_print("DML::%s: dpte_row_width_ub_one_row_per_frame = %d\n",
+			__func__, *dpte_row_width_ub_one_row_per_frame);
+	dml_print("DML::%s: PixelPTEBytesPerRow_one_row_per_frame = %d\n",
+			__func__, *PixelPTEBytesPerRow_one_row_per_frame);
+	dml_print("DML: vm_bytes = meta_pte_bytes_per_frame (per_pipe) = MetaPTEBytesFrame = %d\n",
+			*MetaPTEBytesFrame);
+#endif
+
+	return PDEAndMetaPTEBytesFrame;
+} // CalculateVMAndRowBytes
+
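+/*
+ * Number of source lines that must be prefetched for a surface: the vertical
+ * init prefill plus the full and partial swaths needed to cover it, taking
+ * the viewport start and rotation into account when the viewport is
+ * stationary.  Also reports VInitPreFill and MaxNumSwath.
+ */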
+double dml32_CalculatePrefetchSourceLines(
+		double VRatio,
+		unsigned int VTaps,
+		bool Interlace,
+		bool ProgressiveToInterlaceUnitInOPP,
+		unsigned int SwathHeight,
+		enum dm_rotation_angle SourceRotation,
+		bool ViewportStationary,
+		double SwathWidth,
+		unsigned int ViewportHeight,
+		unsigned int ViewportXStart,
+		unsigned int ViewportYStart,
+
+		/* Output */
+		double *VInitPreFill,
+		unsigned int *MaxNumSwath)
+{
+
+	unsigned int vp_start_rot;
+	unsigned int sw0_tmp;
+	unsigned int MaxPartialSwath;
+	double numLines;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: VRatio = %f\n", __func__, VRatio);
+	dml_print("DML::%s: VTaps = %d\n", __func__, VTaps);
+	dml_print("DML::%s: ViewportXStart = %d\n", __func__, ViewportXStart);
+	dml_print("DML::%s: ViewportYStart = %d\n", __func__, ViewportYStart);
+	dml_print("DML::%s: ViewportStationary = %d\n", __func__, ViewportStationary);
+	dml_print("DML::%s: SwathHeight = %d\n", __func__, SwathHeight);
+#endif
+	if (ProgressiveToInterlaceUnitInOPP)
+		*VInitPreFill = dml_floor((VRatio + (double) VTaps + 1) / 2.0, 1);
+	else
+		*VInitPreFill = dml_floor((VRatio + (double) VTaps + 1 + Interlace * 0.5 * VRatio) / 2.0, 1);
+
+	if (ViewportStationary) {
+		if (SourceRotation == dm_rotation_180 || SourceRotation == dm_rotation_180m) {
+			vp_start_rot = SwathHeight -
+					(((unsigned int) (ViewportYStart + ViewportHeight - 1) % SwathHeight) + 1);
+		} else if (SourceRotation == dm_rotation_270 || SourceRotation == dm_rotation_90m) {
+			vp_start_rot = ViewportXStart;
+		} else if (SourceRotation == dm_rotation_90 || SourceRotation == dm_rotation_270m) {
+			vp_start_rot = SwathHeight -
+					(((unsigned int)(ViewportYStart + SwathWidth - 1) % SwathHeight) + 1);
+		} else {
+			vp_start_rot = ViewportYStart;
+		}
+		sw0_tmp = SwathHeight - (vp_start_rot % SwathHeight);
+		if (sw0_tmp < *VInitPreFill)
+			*MaxNumSwath = dml_ceil((*VInitPreFill - sw0_tmp) / SwathHeight, 1) + 1;
+		else
+			*MaxNumSwath = 1;
+		MaxPartialSwath = dml_max(1, (unsigned int) (vp_start_rot + *VInitPreFill - 1) % SwathHeight);
+	} else {
+		*MaxNumSwath = dml_ceil((*VInitPreFill - 1.0) / SwathHeight, 1) + 1;
+		if (*VInitPreFill > 1)
+			MaxPartialSwath = dml_max(1, (unsigned int) (*VInitPreFill - 2) % SwathHeight);
+		else
+			MaxPartialSwath = dml_max(1, (unsigned int) (*VInitPreFill + SwathHeight - 2) % SwathHeight);
+	}
+	numLines = *MaxNumSwath * SwathHeight + MaxPartialSwath;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: vp_start_rot = %d\n", __func__, vp_start_rot);
+	dml_print("DML::%s: VInitPreFill = %f\n", __func__, *VInitPreFill);
+	dml_print("DML::%s: MaxPartialSwath = %d\n", __func__, MaxPartialSwath);
+	dml_print("DML::%s: MaxNumSwath = %d\n", __func__, *MaxNumSwath);
+	dml_print("DML::%s: Prefetch source lines = %3.2f\n", __func__, numLines);
+#endif
+	return numLines;
+
+} // CalculatePrefetchSourceLines
+
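+/*
+ * Decide which surfaces keep their content in MALL for static screen.
+ * Surfaces with the "enable" policy are taken unconditionally; the remaining
+ * MALL space is then filled greedily, smallest eligible surface first, as
+ * long as one row per frame fits in the buffer.
+ */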
+void dml32_CalculateMALLUseForStaticScreen(
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int MALLAllocatedForDCNFinal,
+		enum dm_use_mall_for_static_screen_mode *UseMALLForStaticScreen,
+		unsigned int SurfaceSizeInMALL[],
+		bool one_row_per_frame_fits_in_buffer[],
+
+		/* output */
+		bool UsesMALLForStaticScreen[])
+{
+	unsigned int k;
+	unsigned int SurfaceToAddToMALL;
+	bool CanAddAnotherSurfaceToMALL;
+	unsigned int TotalSurfaceSizeInMALL;
+
+	TotalSurfaceSizeInMALL = 0;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		UsesMALLForStaticScreen[k] = (UseMALLForStaticScreen[k] == dm_use_mall_static_screen_enable);
+		if (UsesMALLForStaticScreen[k])
+			TotalSurfaceSizeInMALL = TotalSurfaceSizeInMALL + SurfaceSizeInMALL[k];
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, UsesMALLForStaticScreen = %d\n",  __func__, k, UsesMALLForStaticScreen[k]);
+		dml_print("DML::%s: k=%d, TotalSurfaceSizeInMALL = %d\n",  __func__, k, TotalSurfaceSizeInMALL);
+#endif
+	}
+
+	SurfaceToAddToMALL = 0;
+	CanAddAnotherSurfaceToMALL = true;
+	while (CanAddAnotherSurfaceToMALL) {
+		CanAddAnotherSurfaceToMALL = false;
+		for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+			if (TotalSurfaceSizeInMALL + SurfaceSizeInMALL[k] <= MALLAllocatedForDCNFinal * 1024 * 1024 &&
+					!UsesMALLForStaticScreen[k] &&
+					UseMALLForStaticScreen[k] != dm_use_mall_static_screen_disable &&
+					one_row_per_frame_fits_in_buffer[k] &&
+					(!CanAddAnotherSurfaceToMALL ||
+					SurfaceSizeInMALL[k] < SurfaceSizeInMALL[SurfaceToAddToMALL])) {
+				CanAddAnotherSurfaceToMALL = true;
+				SurfaceToAddToMALL = k;
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: k=%d, UseMALLForStaticScreen = %d (dis, en, optimize)\n",
+						__func__, k, UseMALLForStaticScreen[k]);
+#endif
+			}
+		}
+		if (CanAddAnotherSurfaceToMALL) {
+			UsesMALLForStaticScreen[SurfaceToAddToMALL] = true;
+			TotalSurfaceSizeInMALL = TotalSurfaceSizeInMALL + SurfaceSizeInMALL[SurfaceToAddToMALL];
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: SurfaceToAddToMALL       = %d\n",  __func__, SurfaceToAddToMALL);
+			dml_print("DML::%s: TotalSurfaceSizeInMALL   = %d\n",  __func__, TotalSurfaceSizeInMALL);
+#endif
+
+		}
+	}
+}
+
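+/*
+ * Average bandwidth needed to keep fetching the DCC meta rows and the GPUVM
+ * PTE rows during active; chroma is added for 4:2:0 and rgbe_alpha formats.
+ */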
+void dml32_CalculateRowBandwidth(
+		bool GPUVMEnable,
+		enum source_format_class SourcePixelFormat,
+		double VRatio,
+		double VRatioChroma,
+		bool DCCEnable,
+		double LineTime,
+		unsigned int MetaRowByteLuma,
+		unsigned int MetaRowByteChroma,
+		unsigned int meta_row_height_luma,
+		unsigned int meta_row_height_chroma,
+		unsigned int PixelPTEBytesPerRowLuma,
+		unsigned int PixelPTEBytesPerRowChroma,
+		unsigned int dpte_row_height_luma,
+		unsigned int dpte_row_height_chroma,
+		/* Output */
+		double *meta_row_bw,
+		double *dpte_row_bw)
+{
+	if (DCCEnable != true) {
+		*meta_row_bw = 0;
+	} else if (SourcePixelFormat == dm_420_8 || SourcePixelFormat == dm_420_10 || SourcePixelFormat == dm_420_12 ||
+			SourcePixelFormat == dm_rgbe_alpha) {
+		*meta_row_bw = VRatio * MetaRowByteLuma / (meta_row_height_luma * LineTime) + VRatioChroma *
+				MetaRowByteChroma / (meta_row_height_chroma * LineTime);
+	} else {
+		*meta_row_bw = VRatio * MetaRowByteLuma / (meta_row_height_luma * LineTime);
+	}
+
+	if (GPUVMEnable != true) {
+		*dpte_row_bw = 0;
+	} else if (SourcePixelFormat == dm_420_8 || SourcePixelFormat == dm_420_10 || SourcePixelFormat == dm_420_12 ||
+			SourcePixelFormat == dm_rgbe_alpha) {
+		*dpte_row_bw = VRatio * PixelPTEBytesPerRowLuma / (dpte_row_height_luma * LineTime) +
+				VRatioChroma * PixelPTEBytesPerRowChroma / (dpte_row_height_chroma * LineTime);
+	} else {
+		*dpte_row_bw = VRatio * PixelPTEBytesPerRowLuma / (dpte_row_height_luma * LineTime);
+	}
+}
+
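+/*
+ * Worst-case urgent latency across the pixel-only, pixel-mixed-with-VM and
+ * VM-only cases, optionally adjusted for the fabric clock running below its
+ * reference.
+ */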
+double dml32_CalculateUrgentLatency(
+		double UrgentLatencyPixelDataOnly,
+		double UrgentLatencyPixelMixedWithVMData,
+		double UrgentLatencyVMDataOnly,
+		bool   DoUrgentLatencyAdjustment,
+		double UrgentLatencyAdjustmentFabricClockComponent,
+		double UrgentLatencyAdjustmentFabricClockReference,
+		double FabricClock)
+{
+	double   ret;
+
+	ret = dml_max3(UrgentLatencyPixelDataOnly, UrgentLatencyPixelMixedWithVMData, UrgentLatencyVMDataOnly);
+	if (DoUrgentLatencyAdjustment == true) {
+		ret = ret + UrgentLatencyAdjustmentFabricClockComponent *
+				(UrgentLatencyAdjustmentFabricClockReference / FabricClock - 1);
+	}
+	return ret;
+}
+
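+/*
+ * Burst factors express how much the cursor, luma and chroma requests must be
+ * over-fetched so that the respective buffers can hide the urgent latency;
+ * NotEnoughUrgentLatencyHiding is set when a buffer cannot hide it at all.
+ */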
+void dml32_CalculateUrgentBurstFactor(
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange,
+		unsigned int    swath_width_luma_ub,
+		unsigned int    swath_width_chroma_ub,
+		unsigned int SwathHeightY,
+		unsigned int SwathHeightC,
+		double  LineTime,
+		double  UrgentLatency,
+		double  CursorBufferSize,
+		unsigned int CursorWidth,
+		unsigned int CursorBPP,
+		double  VRatio,
+		double  VRatioC,
+		double  BytePerPixelInDETY,
+		double  BytePerPixelInDETC,
+		unsigned int    DETBufferSizeY,
+		unsigned int    DETBufferSizeC,
+		/* Output */
+		double *UrgentBurstFactorCursor,
+		double *UrgentBurstFactorLuma,
+		double *UrgentBurstFactorChroma,
+		bool   *NotEnoughUrgentLatencyHiding)
+{
+	double       LinesInDETLuma;
+	double       LinesInDETChroma;
+	unsigned int LinesInCursorBuffer;
+	double       CursorBufferSizeInTime;
+	double       DETBufferSizeInTimeLuma;
+	double       DETBufferSizeInTimeChroma;
+
+	*NotEnoughUrgentLatencyHiding = 0;
+
+	if (CursorWidth > 0) {
+		LinesInCursorBuffer = 1 << (unsigned int) dml_floor(dml_log2(CursorBufferSize * 1024.0 /
+				(CursorWidth * CursorBPP / 8.0)), 1.0);
+		if (VRatio > 0) {
+			CursorBufferSizeInTime = LinesInCursorBuffer * LineTime / VRatio;
+			if (CursorBufferSizeInTime - UrgentLatency <= 0) {
+				*NotEnoughUrgentLatencyHiding = 1;
+				*UrgentBurstFactorCursor = 0;
+			} else {
+				*UrgentBurstFactorCursor = CursorBufferSizeInTime /
+						(CursorBufferSizeInTime - UrgentLatency);
+			}
+		} else {
+			*UrgentBurstFactorCursor = 1;
+		}
+	}
+
+	LinesInDETLuma = (UseMALLForPStateChange == dm_use_mall_pstate_change_phantom_pipe ? 1024*1024 :
+			DETBufferSizeY) / BytePerPixelInDETY / swath_width_luma_ub;
+
+	if (VRatio > 0) {
+		DETBufferSizeInTimeLuma = dml_floor(LinesInDETLuma, SwathHeightY) * LineTime / VRatio;
+		if (DETBufferSizeInTimeLuma - UrgentLatency <= 0) {
+			*NotEnoughUrgentLatencyHiding = 1;
+			*UrgentBurstFactorLuma = 0;
+		} else {
+			*UrgentBurstFactorLuma = DETBufferSizeInTimeLuma / (DETBufferSizeInTimeLuma - UrgentLatency);
+		}
+	} else {
+		*UrgentBurstFactorLuma = 1;
+	}
+
+	if (BytePerPixelInDETC > 0) {
+		LinesInDETChroma = (UseMALLForPStateChange == dm_use_mall_pstate_change_phantom_pipe ?
+					1024 * 1024 : DETBufferSizeC) / BytePerPixelInDETC
+					/ swath_width_chroma_ub;
+
+		if (VRatio > 0) {
+			DETBufferSizeInTimeChroma = dml_floor(LinesInDETChroma, SwathHeightC) * LineTime / VRatio;
+			if (DETBufferSizeInTimeChroma - UrgentLatency <= 0) {
+				*NotEnoughUrgentLatencyHiding = 1;
+				*UrgentBurstFactorChroma = 0;
+			} else {
+				*UrgentBurstFactorChroma = DETBufferSizeInTimeChroma
+						/ (DETBufferSizeInTimeChroma - UrgentLatency);
+			}
+		} else {
+			*UrgentBurstFactorChroma = 1;
+		}
+	}
+} // CalculateUrgentBurstFactor
+
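+/*
+ * Minimum DCFCLK that still sustains the per-surface line delivery rate and
+ * the total read bandwidth while the display is in deep sleep.
+ */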
+void dml32_CalculateDCFCLKDeepSleep(
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int BytePerPixelY[],
+		unsigned int BytePerPixelC[],
+		double VRatio[],
+		double VRatioChroma[],
+		double SwathWidthY[],
+		double SwathWidthC[],
+		unsigned int DPPPerSurface[],
+		double HRatio[],
+		double HRatioChroma[],
+		double PixelClock[],
+		double PSCL_THROUGHPUT[],
+		double PSCL_THROUGHPUT_CHROMA[],
+		double Dppclk[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		unsigned int ReturnBusWidth,
+
+		/* Output */
+		double *DCFClkDeepSleep)
+{
+	unsigned int k;
+	double   DisplayPipeLineDeliveryTimeLuma;
+	double   DisplayPipeLineDeliveryTimeChroma;
+	double   DCFClkDeepSleepPerSurface[DC__NUM_DPP__MAX];
+	double ReadBandwidth = 0.0;
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+
+		if (VRatio[k] <= 1) {
+			DisplayPipeLineDeliveryTimeLuma = SwathWidthY[k] * DPPPerSurface[k] / HRatio[k]
+					/ PixelClock[k];
+		} else {
+			DisplayPipeLineDeliveryTimeLuma = SwathWidthY[k] / PSCL_THROUGHPUT[k] / Dppclk[k];
+		}
+		if (BytePerPixelC[k] == 0) {
+			DisplayPipeLineDeliveryTimeChroma = 0;
+		} else {
+			if (VRatioChroma[k] <= 1) {
+				DisplayPipeLineDeliveryTimeChroma = SwathWidthC[k] *
+						DPPPerSurface[k] / HRatioChroma[k] / PixelClock[k];
+			} else {
+				DisplayPipeLineDeliveryTimeChroma = SwathWidthC[k] / PSCL_THROUGHPUT_CHROMA[k]
+						/ Dppclk[k];
+			}
+		}
+
+		if (BytePerPixelC[k] > 0) {
+			DCFClkDeepSleepPerSurface[k] = dml_max(__DML_MIN_DCFCLK_FACTOR__ * SwathWidthY[k] *
+					BytePerPixelY[k] / 32.0 / DisplayPipeLineDeliveryTimeLuma,
+					__DML_MIN_DCFCLK_FACTOR__ * SwathWidthC[k] * BytePerPixelC[k] /
+					32.0 / DisplayPipeLineDeliveryTimeChroma);
+		} else {
+			DCFClkDeepSleepPerSurface[k] = __DML_MIN_DCFCLK_FACTOR__ * SwathWidthY[k] * BytePerPixelY[k] /
+					64.0 / DisplayPipeLineDeliveryTimeLuma;
+		}
+		DCFClkDeepSleepPerSurface[k] = dml_max(DCFClkDeepSleepPerSurface[k], PixelClock[k] / 16);
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, PixelClock = %f\n", __func__, k, PixelClock[k]);
+		dml_print("DML::%s: k=%d, DCFClkDeepSleepPerSurface = %f\n", __func__, k, DCFClkDeepSleepPerSurface[k]);
+#endif
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k)
+		ReadBandwidth = ReadBandwidth + ReadBandwidthLuma[k] + ReadBandwidthChroma[k];
+
+	*DCFClkDeepSleep = dml_max(8.0, __DML_MIN_DCFCLK_FACTOR__ * ReadBandwidth / (double) ReturnBusWidth);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: __DML_MIN_DCFCLK_FACTOR__ = %f\n", __func__, __DML_MIN_DCFCLK_FACTOR__);
+	dml_print("DML::%s: ReadBandwidth = %f\n", __func__, ReadBandwidth);
+	dml_print("DML::%s: ReturnBusWidth = %d\n", __func__, ReturnBusWidth);
+	dml_print("DML::%s: DCFClkDeepSleep = %f\n", __func__, *DCFClkDeepSleep);
+#endif
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k)
+		*DCFClkDeepSleep = dml_max(*DCFClkDeepSleep, DCFClkDeepSleepPerSurface[k]);
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DCFClkDeepSleep = %f (final)\n", __func__, *DCFClkDeepSleep);
+#endif
+} // CalculateDCFCLKDeepSleep
+
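+/*
+ * Writeback completion delay, derived from the output line length and the
+ * last output line that is not clamped.
+ */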
+double dml32_CalculateWriteBackDelay(
+		enum source_format_class WritebackPixelFormat,
+		double WritebackHRatio,
+		double WritebackVRatio,
+		unsigned int WritebackVTaps,
+		unsigned int         WritebackDestinationWidth,
+		unsigned int         WritebackDestinationHeight,
+		unsigned int         WritebackSourceHeight,
+		unsigned int HTotal)
+{
+	double CalculateWriteBackDelay;
+	double Line_length;
+	double Output_lines_last_notclamped;
+	double WritebackVInit;
+
+	WritebackVInit = (WritebackVRatio + WritebackVTaps + 1) / 2;
+	Line_length = dml_max((double) WritebackDestinationWidth,
+			dml_ceil((double)WritebackDestinationWidth / 6.0, 1.0) * WritebackVTaps);
+	Output_lines_last_notclamped = WritebackDestinationHeight - 1 -
+			dml_ceil(((double)WritebackSourceHeight -
+					(double) WritebackVInit) / (double)WritebackVRatio, 1.0);
+	if (Output_lines_last_notclamped < 0) {
+		CalculateWriteBackDelay = 0;
+	} else {
+		CalculateWriteBackDelay = Output_lines_last_notclamped * Line_length +
+				(HTotal - WritebackDestinationWidth) + 80;
+	}
+	return CalculateWriteBackDelay;
+}
+
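+/*
+ * For every voltage state and DPP configuration, find the lowest DCFCLK that
+ * still meets both the average and the peak prefetch bandwidth (including
+ * dynamic metadata and immediate flip requirements) and store it in
+ * DCFCLKState, capped at DCFCLKPerState.
+ */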
+void dml32_UseMinimumDCFCLK(
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		bool DRRDisplay[],
+		bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+		unsigned int MaxInterDCNTileRepeaters,
+		unsigned int MaxPrefetchMode,
+		double DRAMClockChangeLatencyFinal,
+		double FCLKChangeLatency,
+		double SREnterPlusExitTime,
+		unsigned int ReturnBusWidth,
+		unsigned int RoundTripPingLatencyCycles,
+		unsigned int ReorderingBytes,
+		unsigned int PixelChunkSizeInKByte,
+		unsigned int MetaChunkSize,
+		bool GPUVMEnable,
+		unsigned int GPUVMMaxPageTableLevels,
+		bool HostVMEnable,
+		unsigned int NumberOfActiveSurfaces,
+		double HostVMMinPageSize,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		bool DynamicMetadataVMEnabled,
+		bool ImmediateFlipRequirement,
+		bool ProgressiveToInterlaceUnitInOPP,
+		double MaxAveragePercentOfIdealSDPPortBWDisplayCanUseInNormalSystemOperation,
+		double PercentOfIdealSDPPortBWReceivedAfterUrgLatency,
+		unsigned int VTotal[],
+		unsigned int VActive[],
+		unsigned int DynamicMetadataTransmittedBytes[],
+		unsigned int DynamicMetadataLinesBeforeActiveRequired[],
+		bool Interlace[],
+		double RequiredDPPCLKPerSurface[][2][DC__NUM_DPP__MAX],
+		double RequiredDISPCLK[][2],
+		double UrgLatency[],
+		unsigned int NoOfDPP[][2][DC__NUM_DPP__MAX],
+		double ProjectedDCFClkDeepSleep[][2],
+		double MaximumVStartup[][2][DC__NUM_DPP__MAX],
+		unsigned int TotalNumberOfActiveDPP[][2],
+		unsigned int TotalNumberOfDCCActiveDPP[][2],
+		unsigned int dpte_group_bytes[],
+		double PrefetchLinesY[][2][DC__NUM_DPP__MAX],
+		double PrefetchLinesC[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_luma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_chroma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int BytePerPixelY[],
+		unsigned int BytePerPixelC[],
+		unsigned int HTotal[],
+		double PixelClock[],
+		double PDEAndMetaPTEBytesPerFrame[][2][DC__NUM_DPP__MAX],
+		double DPTEBytesPerRow[][2][DC__NUM_DPP__MAX],
+		double MetaRowBytes[][2][DC__NUM_DPP__MAX],
+		bool DynamicMetadataEnable[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double DCFCLKPerState[],
+		/* Output */
+		double DCFCLKState[][2])
+{
+	unsigned int i, j, k;
+	unsigned int     dummy1;
+	double dummy2, dummy3;
+	double   NormalEfficiency;
+	double   TotalMaxPrefetchFlipDPTERowBandwidth[DC__VOLTAGE_STATES][2];
+
+	NormalEfficiency = PercentOfIdealSDPPortBWReceivedAfterUrgLatency / 100.0;
+	for  (i = 0; i < DC__VOLTAGE_STATES; ++i) {
+		for  (j = 0; j <= 1; ++j) {
+			double PixelDCFCLKCyclesRequiredInPrefetch[DC__NUM_DPP__MAX];
+			double PrefetchPixelLinesTime[DC__NUM_DPP__MAX];
+			double DCFCLKRequiredForPeakBandwidthPerSurface[DC__NUM_DPP__MAX];
+			double DynamicMetadataVMExtraLatency[DC__NUM_DPP__MAX];
+			double MinimumTWait = 0.0;
+			double DPTEBandwidth;
+			double DCFCLKRequiredForAverageBandwidth;
+			unsigned int ExtraLatencyBytes;
+			double ExtraLatencyCycles;
+			double DCFCLKRequiredForPeakBandwidth;
+			unsigned int NoOfDPPState[DC__NUM_DPP__MAX];
+			double MinimumTvmPlus2Tr0;
+
+			TotalMaxPrefetchFlipDPTERowBandwidth[i][j] = 0;
+			for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+				TotalMaxPrefetchFlipDPTERowBandwidth[i][j] = TotalMaxPrefetchFlipDPTERowBandwidth[i][j]
+						+ NoOfDPP[i][j][k] * DPTEBytesPerRow[i][j][k]
+								/ (15.75 * HTotal[k] / PixelClock[k]);
+			}
+
+			for (k = 0; k <= NumberOfActiveSurfaces - 1; ++k)
+				NoOfDPPState[k] = NoOfDPP[i][j][k];
+
+			DPTEBandwidth = TotalMaxPrefetchFlipDPTERowBandwidth[i][j];
+			DCFCLKRequiredForAverageBandwidth = dml_max(ProjectedDCFClkDeepSleep[i][j],
+					DPTEBandwidth / NormalEfficiency / ReturnBusWidth);
+
+			ExtraLatencyBytes = dml32_CalculateExtraLatencyBytes(ReorderingBytes,
+					TotalNumberOfActiveDPP[i][j], PixelChunkSizeInKByte,
+					TotalNumberOfDCCActiveDPP[i][j], MetaChunkSize, GPUVMEnable, HostVMEnable,
+					NumberOfActiveSurfaces, NoOfDPPState, dpte_group_bytes, 1, HostVMMinPageSize,
+					HostVMMaxNonCachedPageTableLevels);
+			ExtraLatencyCycles = RoundTripPingLatencyCycles + __DML_ARB_TO_RET_DELAY__
+					+ ExtraLatencyBytes / NormalEfficiency / ReturnBusWidth;
+			for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+				double DCFCLKCyclesRequiredInPrefetch;
+				double PrefetchTime;
+
+				PixelDCFCLKCyclesRequiredInPrefetch[k] = (PrefetchLinesY[i][j][k]
+						* swath_width_luma_ub_all_states[i][j][k] * BytePerPixelY[k]
+						+ PrefetchLinesC[i][j][k] * swath_width_chroma_ub_all_states[i][j][k]
+								* BytePerPixelC[k]) / NormalEfficiency
+						/ ReturnBusWidth;
+				DCFCLKCyclesRequiredInPrefetch = 2 * ExtraLatencyCycles / NoOfDPPState[k]
+						+ PDEAndMetaPTEBytesPerFrame[i][j][k] / NormalEfficiency
+								/ NormalEfficiency / ReturnBusWidth
+								* (GPUVMMaxPageTableLevels > 2 ? 1 : 0)
+						+ 2 * DPTEBytesPerRow[i][j][k] / NormalEfficiency / NormalEfficiency
+								/ ReturnBusWidth
+						+ 2 * MetaRowBytes[i][j][k] / NormalEfficiency / ReturnBusWidth
+						+ PixelDCFCLKCyclesRequiredInPrefetch[k];
+				PrefetchPixelLinesTime[k] = dml_max(PrefetchLinesY[i][j][k], PrefetchLinesC[i][j][k])
+						* HTotal[k] / PixelClock[k];
+				DynamicMetadataVMExtraLatency[k] = (GPUVMEnable == true &&
+						DynamicMetadataEnable[k] == true && DynamicMetadataVMEnabled == true) ?
+						UrgLatency[i] * GPUVMMaxPageTableLevels *
+						(HostVMEnable == true ? HostVMMaxNonCachedPageTableLevels + 1 : 1) : 0;
+
+				MinimumTWait = dml32_CalculateTWait(MaxPrefetchMode,
+						UseMALLForPStateChange[k],
+						SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+						DRRDisplay[k],
+						DRAMClockChangeLatencyFinal,
+						FCLKChangeLatency,
+						UrgLatency[i],
+						SREnterPlusExitTime);
+
+				PrefetchTime = (MaximumVStartup[i][j][k] - 1) * HTotal[k] / PixelClock[k] -
+						MinimumTWait - UrgLatency[i] *
+						((GPUVMMaxPageTableLevels <= 2 ? GPUVMMaxPageTableLevels :
+						GPUVMMaxPageTableLevels - 2) *  (HostVMEnable == true ?
+						HostVMMaxNonCachedPageTableLevels + 1 : 1) - 1) -
+						DynamicMetadataVMExtraLatency[k];
+
+				if (PrefetchTime > 0) {
+					double ExpectedVRatioPrefetch;
+
+					ExpectedVRatioPrefetch = PrefetchPixelLinesTime[k] / (PrefetchTime *
+							PixelDCFCLKCyclesRequiredInPrefetch[k] /
+							DCFCLKCyclesRequiredInPrefetch);
+					DCFCLKRequiredForPeakBandwidthPerSurface[k] = NoOfDPPState[k] *
+							PixelDCFCLKCyclesRequiredInPrefetch[k] /
+							PrefetchPixelLinesTime[k] *
+							dml_max(1.0, ExpectedVRatioPrefetch) *
+							dml_max(1.0, ExpectedVRatioPrefetch / 4);
+					if (HostVMEnable == true || ImmediateFlipRequirement == true) {
+						DCFCLKRequiredForPeakBandwidthPerSurface[k] =
+								DCFCLKRequiredForPeakBandwidthPerSurface[k] +
+								NoOfDPPState[k] * DPTEBandwidth / NormalEfficiency /
+								NormalEfficiency / ReturnBusWidth;
+					}
+				} else {
+					DCFCLKRequiredForPeakBandwidthPerSurface[k] = DCFCLKPerState[i];
+				}
+				if (DynamicMetadataEnable[k] == true) {
+					double TSetupPipe;
+					double TdmbfPipe;
+					double TdmsksPipe;
+					double TdmecPipe;
+					double AllowedTimeForUrgentExtraLatency;
+
+					dml32_CalculateVUpdateAndDynamicMetadataParameters(
+							MaxInterDCNTileRepeaters,
+							RequiredDPPCLKPerSurface[i][j][k],
+							RequiredDISPCLK[i][j],
+							ProjectedDCFClkDeepSleep[i][j],
+							PixelClock[k],
+							HTotal[k],
+							VTotal[k] - VActive[k],
+							DynamicMetadataTransmittedBytes[k],
+							DynamicMetadataLinesBeforeActiveRequired[k],
+							Interlace[k],
+							ProgressiveToInterlaceUnitInOPP,
+
+							/* output */
+							&TSetupPipe,
+							&TdmbfPipe,
+							&TdmecPipe,
+							&TdmsksPipe,
+							&dummy1,
+							&dummy2,
+							&dummy3);
+					AllowedTimeForUrgentExtraLatency = MaximumVStartup[i][j][k] * HTotal[k] /
+							PixelClock[k] - MinimumTWait - TSetupPipe - TdmbfPipe -
+							TdmecPipe - TdmsksPipe - DynamicMetadataVMExtraLatency[k];
+					if (AllowedTimeForUrgentExtraLatency > 0)
+						DCFCLKRequiredForPeakBandwidthPerSurface[k] =
+								dml_max(DCFCLKRequiredForPeakBandwidthPerSurface[k],
+								ExtraLatencyCycles / AllowedTimeForUrgentExtraLatency);
+					else
+						DCFCLKRequiredForPeakBandwidthPerSurface[k] = DCFCLKPerState[i];
+				}
+			}
+			DCFCLKRequiredForPeakBandwidth = 0;
+			for (k = 0; k <= NumberOfActiveSurfaces - 1; ++k) {
+				DCFCLKRequiredForPeakBandwidth = DCFCLKRequiredForPeakBandwidth +
+						DCFCLKRequiredForPeakBandwidthPerSurface[k];
+			}
+			MinimumTvmPlus2Tr0 = UrgLatency[i] * (GPUVMEnable == true ?
+					(HostVMEnable == true ? (GPUVMMaxPageTableLevels + 2) *
+					(HostVMMaxNonCachedPageTableLevels + 1) - 1 : GPUVMMaxPageTableLevels + 1) : 0);
+			for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+				double MaximumTvmPlus2Tr0PlusTsw;
+
+				MaximumTvmPlus2Tr0PlusTsw = (MaximumVStartup[i][j][k] - 2) * HTotal[k] /
+						PixelClock[k] - MinimumTWait - DynamicMetadataVMExtraLatency[k];
+				if (MaximumTvmPlus2Tr0PlusTsw <= MinimumTvmPlus2Tr0 + PrefetchPixelLinesTime[k] / 4) {
+					DCFCLKRequiredForPeakBandwidth = DCFCLKPerState[i];
+				} else {
+					DCFCLKRequiredForPeakBandwidth = dml_max3(DCFCLKRequiredForPeakBandwidth,
+							2 * ExtraLatencyCycles / (MaximumTvmPlus2Tr0PlusTsw -
+								MinimumTvmPlus2Tr0 -
+								PrefetchPixelLinesTime[k] / 4),
+							(2 * ExtraLatencyCycles +
+								PixelDCFCLKCyclesRequiredInPrefetch[k]) /
+								(MaximumTvmPlus2Tr0PlusTsw - MinimumTvmPlus2Tr0));
+				}
+			}
+			DCFCLKState[i][j] = dml_min(DCFCLKPerState[i], 1.05 *
+					dml_max(DCFCLKRequiredForAverageBandwidth, DCFCLKRequiredForPeakBandwidth));
+		}
+	}
+}
+
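+/*
+ * Outstanding bytes that contribute to the extra return latency: the
+ * reordering bytes, one pixel chunk per active DPP, one meta chunk per
+ * DCC-active DPP and, with GPUVM enabled, the dpte groups scaled by the host
+ * VM page table levels and inefficiency factor.
+ */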
+unsigned int dml32_CalculateExtraLatencyBytes(unsigned int ReorderingBytes,
+		unsigned int TotalNumberOfActiveDPP,
+		unsigned int PixelChunkSizeInKByte,
+		unsigned int TotalNumberOfDCCActiveDPP,
+		unsigned int MetaChunkSize,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int NumberOfDPP[],
+		unsigned int dpte_group_bytes[],
+		double HostVMInefficiencyFactor,
+		double HostVMMinPageSize,
+		unsigned int HostVMMaxNonCachedPageTableLevels)
+{
+	unsigned int k;
+	double   ret;
+	unsigned int  HostVMDynamicLevels;
+
+	if (GPUVMEnable == true && HostVMEnable == true) {
+		if (HostVMMinPageSize < 2048)
+			HostVMDynamicLevels = HostVMMaxNonCachedPageTableLevels;
+		else if (HostVMMinPageSize >= 2048 && HostVMMinPageSize < 1048576)
+			HostVMDynamicLevels = dml_max(0, (int) HostVMMaxNonCachedPageTableLevels - 1);
+		else
+			HostVMDynamicLevels = dml_max(0, (int) HostVMMaxNonCachedPageTableLevels - 2);
+	} else {
+		HostVMDynamicLevels = 0;
+	}
+
+	ret = ReorderingBytes + (TotalNumberOfActiveDPP * PixelChunkSizeInKByte +
+			TotalNumberOfDCCActiveDPP * MetaChunkSize) * 1024.0;
+
+	if (GPUVMEnable == true) {
+		for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+			ret = ret + NumberOfDPP[k] * dpte_group_bytes[k] *
+					(1 + 8 * HostVMDynamicLevels) * HostVMInefficiencyFactor;
+		}
+	}
+	return ret;
+}
+
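+/*
+ * VUPDATE/VREADY positions and the dynamic metadata timing budget (TSetup,
+ * Tdmbf, Tdmec, Tdmsks), derived from the inter-tile repeater delay and the
+ * DPP, DISP, deep-sleep DCF and pixel clocks.
+ */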
+void dml32_CalculateVUpdateAndDynamicMetadataParameters(
+		unsigned int MaxInterDCNTileRepeaters,
+		double Dppclk,
+		double Dispclk,
+		double DCFClkDeepSleep,
+		double PixelClock,
+		unsigned int HTotal,
+		unsigned int VBlank,
+		unsigned int DynamicMetadataTransmittedBytes,
+		unsigned int DynamicMetadataLinesBeforeActiveRequired,
+		unsigned int InterlaceEnable,
+		bool ProgressiveToInterlaceUnitInOPP,
+
+		/* output */
+		double *TSetup,
+		double *Tdmbf,
+		double *Tdmec,
+		double *Tdmsks,
+		unsigned int *VUpdateOffsetPix,
+		double *VUpdateWidthPix,
+		double *VReadyOffsetPix)
+{
+	double TotalRepeaterDelayTime;
+
+	TotalRepeaterDelayTime = MaxInterDCNTileRepeaters * (2 / Dppclk + 3 / Dispclk);
+	*VUpdateWidthPix  =
+			dml_ceil((14.0 / DCFClkDeepSleep + 12.0 / Dppclk + TotalRepeaterDelayTime) * PixelClock, 1.0);
+	*VReadyOffsetPix  = dml_ceil(dml_max(150.0 / Dppclk,
+			TotalRepeaterDelayTime + 20.0 / DCFClkDeepSleep + 10.0 / Dppclk) * PixelClock, 1.0);
+	*VUpdateOffsetPix = dml_ceil(HTotal / 4.0, 1.0);
+	*TSetup = (*VUpdateOffsetPix + *VUpdateWidthPix + *VReadyOffsetPix) / PixelClock;
+	*Tdmbf = DynamicMetadataTransmittedBytes / 4.0 / Dispclk;
+	*Tdmec = HTotal / PixelClock;
+
+	if (DynamicMetadataLinesBeforeActiveRequired == 0)
+		*Tdmsks = VBlank * HTotal / PixelClock / 2.0;
+	else
+		*Tdmsks = DynamicMetadataLinesBeforeActiveRequired * HTotal / PixelClock;
+
+	if (InterlaceEnable == 1 && ProgressiveToInterlaceUnitInOPP == false)
+		*Tdmsks = *Tdmsks / 2;
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: VUpdateWidthPix = %f\n", __func__, *VUpdateWidthPix);
+	dml_print("DML::%s: VReadyOffsetPix = %f\n", __func__, *VReadyOffsetPix);
+	dml_print("DML::%s: VUpdateOffsetPix = %d\n", __func__, *VUpdateOffsetPix);
+
+	dml_print("DML::%s: DynamicMetadataLinesBeforeActiveRequired = %d\n",
+			__func__, DynamicMetadataLinesBeforeActiveRequired);
+	dml_print("DML::%s: VBlank = %d\n", __func__, VBlank);
+	dml_print("DML::%s: HTotal = %d\n", __func__, HTotal);
+	dml_print("DML::%s: PixelClock = %f\n", __func__, PixelClock);
+	dml_print("DML::%s: Tdmsks = %f\n", __func__, *Tdmsks);
+#endif
+}
+
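+/*
+ * Wait time to budget before prefetch, depending on the prefetch mode and on
+ * whether DRAM clock change, FCLK change or self refresh latency has to be
+ * hidden (MALL and phantom-pipe surfaces take the shorter waits).
+ */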
+double dml32_CalculateTWait(
+		unsigned int PrefetchMode,
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange,
+		bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+		bool DRRDisplay,
+		double DRAMClockChangeLatency,
+		double FCLKChangeLatency,
+		double UrgentLatency,
+		double SREnterPlusExitTime)
+{
+	double TWait = 0.0;
+
+	if (PrefetchMode == 0 &&
+			!(UseMALLForPStateChange == dm_use_mall_pstate_change_full_frame) &&
+			!(UseMALLForPStateChange == dm_use_mall_pstate_change_sub_viewport) &&
+			!(UseMALLForPStateChange == dm_use_mall_pstate_change_phantom_pipe) &&
+			!(SynchronizeDRRDisplaysForUCLKPStateChangeFinal && DRRDisplay)) {
+		TWait = dml_max3(DRAMClockChangeLatency + UrgentLatency, SREnterPlusExitTime, UrgentLatency);
+	} else if (PrefetchMode <= 1 && !(UseMALLForPStateChange == dm_use_mall_pstate_change_phantom_pipe)) {
+		TWait = dml_max3(FCLKChangeLatency + UrgentLatency, SREnterPlusExitTime, UrgentLatency);
+	} else if (PrefetchMode <= 2 && !(UseMALLForPStateChange == dm_use_mall_pstate_change_phantom_pipe)) {
+		TWait = dml_max(SREnterPlusExitTime, UrgentLatency);
+	} else {
+		TWait = UrgentLatency;
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: PrefetchMode = %d\n", __func__, PrefetchMode);
+	dml_print("DML::%s: TWait = %f\n", __func__, TWait);
+#endif
+	return TWait;
+} // CalculateTWait
+
+// Function: get_return_bw_mbps
+// Megabytes per second
+double dml32_get_return_bw_mbps(const soc_bounding_box_st *soc,
+		const int VoltageLevel,
+		const bool HostVMEnable,
+		const double DCFCLK,
+		const double FabricClock,
+		const double DRAMSpeed)
+{
+	double ReturnBW = 0.;
+	double IdealSDPPortBandwidth    = soc->return_bus_width_bytes /*mode_lib->vba.ReturnBusWidth*/ * DCFCLK;
+	double IdealFabricBandwidth     = FabricClock * soc->fabric_datapath_to_dcn_data_return_bytes;
+	double IdealDRAMBandwidth       = DRAMSpeed * soc->num_chans * soc->dram_channel_width_bytes;
+	double PixelDataOnlyReturnBW    = dml_min3(IdealSDPPortBandwidth * soc->pct_ideal_sdp_bw_after_urgent / 100,
+			IdealFabricBandwidth * soc->pct_ideal_fabric_bw_after_urgent / 100,
+			IdealDRAMBandwidth * (VoltageLevel < 2 ? soc->pct_ideal_dram_bw_after_urgent_strobe  :
+					soc->pct_ideal_dram_sdp_bw_after_urgent_pixel_only) / 100);
+	double PixelMixedWithVMDataReturnBW = dml_min3(IdealSDPPortBandwidth * soc->pct_ideal_sdp_bw_after_urgent / 100,
+			IdealFabricBandwidth * soc->pct_ideal_fabric_bw_after_urgent / 100,
+			IdealDRAMBandwidth * (VoltageLevel < 2 ? soc->pct_ideal_dram_bw_after_urgent_strobe :
+					soc->pct_ideal_dram_sdp_bw_after_urgent_pixel_only) / 100);
+
+	if (HostVMEnable != true)
+		ReturnBW = PixelDataOnlyReturnBW;
+	else
+		ReturnBW = PixelMixedWithVMDataReturnBW;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: VoltageLevel = %d\n", __func__, VoltageLevel);
+	dml_print("DML::%s: HostVMEnable = %d\n", __func__, HostVMEnable);
+	dml_print("DML::%s: DCFCLK       = %f\n", __func__, DCFCLK);
+	dml_print("DML::%s: FabricClock  = %f\n", __func__, FabricClock);
+	dml_print("DML::%s: DRAMSpeed    = %f\n", __func__, DRAMSpeed);
+	dml_print("DML::%s: IdealSDPPortBandwidth        = %f\n", __func__, IdealSDPPortBandwidth);
+	dml_print("DML::%s: IdealFabricBandwidth         = %f\n", __func__, IdealFabricBandwidth);
+	dml_print("DML::%s: IdealDRAMBandwidth           = %f\n", __func__, IdealDRAMBandwidth);
+	dml_print("DML::%s: PixelDataOnlyReturnBW        = %f\n", __func__, PixelDataOnlyReturnBW);
+	dml_print("DML::%s: PixelMixedWithVMDataReturnBW = %f\n", __func__, PixelMixedWithVMDataReturnBW);
+	dml_print("DML::%s: ReturnBW                     = %f MBps\n", __func__, ReturnBW);
+#endif
+	return ReturnBW;
+}
+
+// Function: get_return_bw_mbps_vm_only
+// Megabytes per second
+double dml32_get_return_bw_mbps_vm_only(const soc_bounding_box_st *soc,
+		const int VoltageLevel,
+		const double DCFCLK,
+		const double FabricClock,
+		const double DRAMSpeed)
+{
+	double VMDataOnlyReturnBW = dml_min3(
+			soc->return_bus_width_bytes * DCFCLK * soc->pct_ideal_sdp_bw_after_urgent / 100.0,
+			FabricClock * soc->fabric_datapath_to_dcn_data_return_bytes
+					* soc->pct_ideal_sdp_bw_after_urgent / 100.0,
+			DRAMSpeed * soc->num_chans * soc->dram_channel_width_bytes
+					* (VoltageLevel < 2 ?
+							soc->pct_ideal_dram_bw_after_urgent_strobe :
+							soc->pct_ideal_dram_sdp_bw_after_urgent_vm_only) / 100.0);
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: VoltageLevel = %d\n", __func__, VoltageLevel);
+	dml_print("DML::%s: DCFCLK       = %f\n", __func__, DCFCLK);
+	dml_print("DML::%s: FabricClock  = %f\n", __func__, FabricClock);
+	dml_print("DML::%s: DRAMSpeed    = %f\n", __func__, DRAMSpeed);
+	dml_print("DML::%s: VMDataOnlyReturnBW = %f\n", __func__, VMDataOnlyReturnBW);
+#endif
+	return VMDataOnlyReturnBW;
+}
+
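+/*
+ * Extra latency in time: round-trip ping plus the arbiter-to-return delay in
+ * DCFCLK cycles, plus the extra latency bytes divided by the return
+ * bandwidth.
+ */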
+double dml32_CalculateExtraLatency(
+		unsigned int RoundTripPingLatencyCycles,
+		unsigned int ReorderingBytes,
+		double DCFCLK,
+		unsigned int TotalNumberOfActiveDPP,
+		unsigned int PixelChunkSizeInKByte,
+		unsigned int TotalNumberOfDCCActiveDPP,
+		unsigned int MetaChunkSize,
+		double ReturnBW,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int NumberOfDPP[],
+		unsigned int dpte_group_bytes[],
+		double HostVMInefficiencyFactor,
+		double HostVMMinPageSize,
+		unsigned int HostVMMaxNonCachedPageTableLevels)
+{
+	double ExtraLatencyBytes;
+	double ExtraLatency;
+
+	ExtraLatencyBytes = dml32_CalculateExtraLatencyBytes(
+			ReorderingBytes,
+			TotalNumberOfActiveDPP,
+			PixelChunkSizeInKByte,
+			TotalNumberOfDCCActiveDPP,
+			MetaChunkSize,
+			GPUVMEnable,
+			HostVMEnable,
+			NumberOfActiveSurfaces,
+			NumberOfDPP,
+			dpte_group_bytes,
+			HostVMInefficiencyFactor,
+			HostVMMinPageSize,
+			HostVMMaxNonCachedPageTableLevels);
+
+	ExtraLatency = (RoundTripPingLatencyCycles + __DML_ARB_TO_RET_DELAY__) / DCFCLK + ExtraLatencyBytes / ReturnBW;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: RoundTripPingLatencyCycles=%d\n", __func__, RoundTripPingLatencyCycles);
+	dml_print("DML::%s: DCFCLK=%f\n", __func__, DCFCLK);
+	dml_print("DML::%s: ExtraLatencyBytes=%f\n", __func__, ExtraLatencyBytes);
+	dml_print("DML::%s: ReturnBW=%f\n", __func__, ReturnBW);
+	dml_print("DML::%s: ExtraLatency=%f\n", __func__, ExtraLatency);
+#endif
+
+	return ExtraLatency;
+} // CalculateExtraLatency
+
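+/*
+ * Build the prefetch schedule for one pipe: compute the scaler/DSC pipeline
+ * delay (DSTX/DSTYAfterScaler) and the dynamic metadata budget, then solve
+ * for the destination lines and bandwidth needed to fetch the frame PTEs,
+ * the meta/dpte rows and the pixel swaths before active, using either the
+ * oto or the equalized bandwidth.  The bool return flags a schedule failure.
+ */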
+bool dml32_CalculatePrefetchSchedule(
+		double HostVMInefficiencyFactor,
+		DmlPipe *myPipe,
+		unsigned int DSCDelay,
+		double DPPCLKDelaySubtotalPlusCNVCFormater,
+		double DPPCLKDelaySCL,
+		double DPPCLKDelaySCLLBOnly,
+		double DPPCLKDelayCNVCCursor,
+		double DISPCLKDelaySubtotal,
+		unsigned int DPP_RECOUT_WIDTH,
+		enum output_format_class OutputFormat,
+		unsigned int MaxInterDCNTileRepeaters,
+		unsigned int VStartup,
+		unsigned int MaxVStartup,
+		unsigned int GPUVMPageTableLevels,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		double HostVMMinPageSize,
+		bool DynamicMetadataEnable,
+		bool DynamicMetadataVMEnabled,
+		int DynamicMetadataLinesBeforeActiveRequired,
+		unsigned int DynamicMetadataTransmittedBytes,
+		double UrgentLatency,
+		double UrgentExtraLatency,
+		double TCalc,
+		unsigned int PDEAndMetaPTEBytesFrame,
+		unsigned int MetaRowByte,
+		unsigned int PixelPTEBytesPerRow,
+		double PrefetchSourceLinesY,
+		unsigned int SwathWidthY,
+		unsigned int VInitPreFillY,
+		unsigned int MaxNumSwathY,
+		double PrefetchSourceLinesC,
+		unsigned int SwathWidthC,
+		unsigned int VInitPreFillC,
+		unsigned int MaxNumSwathC,
+		unsigned int swath_width_luma_ub,
+		unsigned int swath_width_chroma_ub,
+		unsigned int SwathHeightY,
+		unsigned int SwathHeightC,
+		double TWait,
+		/* Output */
+		double   *DSTXAfterScaler,
+		double   *DSTYAfterScaler,
+		double *DestinationLinesForPrefetch,
+		double *PrefetchBandwidth,
+		double *DestinationLinesToRequestVMInVBlank,
+		double *DestinationLinesToRequestRowInVBlank,
+		double *VRatioPrefetchY,
+		double *VRatioPrefetchC,
+		double *RequiredPrefetchPixDataBWLuma,
+		double *RequiredPrefetchPixDataBWChroma,
+		bool   *NotEnoughTimeForDynamicMetadata,
+		double *Tno_bw,
+		double *prefetch_vmrow_bw,
+		double *Tdmdl_vm,
+		double *Tdmdl,
+		double *TSetup,
+		unsigned int   *VUpdateOffsetPix,
+		double   *VUpdateWidthPix,
+		double   *VReadyOffsetPix)
+{
+	bool MyError = false;
+	unsigned int DPPCycles, DISPCLKCycles;
+	double DSTTotalPixelsAfterScaler;
+	double LineTime;
+	double dst_y_prefetch_equ;
+	double prefetch_bw_oto;
+	double Tvm_oto;
+	double Tr0_oto;
+	double Tvm_oto_lines;
+	double Tr0_oto_lines;
+	double dst_y_prefetch_oto;
+	double TimeForFetchingMetaPTE = 0;
+	double TimeForFetchingRowInVBlank = 0;
+	double LinesToRequestPrefetchPixelData = 0;
+	unsigned int HostVMDynamicLevelsTrips;
+	double  trip_to_mem;
+	double  Tvm_trips;
+	double  Tr0_trips;
+	double  Tvm_trips_rounded;
+	double  Tr0_trips_rounded;
+	double  Lsw_oto;
+	double  Tpre_rounded;
+	double  prefetch_bw_equ;
+	double  Tvm_equ;
+	double  Tr0_equ;
+	double  Tdmbf;
+	double  Tdmec;
+	double  Tdmsks;
+	double  prefetch_sw_bytes;
+	double  bytes_pp;
+	double  dep_bytes;
+	unsigned int max_vratio_pre = __DML_MAX_VRATIO_PRE__;
+	double  min_Lsw;
+	double  Tsw_est1 = 0;
+	double  Tsw_est3 = 0;
+
+	if (GPUVMEnable == true && HostVMEnable == true)
+		HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels;
+	else
+		HostVMDynamicLevelsTrips = 0;
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: GPUVMEnable = %d\n", __func__, GPUVMEnable);
+	dml_print("DML::%s: GPUVMPageTableLevels = %d\n", __func__, GPUVMPageTableLevels);
+	dml_print("DML::%s: DCCEnable = %d\n", __func__, myPipe->DCCEnable);
+	dml_print("DML::%s: HostVMEnable=%d HostVMInefficiencyFactor=%f\n",
+			__func__, HostVMEnable, HostVMInefficiencyFactor);
+#endif
+	dml32_CalculateVUpdateAndDynamicMetadataParameters(
+			MaxInterDCNTileRepeaters,
+			myPipe->Dppclk,
+			myPipe->Dispclk,
+			myPipe->DCFClkDeepSleep,
+			myPipe->PixelClock,
+			myPipe->HTotal,
+			myPipe->VBlank,
+			DynamicMetadataTransmittedBytes,
+			DynamicMetadataLinesBeforeActiveRequired,
+			myPipe->InterlaceEnable,
+			myPipe->ProgressiveToInterlaceUnitInOPP,
+
+			/* output */
+			TSetup,
+			&Tdmbf,
+			&Tdmec,
+			&Tdmsks,
+			VUpdateOffsetPix,
+			VUpdateWidthPix,
+			VReadyOffsetPix);
+
+	LineTime = myPipe->HTotal / myPipe->PixelClock;
+	trip_to_mem = UrgentLatency;
+	Tvm_trips = UrgentExtraLatency + trip_to_mem * (GPUVMPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1);
+
+	if (DynamicMetadataVMEnabled == true)
+		*Tdmdl = TWait + Tvm_trips + trip_to_mem;
+	else
+		*Tdmdl = TWait + UrgentExtraLatency;
+
+#ifdef __DML_VBA_ALLOW_DELTA__
+	if (DynamicMetadataEnable == false)
+		*Tdmdl = 0.0;
+#endif
+
+	if (DynamicMetadataEnable == true) {
+		if (VStartup * LineTime < *TSetup + *Tdmdl + Tdmbf + Tdmec + Tdmsks) {
+			*NotEnoughTimeForDynamicMetadata = true;
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: Not Enough Time for Dynamic Meta!\n", __func__);
+			dml_print("DML::%s: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n",
+					__func__, Tdmbf);
+			dml_print("DML::%s: Tdmec: %fus - time dio takes to transfer dmd\n", __func__, Tdmec);
+			dml_print("DML::%s: Tdmsks: %fus - time before active dmd must complete transmission at dio\n",
+					__func__, Tdmsks);
+			dml_print("DML::%s: Tdmdl: %fus - time for fabric to become ready and fetch dmd\n",
+					__func__, *Tdmdl);
+#endif
+		} else {
+			*NotEnoughTimeForDynamicMetadata = false;
+		}
+	} else {
+		*NotEnoughTimeForDynamicMetadata = false;
+	}
+
+	*Tdmdl_vm =  (DynamicMetadataEnable == true && DynamicMetadataVMEnabled == true &&
+			GPUVMEnable == true ? TWait + Tvm_trips : 0);
+
+	if (myPipe->ScalerEnabled)
+		DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCL;
+	else
+		DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCLLBOnly;
+
+	DPPCycles = DPPCycles + myPipe->NumberOfCursors * DPPCLKDelayCNVCCursor;
+
+	DISPCLKCycles = DISPCLKDelaySubtotal;
+
+	if (myPipe->Dppclk == 0.0 || myPipe->Dispclk == 0.0)
+		return true;
+
+	*DSTXAfterScaler = DPPCycles * myPipe->PixelClock / myPipe->Dppclk + DISPCLKCycles *
+			myPipe->PixelClock / myPipe->Dispclk + DSCDelay;
+
+	*DSTXAfterScaler = *DSTXAfterScaler + (myPipe->ODMMode != dm_odm_combine_mode_disabled ? 18 : 0)
+			+ (myPipe->DPPPerSurface - 1) * DPP_RECOUT_WIDTH
+			+ ((myPipe->ODMMode == dm_odm_split_mode_1to2 || myPipe->ODMMode == dm_odm_mode_mso_1to2) ?
+					myPipe->HActive / 2 : 0)
+			+ ((myPipe->ODMMode == dm_odm_mode_mso_1to4) ? myPipe->HActive * 3 / 4 : 0);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DPPCycles: %d\n", __func__, DPPCycles);
+	dml_print("DML::%s: PixelClock: %f\n", __func__, myPipe->PixelClock);
+	dml_print("DML::%s: Dppclk: %f\n", __func__, myPipe->Dppclk);
+	dml_print("DML::%s: DISPCLKCycles: %d\n", __func__, DISPCLKCycles);
+	dml_print("DML::%s: DISPCLK: %f\n", __func__,  myPipe->Dispclk);
+	dml_print("DML::%s: DSCDelay: %d\n", __func__,  DSCDelay);
+	dml_print("DML::%s: ODMMode: %d\n", __func__,  myPipe->ODMMode);
+	dml_print("DML::%s: DPP_RECOUT_WIDTH: %d\n", __func__, DPP_RECOUT_WIDTH);
+	dml_print("DML::%s: DSTXAfterScaler: %f\n", __func__,  *DSTXAfterScaler);
+#endif
+
+	if (OutputFormat == dm_420 || (myPipe->InterlaceEnable && myPipe->ProgressiveToInterlaceUnitInOPP))
+		*DSTYAfterScaler = 1;
+	else
+		*DSTYAfterScaler = 0;
+
+	DSTTotalPixelsAfterScaler = *DSTYAfterScaler * myPipe->HTotal + *DSTXAfterScaler;
+	*DSTYAfterScaler = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1);
+	*DSTXAfterScaler = DSTTotalPixelsAfterScaler - ((double) (*DSTYAfterScaler * myPipe->HTotal));
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DSTXAfterScaler: %f (final)\n", __func__,  *DSTXAfterScaler);
+	dml_print("DML::%s: DSTYAfterScaler: %f (final)\n", __func__, *DSTYAfterScaler);
+#endif
+
+	MyError = false;
+
+	Tr0_trips = trip_to_mem * (HostVMDynamicLevelsTrips + 1);
+
+	if (GPUVMEnable == true) {
+		Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1.0) / 4.0 * LineTime;
+		Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1.0) / 4.0 * LineTime;
+		if (GPUVMPageTableLevels >= 3) {
+			*Tno_bw = UrgentExtraLatency + trip_to_mem *
+					(double) ((GPUVMPageTableLevels - 2) * (HostVMDynamicLevelsTrips + 1) - 1);
+		} else if (GPUVMPageTableLevels == 1 && myPipe->DCCEnable != true) {
+			Tr0_trips_rounded = dml_ceil(4.0 * UrgentExtraLatency / LineTime, 1.0) /
+					4.0 * LineTime; // VBA_ERROR
+			*Tno_bw = UrgentExtraLatency;
+		} else {
+			*Tno_bw = 0;
+		}
+	} else if (myPipe->DCCEnable == true) {
+		Tvm_trips_rounded = LineTime / 4.0;
+		Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1.0) / 4.0 * LineTime;
+		*Tno_bw = 0;
+	} else {
+		Tvm_trips_rounded = LineTime / 4.0;
+		Tr0_trips_rounded = LineTime / 2.0;
+		*Tno_bw = 0;
+	}
+	Tvm_trips_rounded = dml_max(Tvm_trips_rounded, LineTime / 4.0);
+	Tr0_trips_rounded = dml_max(Tr0_trips_rounded, LineTime / 4.0);
+
+	if (myPipe->SourcePixelFormat == dm_420_8 || myPipe->SourcePixelFormat == dm_420_10
+			|| myPipe->SourcePixelFormat == dm_420_12) {
+		bytes_pp = myPipe->BytePerPixelY + myPipe->BytePerPixelC / 4;
+	} else {
+		bytes_pp = myPipe->BytePerPixelY + myPipe->BytePerPixelC;
+	}
+
+	prefetch_sw_bytes = PrefetchSourceLinesY * swath_width_luma_ub * myPipe->BytePerPixelY
+			+ PrefetchSourceLinesC * swath_width_chroma_ub * myPipe->BytePerPixelC;
+	prefetch_bw_oto = dml_max(bytes_pp * myPipe->PixelClock / myPipe->DPPPerSurface,
+			prefetch_sw_bytes / (dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC) * LineTime));
+
+	min_Lsw = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC) / max_vratio_pre;
+	min_Lsw = dml_max(min_Lsw, 1.0);
+	Lsw_oto = dml_ceil(4.0 * dml_max(prefetch_sw_bytes / prefetch_bw_oto / LineTime, min_Lsw), 1.0) / 4.0;
+
+	if (GPUVMEnable == true) {
+		Tvm_oto = dml_max3(
+				Tvm_trips,
+				*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto,
+				LineTime / 4.0);
+	} else
+		Tvm_oto = LineTime / 4.0;
+
+	if ((GPUVMEnable == true || myPipe->DCCEnable == true)) {
+		Tr0_oto = dml_max4(
+				Tr0_trips,
+				(MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto,
+				(LineTime - Tvm_oto)/2.0,
+				LineTime / 4.0);
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: Tr0_oto max0 = %f\n", __func__,
+				(MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto);
+		dml_print("DML::%s: Tr0_oto max1 = %f\n", __func__, Tr0_trips);
+		dml_print("DML::%s: Tr0_oto max2 = %f\n", __func__, LineTime - Tvm_oto);
+		dml_print("DML::%s: Tr0_oto max3 = %f\n", __func__, LineTime / 4);
+#endif
+	} else
+		Tr0_oto = (LineTime - Tvm_oto) / 2.0;
+
+	Tvm_oto_lines = dml_ceil(4.0 * Tvm_oto / LineTime, 1) / 4.0;
+	Tr0_oto_lines = dml_ceil(4.0 * Tr0_oto / LineTime, 1) / 4.0;
+	dst_y_prefetch_oto = Tvm_oto_lines + 2 * Tr0_oto_lines + Lsw_oto;
+
+	dst_y_prefetch_equ = VStartup - (*TSetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime -
+			(*DSTYAfterScaler + (double) *DSTXAfterScaler / (double) myPipe->HTotal);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: HTotal = %d\n", __func__, myPipe->HTotal);
+	dml_print("DML::%s: min_Lsw = %f\n", __func__, min_Lsw);
+	dml_print("DML::%s: *Tno_bw = %f\n", __func__, *Tno_bw);
+	dml_print("DML::%s: UrgentExtraLatency = %f\n", __func__, UrgentExtraLatency);
+	dml_print("DML::%s: trip_to_mem = %f\n", __func__, trip_to_mem);
+	dml_print("DML::%s: BytePerPixelY = %d\n", __func__, myPipe->BytePerPixelY);
+	dml_print("DML::%s: PrefetchSourceLinesY = %f\n", __func__, PrefetchSourceLinesY);
+	dml_print("DML::%s: swath_width_luma_ub = %d\n", __func__, swath_width_luma_ub);
+	dml_print("DML::%s: BytePerPixelC = %d\n", __func__, myPipe->BytePerPixelC);
+	dml_print("DML::%s: PrefetchSourceLinesC = %f\n", __func__, PrefetchSourceLinesC);
+	dml_print("DML::%s: swath_width_chroma_ub = %d\n", __func__, swath_width_chroma_ub);
+	dml_print("DML::%s: prefetch_sw_bytes = %f\n", __func__, prefetch_sw_bytes);
+	dml_print("DML::%s: bytes_pp = %f\n", __func__, bytes_pp);
+	dml_print("DML::%s: PDEAndMetaPTEBytesFrame = %d\n", __func__, PDEAndMetaPTEBytesFrame);
+	dml_print("DML::%s: MetaRowByte = %d\n", __func__, MetaRowByte);
+	dml_print("DML::%s: PixelPTEBytesPerRow = %d\n", __func__, PixelPTEBytesPerRow);
+	dml_print("DML::%s: HostVMInefficiencyFactor = %f\n", __func__, HostVMInefficiencyFactor);
+	dml_print("DML::%s: Tvm_trips = %f\n", __func__, Tvm_trips);
+	dml_print("DML::%s: Tr0_trips = %f\n", __func__, Tr0_trips);
+	dml_print("DML::%s: prefetch_bw_oto = %f\n", __func__, prefetch_bw_oto);
+	dml_print("DML::%s: Tr0_oto = %f\n", __func__, Tr0_oto);
+	dml_print("DML::%s: Tvm_oto = %f\n", __func__, Tvm_oto);
+	dml_print("DML::%s: Tvm_oto_lines = %f\n", __func__, Tvm_oto_lines);
+	dml_print("DML::%s: Tr0_oto_lines = %f\n", __func__, Tr0_oto_lines);
+	dml_print("DML::%s: Lsw_oto = %f\n", __func__, Lsw_oto);
+	dml_print("DML::%s: dst_y_prefetch_oto = %f\n", __func__, dst_y_prefetch_oto);
+	dml_print("DML::%s: dst_y_prefetch_equ = %f\n", __func__, dst_y_prefetch_equ);
+#endif
+
+	dst_y_prefetch_equ = dml_floor(4.0 * (dst_y_prefetch_equ + 0.125), 1) / 4.0;
+	Tpre_rounded = dst_y_prefetch_equ * LineTime;
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: dst_y_prefetch_equ: %f (after round)\n", __func__, dst_y_prefetch_equ);
+	dml_print("DML::%s: LineTime: %f\n", __func__, LineTime);
+	dml_print("DML::%s: VStartup: %d\n", __func__, VStartup);
+	dml_print("DML::%s: Tvstartup: %fus - time between vstartup and first pixel of active\n",
+			__func__, VStartup * LineTime);
+	dml_print("DML::%s: TSetup: %fus - time from vstartup to vready\n", __func__, *TSetup);
+	dml_print("DML::%s: TCalc: %fus - time for calculations in dchub starting at vready\n", __func__, TCalc);
+	dml_print("DML::%s: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", __func__, Tdmbf);
+	dml_print("DML::%s: Tdmec: %fus - time dio takes to transfer dmd\n", __func__, Tdmec);
+	dml_print("DML::%s: Tdmdl_vm: %fus - time for vm stages of dmd\n", __func__, *Tdmdl_vm);
+	dml_print("DML::%s: Tdmdl: %fus - time for fabric to become ready and fetch dmd\n", __func__, *Tdmdl);
+	dml_print("DML::%s: DSTYAfterScaler: %f lines - number of lines of pipeline and buffer delay after scaler\n",
+			__func__, *DSTYAfterScaler);
+#endif
+	dep_bytes = dml_max(PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor,
+			MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor);
+
+	if (prefetch_sw_bytes < dep_bytes)
+		prefetch_sw_bytes = 2 * dep_bytes;
+
+	*PrefetchBandwidth = 0;
+	*DestinationLinesToRequestVMInVBlank = 0;
+	*DestinationLinesToRequestRowInVBlank = 0;
+	*VRatioPrefetchY = 0;
+	*VRatioPrefetchC = 0;
+	*RequiredPrefetchPixDataBWLuma = 0;
+	if (dst_y_prefetch_equ > 1) {
+		double PrefetchBandwidth1;
+		double PrefetchBandwidth2;
+		double PrefetchBandwidth3;
+		double PrefetchBandwidth4;
+
+		if (Tpre_rounded - *Tno_bw > 0) {
+			PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte
+					+ 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor
+					+ prefetch_sw_bytes) / (Tpre_rounded - *Tno_bw);
+			Tsw_est1 = prefetch_sw_bytes / PrefetchBandwidth1;
+		} else
+			PrefetchBandwidth1 = 0;
+
+		if (VStartup == MaxVStartup && (Tsw_est1 / LineTime < min_Lsw)
+				&& Tpre_rounded - min_Lsw * LineTime - 0.75 * LineTime - *Tno_bw > 0) {
+			PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte
+					+ 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor)
+					/ (Tpre_rounded - min_Lsw * LineTime - 0.75 * LineTime - *Tno_bw);
+		}
+
+		if (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded > 0)
+			PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + prefetch_sw_bytes) /
+			(Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded);
+		else
+			PrefetchBandwidth2 = 0;
+
+		if (Tpre_rounded - Tvm_trips_rounded > 0) {
+			PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor
+					+ prefetch_sw_bytes) / (Tpre_rounded - Tvm_trips_rounded);
+			Tsw_est3 = prefetch_sw_bytes / PrefetchBandwidth3;
+		} else
+			PrefetchBandwidth3 = 0;
+
+
+		if (VStartup == MaxVStartup &&
+				(Tsw_est3 / LineTime < min_Lsw) && Tpre_rounded - min_Lsw * LineTime - 0.75 *
+				LineTime - Tvm_trips_rounded > 0) {
+			PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor)
+					/ (Tpre_rounded - min_Lsw * LineTime - 0.75 * LineTime - Tvm_trips_rounded);
+		}
+
+		if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0) {
+			PrefetchBandwidth4 = prefetch_sw_bytes /
+					(Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded);
+		} else {
+			PrefetchBandwidth4 = 0;
+		}
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: Tpre_rounded: %f\n", __func__, Tpre_rounded);
+		dml_print("DML::%s: Tno_bw: %f\n", __func__, *Tno_bw);
+		dml_print("DML::%s: Tvm_trips_rounded: %f\n", __func__, Tvm_trips_rounded);
+		dml_print("DML::%s: Tsw_est1: %f\n", __func__, Tsw_est1);
+		dml_print("DML::%s: Tsw_est3: %f\n", __func__, Tsw_est3);
+		dml_print("DML::%s: PrefetchBandwidth1: %f\n", __func__, PrefetchBandwidth1);
+		dml_print("DML::%s: PrefetchBandwidth2: %f\n", __func__, PrefetchBandwidth2);
+		dml_print("DML::%s: PrefetchBandwidth3: %f\n", __func__, PrefetchBandwidth3);
+		dml_print("DML::%s: PrefetchBandwidth4: %f\n", __func__, PrefetchBandwidth4);
+#endif
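+		/*
+		 * Pick the lowest-numbered bandwidth whose implied Tvm/Tr0 match
+		 * the rounded trip times it was derived with; otherwise fall back
+		 * to PrefetchBandwidth4.
+		 */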
+		{
+			bool Case1OK;
+			bool Case2OK;
+			bool Case3OK;
+
+			if (PrefetchBandwidth1 > 0) {
+				if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1
+						>= Tvm_trips_rounded
+						&& (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor)
+								/ PrefetchBandwidth1 >= Tr0_trips_rounded) {
+					Case1OK = true;
+				} else {
+					Case1OK = false;
+				}
+			} else {
+				Case1OK = false;
+			}
+
+			if (PrefetchBandwidth2 > 0) {
+				if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2
+						>= Tvm_trips_rounded
+						&& (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor)
+						/ PrefetchBandwidth2 < Tr0_trips_rounded) {
+					Case2OK = true;
+				} else {
+					Case2OK = false;
+				}
+			} else {
+				Case2OK = false;
+			}
+
+			if (PrefetchBandwidth3 > 0) {
+				if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 <
+						Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow *
+								HostVMInefficiencyFactor) / PrefetchBandwidth3 >=
+								Tr0_trips_rounded) {
+					Case3OK = true;
+				} else {
+					Case3OK = false;
+				}
+			} else {
+				Case3OK = false;
+			}
+
+			if (Case1OK)
+				prefetch_bw_equ = PrefetchBandwidth1;
+			else if (Case2OK)
+				prefetch_bw_equ = PrefetchBandwidth2;
+			else if (Case3OK)
+				prefetch_bw_equ = PrefetchBandwidth3;
+			else
+				prefetch_bw_equ = PrefetchBandwidth4;
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: Case1OK: %d\n", __func__, Case1OK);
+			dml_print("DML::%s: Case2OK: %d\n", __func__, Case2OK);
+			dml_print("DML::%s: Case3OK: %d\n", __func__, Case3OK);
+			dml_print("DML::%s: prefetch_bw_equ: %f\n", __func__, prefetch_bw_equ);
+#endif
+
+			if (prefetch_bw_equ > 0) {
+				if (GPUVMEnable == true) {
+					Tvm_equ = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame *
+							HostVMInefficiencyFactor / prefetch_bw_equ,
+							Tvm_trips, LineTime / 4);
+				} else {
+					Tvm_equ = LineTime / 4;
+				}
+
+				if ((GPUVMEnable == true || myPipe->DCCEnable == true)) {
+					Tr0_equ = dml_max4((MetaRowByte + PixelPTEBytesPerRow *
+							HostVMInefficiencyFactor) / prefetch_bw_equ, Tr0_trips,
+							(LineTime - Tvm_equ) / 2, LineTime / 4);
+				} else {
+					Tr0_equ = (LineTime - Tvm_equ) / 2;
+				}
+			} else {
+				Tvm_equ = 0;
+				Tr0_equ = 0;
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML: prefetch_bw_equ equals 0! %s:%d\n", __FILE__, __LINE__);
+#endif
+			}
+		}
+
+		if (dst_y_prefetch_oto < dst_y_prefetch_equ) {
+			*DestinationLinesForPrefetch = dst_y_prefetch_oto;
+			TimeForFetchingMetaPTE = Tvm_oto;
+			TimeForFetchingRowInVBlank = Tr0_oto;
+			*PrefetchBandwidth = prefetch_bw_oto;
+		} else {
+			*DestinationLinesForPrefetch = dst_y_prefetch_equ;
+			TimeForFetchingMetaPTE = Tvm_equ;
+			TimeForFetchingRowInVBlank = Tr0_equ;
+			*PrefetchBandwidth = prefetch_bw_equ;
+		}
+
+		*DestinationLinesToRequestVMInVBlank = dml_ceil(4.0 * TimeForFetchingMetaPTE / LineTime, 1.0) / 4.0;
+
+		*DestinationLinesToRequestRowInVBlank =
+				dml_ceil(4.0 * TimeForFetchingRowInVBlank / LineTime, 1.0) / 4.0;
+
+		LinesToRequestPrefetchPixelData = *DestinationLinesForPrefetch -
+				*DestinationLinesToRequestVMInVBlank - 2 * *DestinationLinesToRequestRowInVBlank;
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: DestinationLinesForPrefetch = %f\n", __func__, *DestinationLinesForPrefetch);
+		dml_print("DML::%s: DestinationLinesToRequestVMInVBlank = %f\n",
+				__func__, *DestinationLinesToRequestVMInVBlank);
+		dml_print("DML::%s: TimeForFetchingRowInVBlank = %f\n", __func__, TimeForFetchingRowInVBlank);
+		dml_print("DML::%s: LineTime = %f\n", __func__, LineTime);
+		dml_print("DML::%s: DestinationLinesToRequestRowInVBlank = %f\n",
+				__func__, *DestinationLinesToRequestRowInVBlank);
+		dml_print("DML::%s: PrefetchSourceLinesY = %f\n", __func__, PrefetchSourceLinesY);
+		dml_print("DML::%s: LinesToRequestPrefetchPixelData = %f\n", __func__, LinesToRequestPrefetchPixelData);
+#endif
+
+		if (LinesToRequestPrefetchPixelData >= 1 && prefetch_bw_equ > 0) {
+			*VRatioPrefetchY = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData;
+			*VRatioPrefetchY = dml_max(*VRatioPrefetchY, 1.0);
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: VRatioPrefetchY = %f\n", __func__, *VRatioPrefetchY);
+			dml_print("DML::%s: SwathHeightY = %d\n", __func__, SwathHeightY);
+			dml_print("DML::%s: VInitPreFillY = %d\n", __func__, VInitPreFillY);
+#endif
+			if ((SwathHeightY > 4) && (VInitPreFillY > 3)) {
+				if (LinesToRequestPrefetchPixelData > (VInitPreFillY - 3.0) / 2.0) {
+					*VRatioPrefetchY =
+							dml_max((double) PrefetchSourceLinesY /
+									LinesToRequestPrefetchPixelData,
+									(double) MaxNumSwathY * SwathHeightY /
+									(LinesToRequestPrefetchPixelData -
+									(VInitPreFillY - 3.0) / 2.0));
+					*VRatioPrefetchY = dml_max(*VRatioPrefetchY, 1.0);
+				} else {
+					MyError = true;
+					*VRatioPrefetchY = 0;
+				}
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: VRatioPrefetchY = %f\n", __func__, *VRatioPrefetchY);
+				dml_print("DML::%s: PrefetchSourceLinesY = %f\n", __func__, PrefetchSourceLinesY);
+				dml_print("DML::%s: MaxNumSwathY = %d\n", __func__, MaxNumSwathY);
+#endif
+			}
+
+			*VRatioPrefetchC = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData;
+			*VRatioPrefetchC = dml_max(*VRatioPrefetchC, 1.0);
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: VRatioPrefetchC = %f\n", __func__, *VRatioPrefetchC);
+			dml_print("DML::%s: SwathHeightC = %d\n", __func__, SwathHeightC);
+			dml_print("DML::%s: VInitPreFillC = %d\n", __func__, VInitPreFillC);
+#endif
+			if ((SwathHeightC > 4)) {
+				if (LinesToRequestPrefetchPixelData > (VInitPreFillC - 3.0) / 2.0) {
+					*VRatioPrefetchC =
+						dml_max(*VRatioPrefetchC,
+							(double) MaxNumSwathC * SwathHeightC /
+							(LinesToRequestPrefetchPixelData -
+							(VInitPreFillC - 3.0) / 2.0));
+					*VRatioPrefetchC = dml_max(*VRatioPrefetchC, 1.0);
+				} else {
+					MyError = true;
+					*VRatioPrefetchC = 0;
+				}
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: VRatioPrefetchC = %f\n", __func__, *VRatioPrefetchC);
+				dml_print("DML::%s: PrefetchSourceLinesC = %f\n", __func__, PrefetchSourceLinesC);
+				dml_print("DML::%s: MaxNumSwathC = %d\n", __func__, MaxNumSwathC);
+#endif
+			}
+
+			*RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY
+					/ LinesToRequestPrefetchPixelData * myPipe->BytePerPixelY * swath_width_luma_ub
+					/ LineTime;
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: BytePerPixelY = %d\n", __func__, myPipe->BytePerPixelY);
+			dml_print("DML::%s: swath_width_luma_ub = %d\n", __func__, swath_width_luma_ub);
+			dml_print("DML::%s: LineTime = %f\n", __func__, LineTime);
+			dml_print("DML::%s: RequiredPrefetchPixDataBWLuma = %f\n",
+					__func__, *RequiredPrefetchPixDataBWLuma);
+#endif
+			*RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC /
+					LinesToRequestPrefetchPixelData
+					* myPipe->BytePerPixelC
+					* swath_width_chroma_ub / LineTime;
+		} else {
+			MyError = true;
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: MyErr set. LinesToRequestPrefetchPixelData: %f, should be > 0\n",
+					__func__, LinesToRequestPrefetchPixelData);
+#endif
+			*VRatioPrefetchY = 0;
+			*VRatioPrefetchC = 0;
+			*RequiredPrefetchPixDataBWLuma = 0;
+			*RequiredPrefetchPixDataBWChroma = 0;
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML: Tpre: %fus - sum of time to request meta pte, 2 x data pte + meta data, swaths\n",
+			(double)LinesToRequestPrefetchPixelData * LineTime +
+			2.0*TimeForFetchingRowInVBlank + TimeForFetchingMetaPTE);
+		dml_print("DML:  Tvm: %fus - time to fetch page tables for meta surface\n", TimeForFetchingMetaPTE);
+		dml_print("DML: To: %fus - time for propagation from scaler to optc\n",
+			(*DSTYAfterScaler + ((double) (*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime);
+		dml_print("DML: Tvstartup - TSetup - Tcalc - Twait - Tpre - To > 0\n");
+		dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime -
+			TimeForFetchingMetaPTE - 2*TimeForFetchingRowInVBlank - (*DSTYAfterScaler +
+			((double) (*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - *TSetup);
+		dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = %d\n",
+				PixelPTEBytesPerRow);
+#endif
+	} else {
+		MyError = true;
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: MyErr set, dst_y_prefetch_equ = %f (should be > 1)\n",
+				__func__, dst_y_prefetch_equ);
+#endif
+	}
+
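+	/* Bandwidth needed for VM and row prefetch; vmrow bw is the larger of the two. */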
+	{
+		double prefetch_vm_bw;
+		double prefetch_row_bw;
+
+		if (PDEAndMetaPTEBytesFrame == 0) {
+			prefetch_vm_bw = 0;
+		} else if (*DestinationLinesToRequestVMInVBlank > 0) {
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: PDEAndMetaPTEBytesFrame = %d\n", __func__, PDEAndMetaPTEBytesFrame);
+			dml_print("DML::%s: HostVMInefficiencyFactor = %f\n", __func__, HostVMInefficiencyFactor);
+			dml_print("DML::%s: DestinationLinesToRequestVMInVBlank = %f\n",
+					__func__, *DestinationLinesToRequestVMInVBlank);
+			dml_print("DML::%s: LineTime = %f\n", __func__, LineTime);
+#endif
+			prefetch_vm_bw = PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor /
+					(*DestinationLinesToRequestVMInVBlank * LineTime);
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: prefetch_vm_bw = %f\n", __func__, prefetch_vm_bw);
+#endif
+		} else {
+			prefetch_vm_bw = 0;
+			MyError = true;
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: MyErr set. DestinationLinesToRequestVMInVBlank=%f (should be > 0)\n",
+					__func__, *DestinationLinesToRequestVMInVBlank);
+#endif
+		}
+
+		if (MetaRowByte + PixelPTEBytesPerRow == 0) {
+			prefetch_row_bw = 0;
+		} else if (*DestinationLinesToRequestRowInVBlank > 0) {
+			prefetch_row_bw = (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) /
+					(*DestinationLinesToRequestRowInVBlank * LineTime);
+
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: MetaRowByte = %d\n", __func__, MetaRowByte);
+			dml_print("DML::%s: PixelPTEBytesPerRow = %d\n", __func__, PixelPTEBytesPerRow);
+			dml_print("DML::%s: DestinationLinesToRequestRowInVBlank = %f\n",
+					__func__, *DestinationLinesToRequestRowInVBlank);
+			dml_print("DML::%s: prefetch_row_bw = %f\n", __func__, prefetch_row_bw);
+#endif
+		} else {
+			prefetch_row_bw = 0;
+			MyError = true;
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: MyErr set. DestinationLinesToRequestRowInVBlank=%f (should be > 0)\n",
+					__func__, *DestinationLinesToRequestRowInVBlank);
+#endif
+		}
+
+		*prefetch_vmrow_bw = dml_max(prefetch_vm_bw, prefetch_row_bw);
+	}
+
+	if (MyError) {
+		*PrefetchBandwidth = 0;
+		TimeForFetchingMetaPTE = 0;
+		TimeForFetchingRowInVBlank = 0;
+		*DestinationLinesToRequestVMInVBlank = 0;
+		*DestinationLinesToRequestRowInVBlank = 0;
+		*DestinationLinesForPrefetch = 0;
+		LinesToRequestPrefetchPixelData = 0;
+		*VRatioPrefetchY = 0;
+		*VRatioPrefetchC = 0;
+		*RequiredPrefetchPixDataBWLuma = 0;
+		*RequiredPrefetchPixDataBWChroma = 0;
+	}
+
+	return MyError;
+} // CalculatePrefetchSchedule
+
+void dml32_CalculateFlipSchedule(
+		double HostVMInefficiencyFactor,
+		double UrgentExtraLatency,
+		double UrgentLatency,
+		unsigned int GPUVMMaxPageTableLevels,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		bool GPUVMEnable,
+		double HostVMMinPageSize,
+		double PDEAndMetaPTEBytesPerFrame,
+		double MetaRowBytes,
+		double DPTEBytesPerRow,
+		double BandwidthAvailableForImmediateFlip,
+		unsigned int TotImmediateFlipBytes,
+		enum source_format_class SourcePixelFormat,
+		double LineTime,
+		double VRatio,
+		double VRatioChroma,
+		double Tno_bw,
+		bool DCCEnable,
+		unsigned int dpte_row_height,
+		unsigned int meta_row_height,
+		unsigned int dpte_row_height_chroma,
+		unsigned int meta_row_height_chroma,
+		bool    use_one_row_for_frame_flip,
+
+		/* Output */
+		double *DestinationLinesToRequestVMInImmediateFlip,
+		double *DestinationLinesToRequestRowInImmediateFlip,
+		double *final_flip_bw,
+		bool *ImmediateFlipSupportedForPipe)
+{
+	double min_row_time = 0.0;
+	unsigned int HostVMDynamicLevelsTrips;
+	double TimeForFetchingMetaPTEImmediateFlip;
+	double TimeForFetchingRowInVBlankImmediateFlip;
+	double ImmediateFlipBW;
+
+	if (GPUVMEnable == true && HostVMEnable == true)
+		HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels;
+	else
+		HostVMDynamicLevelsTrips = 0;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: TotImmediateFlipBytes = %d\n", __func__, TotImmediateFlipBytes);
+	dml_print("DML::%s: BandwidthAvailableForImmediateFlip = %f\n", __func__, BandwidthAvailableForImmediateFlip);
+#endif
+
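+	/* Flip bandwidth granted to this pipe, proportional to its share of the flip bytes. */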
+	if (TotImmediateFlipBytes > 0) {
+		if (use_one_row_for_frame_flip) {
+			ImmediateFlipBW = (PDEAndMetaPTEBytesPerFrame + MetaRowBytes + 2 * DPTEBytesPerRow) *
+					BandwidthAvailableForImmediateFlip / TotImmediateFlipBytes;
+		} else {
+			ImmediateFlipBW = (PDEAndMetaPTEBytesPerFrame + MetaRowBytes + DPTEBytesPerRow) *
+					BandwidthAvailableForImmediateFlip / TotImmediateFlipBytes;
+		}
+		if (GPUVMEnable == true) {
+			TimeForFetchingMetaPTEImmediateFlip = dml_max3(Tno_bw + PDEAndMetaPTEBytesPerFrame *
+					HostVMInefficiencyFactor / ImmediateFlipBW,
+					UrgentExtraLatency + UrgentLatency *
+					(GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1),
+					LineTime / 4.0);
+		} else {
+			TimeForFetchingMetaPTEImmediateFlip = 0;
+		}
+		if ((GPUVMEnable == true || DCCEnable == true)) {
+			TimeForFetchingRowInVBlankImmediateFlip = dml_max3(
+					(MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / ImmediateFlipBW,
+					UrgentLatency * (HostVMDynamicLevelsTrips + 1), LineTime / 4.0);
+		} else {
+			TimeForFetchingRowInVBlankImmediateFlip = 0;
+		}
+
+		*DestinationLinesToRequestVMInImmediateFlip =
+				dml_ceil(4.0 * (TimeForFetchingMetaPTEImmediateFlip / LineTime), 1.0) / 4.0;
+		*DestinationLinesToRequestRowInImmediateFlip =
+				dml_ceil(4.0 * (TimeForFetchingRowInVBlankImmediateFlip / LineTime), 1.0) / 4.0;
+
+		if (GPUVMEnable == true) {
+			*final_flip_bw = dml_max(PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor /
+					(*DestinationLinesToRequestVMInImmediateFlip * LineTime),
+					(MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) /
+					(*DestinationLinesToRequestRowInImmediateFlip * LineTime));
+		} else if ((GPUVMEnable == true || DCCEnable == true)) {
+			*final_flip_bw = (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) /
+					(*DestinationLinesToRequestRowInImmediateFlip * LineTime);
+		} else {
+			*final_flip_bw = 0;
+		}
+	} else {
+		TimeForFetchingMetaPTEImmediateFlip = 0;
+		TimeForFetchingRowInVBlankImmediateFlip = 0;
+		*DestinationLinesToRequestVMInImmediateFlip = 0;
+		*DestinationLinesToRequestRowInImmediateFlip = 0;
+		*final_flip_bw = 0;
+	}
+
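+	/* Minimum row time; with a chroma plane both luma and chroma rows are considered. */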
+	if (SourcePixelFormat == dm_420_8 || SourcePixelFormat == dm_420_10 || SourcePixelFormat == dm_rgbe_alpha) {
+		if (GPUVMEnable == true && DCCEnable != true) {
+			min_row_time = dml_min(dpte_row_height *
+					LineTime / VRatio, dpte_row_height_chroma * LineTime / VRatioChroma);
+		} else if (GPUVMEnable != true && DCCEnable == true) {
+			min_row_time = dml_min(meta_row_height *
+					LineTime / VRatio, meta_row_height_chroma * LineTime / VRatioChroma);
+		} else {
+			min_row_time = dml_min4(dpte_row_height * LineTime / VRatio, meta_row_height *
+					LineTime / VRatio, dpte_row_height_chroma * LineTime /
+					VRatioChroma, meta_row_height_chroma * LineTime / VRatioChroma);
+		}
+	} else {
+		if (GPUVMEnable == true && DCCEnable != true) {
+			min_row_time = dpte_row_height * LineTime / VRatio;
+		} else if (GPUVMEnable != true && DCCEnable == true) {
+			min_row_time = meta_row_height * LineTime / VRatio;
+		} else {
+			min_row_time =
+				dml_min(dpte_row_height * LineTime / VRatio, meta_row_height * LineTime / VRatio);
+		}
+	}
+
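+	/* Flip is unsupported if VM/row requests need too many lines or exceed min_row_time. */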
+	if (*DestinationLinesToRequestVMInImmediateFlip >= 32 || *DestinationLinesToRequestRowInImmediateFlip >= 16
+			|| TimeForFetchingMetaPTEImmediateFlip + 2 * TimeForFetchingRowInVBlankImmediateFlip
+					> min_row_time) {
+		*ImmediateFlipSupportedForPipe = false;
+	} else {
+		*ImmediateFlipSupportedForPipe = true;
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: GPUVMEnable = %d\n", __func__, GPUVMEnable);
+	dml_print("DML::%s: DCCEnable = %d\n", __func__, DCCEnable);
+	dml_print("DML::%s: DestinationLinesToRequestVMInImmediateFlip = %f\n",
+			__func__, *DestinationLinesToRequestVMInImmediateFlip);
+	dml_print("DML::%s: DestinationLinesToRequestRowInImmediateFlip = %f\n",
+			__func__, *DestinationLinesToRequestRowInImmediateFlip);
+	dml_print("DML::%s: TimeForFetchingMetaPTEImmediateFlip = %f\n", __func__, TimeForFetchingMetaPTEImmediateFlip);
+	dml_print("DML::%s: TimeForFetchingRowInVBlankImmediateFlip = %f\n",
+			__func__, TimeForFetchingRowInVBlankImmediateFlip);
+	dml_print("DML::%s: min_row_time = %f\n", __func__, min_row_time);
+	dml_print("DML::%s: ImmediateFlipSupportedForPipe = %d\n", __func__, *ImmediateFlipSupportedForPipe);
+#endif
+} // CalculateFlipSchedule
+
+void dml32_CalculateWatermarksMALLUseAndDRAMSpeedChangeSupport(
+		bool USRRetrainingRequiredFinal,
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		unsigned int PrefetchMode,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int MaxLineBufferLines,
+		unsigned int LineBufferSize,
+		unsigned int WritebackInterfaceBufferSize,
+		double DCFCLK,
+		double ReturnBW,
+		bool SynchronizeTimingsFinal,
+		bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+		bool DRRDisplay[],
+		unsigned int dpte_group_bytes[],
+		unsigned int meta_row_height[],
+		unsigned int meta_row_height_chroma[],
+		SOCParametersList mmSOCParameters,
+		unsigned int WritebackChunkSize,
+		double SOCCLK,
+		double DCFClkDeepSleep,
+		unsigned int DETBufferSizeY[],
+		unsigned int DETBufferSizeC[],
+		unsigned int SwathHeightY[],
+		unsigned int SwathHeightC[],
+		unsigned int LBBitPerPixel[],
+		double SwathWidthY[],
+		double SwathWidthC[],
+		double HRatio[],
+		double HRatioChroma[],
+		unsigned int VTaps[],
+		unsigned int VTapsChroma[],
+		double VRatio[],
+		double VRatioChroma[],
+		unsigned int HTotal[],
+		unsigned int VTotal[],
+		unsigned int VActive[],
+		double PixelClock[],
+		unsigned int BlendingAndTiming[],
+		unsigned int DPPPerSurface[],
+		double BytePerPixelDETY[],
+		double BytePerPixelDETC[],
+		double DSTXAfterScaler[],
+		double DSTYAfterScaler[],
+		bool WritebackEnable[],
+		enum source_format_class WritebackPixelFormat[],
+		double WritebackDestinationWidth[],
+		double WritebackDestinationHeight[],
+		double WritebackSourceHeight[],
+		bool UnboundedRequestEnabled,
+		unsigned int CompressedBufferSizeInkByte,
+
+		/* Output */
+		Watermarks *Watermark,
+		enum clock_change_support *DRAMClockChangeSupport,
+		double MaxActiveDRAMClockChangeLatencySupported[],
+		unsigned int SubViewportLinesNeededInMALL[],
+		enum dm_fclock_change_support *FCLKChangeSupport,
+		double *MinActiveFCLKChangeLatencySupported,
+		bool *USRRetrainingSupport,
+		double ActiveDRAMClockChangeLatencyMargin[])
+{
+	unsigned int i, j, k;
+	unsigned int SurfaceWithMinActiveFCLKChangeMargin = 0;
+	unsigned int DRAMClockChangeSupportNumber = 0;
+	unsigned int LastSurfaceWithoutMargin;
+	unsigned int DRAMClockChangeMethod = 0;
+	bool FoundFirstSurfaceWithMinActiveFCLKChangeMargin = false;
+	double MinActiveFCLKChangeMargin = 0.;
+	double SecondMinActiveFCLKChangeMarginOneDisplayInVBLank = 0.;
+	double ActiveClockChangeLatencyHidingY;
+	double ActiveClockChangeLatencyHidingC;
+	double ActiveClockChangeLatencyHiding;
+	double EffectiveDETBufferSizeY;
+	double     ActiveFCLKChangeLatencyMargin[DC__NUM_DPP__MAX];
+	double     USRRetrainingLatencyMargin[DC__NUM_DPP__MAX];
+	double TotalPixelBW = 0.0;
+	bool    SynchronizedSurfaces[DC__NUM_DPP__MAX][DC__NUM_DPP__MAX];
+	double     EffectiveLBLatencyHidingY;
+	double     EffectiveLBLatencyHidingC;
+	double     LinesInDETY[DC__NUM_DPP__MAX];
+	double     LinesInDETC[DC__NUM_DPP__MAX];
+	unsigned int    LinesInDETYRoundedDownToSwath[DC__NUM_DPP__MAX];
+	unsigned int    LinesInDETCRoundedDownToSwath[DC__NUM_DPP__MAX];
+	double     FullDETBufferingTimeY;
+	double     FullDETBufferingTimeC;
+	double     WritebackDRAMClockChangeLatencyMargin;
+	double     WritebackFCLKChangeLatencyMargin;
+	double     WritebackLatencyHiding;
+	bool    SameTimingForFCLKChange;
+
+	unsigned int    TotalActiveWriteback = 0;
+	unsigned int LBLatencyHidingSourceLinesY[DC__NUM_DPP__MAX];
+	unsigned int LBLatencyHidingSourceLinesC[DC__NUM_DPP__MAX];
+
+	Watermark->UrgentWatermark = mmSOCParameters.UrgentLatency + mmSOCParameters.ExtraLatency;
+	Watermark->USRRetrainingWatermark = mmSOCParameters.UrgentLatency + mmSOCParameters.ExtraLatency
+			+ mmSOCParameters.USRRetrainingLatency + mmSOCParameters.SMNLatency;
+	Watermark->DRAMClockChangeWatermark = mmSOCParameters.DRAMClockChangeLatency + Watermark->UrgentWatermark;
+	Watermark->FCLKChangeWatermark = mmSOCParameters.FCLKChangeLatency + Watermark->UrgentWatermark;
+	Watermark->StutterExitWatermark = mmSOCParameters.SRExitTime + mmSOCParameters.ExtraLatency
+			+ 10 / DCFClkDeepSleep;
+	Watermark->StutterEnterPlusExitWatermark = mmSOCParameters.SREnterPlusExitTime + mmSOCParameters.ExtraLatency
+			+ 10 / DCFClkDeepSleep;
+	Watermark->Z8StutterExitWatermark = mmSOCParameters.SRExitZ8Time + mmSOCParameters.ExtraLatency
+			+ 10 / DCFClkDeepSleep;
+	Watermark->Z8StutterEnterPlusExitWatermark = mmSOCParameters.SREnterPlusExitZ8Time
+			+ mmSOCParameters.ExtraLatency + 10 / DCFClkDeepSleep;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: UrgentLatency = %f\n", __func__, mmSOCParameters.UrgentLatency);
+	dml_print("DML::%s: ExtraLatency = %f\n", __func__, mmSOCParameters.ExtraLatency);
+	dml_print("DML::%s: DRAMClockChangeLatency = %f\n", __func__, mmSOCParameters.DRAMClockChangeLatency);
+	dml_print("DML::%s: UrgentWatermark = %f\n", __func__, Watermark->UrgentWatermark);
+	dml_print("DML::%s: USRRetrainingWatermark = %f\n", __func__, Watermark->USRRetrainingWatermark);
+	dml_print("DML::%s: DRAMClockChangeWatermark = %f\n", __func__, Watermark->DRAMClockChangeWatermark);
+	dml_print("DML::%s: FCLKChangeWatermark = %f\n", __func__, Watermark->FCLKChangeWatermark);
+	dml_print("DML::%s: StutterExitWatermark = %f\n", __func__, Watermark->StutterExitWatermark);
+	dml_print("DML::%s: StutterEnterPlusExitWatermark = %f\n", __func__, Watermark->StutterEnterPlusExitWatermark);
+	dml_print("DML::%s: Z8StutterExitWatermark = %f\n", __func__, Watermark->Z8StutterExitWatermark);
+	dml_print("DML::%s: Z8StutterEnterPlusExitWatermark = %f\n",
+			__func__, Watermark->Z8StutterEnterPlusExitWatermark);
+#endif
+
+	TotalActiveWriteback = 0;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (WritebackEnable[k] == true)
+			TotalActiveWriteback = TotalActiveWriteback + 1;
+	}
+
+	if (TotalActiveWriteback <= 1) {
+		Watermark->WritebackUrgentWatermark = mmSOCParameters.WritebackLatency;
+	} else {
+		Watermark->WritebackUrgentWatermark = mmSOCParameters.WritebackLatency
+				+ WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+	}
+	if (USRRetrainingRequiredFinal)
+		Watermark->WritebackUrgentWatermark = Watermark->WritebackUrgentWatermark
+				+ mmSOCParameters.USRRetrainingLatency;
+
+	if (TotalActiveWriteback <= 1) {
+		Watermark->WritebackDRAMClockChangeWatermark = mmSOCParameters.DRAMClockChangeLatency
+				+ mmSOCParameters.WritebackLatency;
+		Watermark->WritebackFCLKChangeWatermark = mmSOCParameters.FCLKChangeLatency
+				+ mmSOCParameters.WritebackLatency;
+	} else {
+		Watermark->WritebackDRAMClockChangeWatermark = mmSOCParameters.DRAMClockChangeLatency
+				+ mmSOCParameters.WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+		Watermark->WritebackFCLKChangeWatermark = mmSOCParameters.FCLKChangeLatency
+				+ mmSOCParameters.WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
+	}
+
+	if (USRRetrainingRequiredFinal)
+		Watermark->WritebackDRAMClockChangeWatermark = Watermark->WritebackDRAMClockChangeWatermark
+				+ mmSOCParameters.USRRetrainingLatency;
+
+	if (USRRetrainingRequiredFinal)
+		Watermark->WritebackFCLKChangeWatermark = Watermark->WritebackFCLKChangeWatermark
+				+ mmSOCParameters.USRRetrainingLatency;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: WritebackDRAMClockChangeWatermark = %f\n",
+			__func__, Watermark->WritebackDRAMClockChangeWatermark);
+	dml_print("DML::%s: WritebackFCLKChangeWatermark = %f\n", __func__, Watermark->WritebackFCLKChangeWatermark);
+	dml_print("DML::%s: WritebackUrgentWatermark = %f\n", __func__, Watermark->WritebackUrgentWatermark);
+	dml_print("DML::%s: USRRetrainingRequiredFinal = %d\n", __func__, USRRetrainingRequiredFinal);
+	dml_print("DML::%s: USRRetrainingLatency = %f\n", __func__, mmSOCParameters.USRRetrainingLatency);
+#endif
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		TotalPixelBW = TotalPixelBW + DPPPerSurface[k] * (SwathWidthY[k] * BytePerPixelDETY[k] * VRatio[k] +
+				SwathWidthC[k] * BytePerPixelDETC[k] * VRatioChroma[k]) / (HTotal[k] / PixelClock[k]);
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+
+		LBLatencyHidingSourceLinesY[k] = dml_min((double) MaxLineBufferLines,
+				dml_floor(LineBufferSize / LBBitPerPixel[k] /
+				(SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (VTaps[k] - 1);
+		LBLatencyHidingSourceLinesC[k] = dml_min((double) MaxLineBufferLines,
+				dml_floor(LineBufferSize / LBBitPerPixel[k] /
+				(SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTapsChroma[k] - 1);
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d, MaxLineBufferLines = %d\n", __func__, k, MaxLineBufferLines);
+		dml_print("DML::%s: k=%d, LineBufferSize     = %d\n", __func__, k, LineBufferSize);
+		dml_print("DML::%s: k=%d, LBBitPerPixel      = %d\n", __func__, k, LBBitPerPixel[k]);
+		dml_print("DML::%s: k=%d, HRatio             = %f\n", __func__, k, HRatio[k]);
+		dml_print("DML::%s: k=%d, VTaps              = %d\n", __func__, k, VTaps[k]);
+#endif
+
+		EffectiveLBLatencyHidingY = LBLatencyHidingSourceLinesY[k] / VRatio[k] * (HTotal[k] / PixelClock[k]);
+		EffectiveLBLatencyHidingC = LBLatencyHidingSourceLinesC[k] / VRatioChroma[k] * (HTotal[k] / PixelClock[k]);
+		EffectiveDETBufferSizeY = DETBufferSizeY[k];
+
+		if (UnboundedRequestEnabled) {
+			EffectiveDETBufferSizeY = EffectiveDETBufferSizeY
+					+ CompressedBufferSizeInkByte * 1024
+							* (SwathWidthY[k] * BytePerPixelDETY[k] * VRatio[k])
+							/ (HTotal[k] / PixelClock[k]) / TotalPixelBW;
+		}
+
+		LinesInDETY[k] = (double) EffectiveDETBufferSizeY / BytePerPixelDETY[k] / SwathWidthY[k];
+		LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]);
+		FullDETBufferingTimeY = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k];
+
+		ActiveClockChangeLatencyHidingY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY
+				- (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) * HTotal[k] / PixelClock[k];
+
+		if (NumberOfActiveSurfaces > 1) {
+			ActiveClockChangeLatencyHidingY = ActiveClockChangeLatencyHidingY
+					- (1 - 1 / NumberOfActiveSurfaces) * SwathHeightY[k] * HTotal[k]
+							/ PixelClock[k] / VRatio[k];
+		}
+
+		if (BytePerPixelDETC[k] > 0) {
+			LinesInDETC[k] = DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k];
+			LinesInDETCRoundedDownToSwath[k] = dml_floor(LinesInDETC[k], SwathHeightC[k]);
+			FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k])
+					/ VRatioChroma[k];
+			ActiveClockChangeLatencyHidingC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC
+					- (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) * HTotal[k]
+							/ PixelClock[k];
+			if (NumberOfActiveSurfaces > 1) {
+				ActiveClockChangeLatencyHidingC = ActiveClockChangeLatencyHidingC
+						- (1 - 1 / NumberOfActiveSurfaces) * SwathHeightC[k] * HTotal[k]
+								/ PixelClock[k] / VRatioChroma[k];
+			}
+			ActiveClockChangeLatencyHiding = dml_min(ActiveClockChangeLatencyHidingY,
+					ActiveClockChangeLatencyHidingC);
+		} else {
+			ActiveClockChangeLatencyHiding = ActiveClockChangeLatencyHidingY;
+		}
+
+		ActiveDRAMClockChangeLatencyMargin[k] = ActiveClockChangeLatencyHiding - Watermark->UrgentWatermark
+				- Watermark->DRAMClockChangeWatermark;
+		ActiveFCLKChangeLatencyMargin[k] = ActiveClockChangeLatencyHiding - Watermark->UrgentWatermark
+				- Watermark->FCLKChangeWatermark;
+		USRRetrainingLatencyMargin[k] = ActiveClockChangeLatencyHiding - Watermark->USRRetrainingWatermark;
+
+		if (WritebackEnable[k]) {
+			WritebackLatencyHiding = WritebackInterfaceBufferSize * 1024
+					/ (WritebackDestinationWidth[k] * WritebackDestinationHeight[k]
+							/ (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4);
+			if (WritebackPixelFormat[k] == dm_444_64)
+				WritebackLatencyHiding = WritebackLatencyHiding / 2;
+
+			WritebackDRAMClockChangeLatencyMargin = WritebackLatencyHiding
+					- Watermark->WritebackDRAMClockChangeWatermark;
+
+			WritebackFCLKChangeLatencyMargin = WritebackLatencyHiding
+					- Watermark->WritebackFCLKChangeWatermark;
+
+			ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMargin[k],
+					WritebackDRAMClockChangeLatencyMargin);
+			ActiveFCLKChangeLatencyMargin[k] = dml_min(ActiveFCLKChangeLatencyMargin[k],
+					WritebackFCLKChangeLatencyMargin);
+		}
+		MaxActiveDRAMClockChangeLatencySupported[k] =
+				(UseMALLForPStateChange[k] == dm_use_mall_pstate_change_phantom_pipe) ?
+						0 :
+						(ActiveDRAMClockChangeLatencyMargin[k]
+								+ mmSOCParameters.DRAMClockChangeLatency);
+	}
+
+	for (i = 0; i < NumberOfActiveSurfaces; ++i) {
+		for (j = 0; j < NumberOfActiveSurfaces; ++j) {
+			if (i == j ||
+					(BlendingAndTiming[i] == i && BlendingAndTiming[j] == i) ||
+					(BlendingAndTiming[j] == j && BlendingAndTiming[i] == j) ||
+					(BlendingAndTiming[i] == BlendingAndTiming[j] && BlendingAndTiming[i] != i) ||
+					(SynchronizeTimingsFinal && PixelClock[i] == PixelClock[j] &&
+					HTotal[i] == HTotal[j] && VTotal[i] == VTotal[j] &&
+					VActive[i] == VActive[j]) || (SynchronizeDRRDisplaysForUCLKPStateChangeFinal &&
+					(DRRDisplay[i] || DRRDisplay[j]))) {
+				SynchronizedSurfaces[i][j] = true;
+			} else {
+				SynchronizedSurfaces[i][j] = false;
+			}
+		}
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if ((UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe) &&
+				(!FoundFirstSurfaceWithMinActiveFCLKChangeMargin ||
+				ActiveFCLKChangeLatencyMargin[k] < MinActiveFCLKChangeMargin)) {
+			FoundFirstSurfaceWithMinActiveFCLKChangeMargin = true;
+			MinActiveFCLKChangeMargin = ActiveFCLKChangeLatencyMargin[k];
+			SurfaceWithMinActiveFCLKChangeMargin = k;
+		}
+	}
+
+	*MinActiveFCLKChangeLatencySupported = MinActiveFCLKChangeMargin + mmSOCParameters.FCLKChangeLatency;
+
+	SameTimingForFCLKChange = true;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (!SynchronizedSurfaces[k][SurfaceWithMinActiveFCLKChangeMargin]) {
+			if ((UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe) &&
+					(SameTimingForFCLKChange ||
+					ActiveFCLKChangeLatencyMargin[k] <
+					SecondMinActiveFCLKChangeMarginOneDisplayInVBLank)) {
+				SecondMinActiveFCLKChangeMarginOneDisplayInVBLank = ActiveFCLKChangeLatencyMargin[k];
+			}
+			SameTimingForFCLKChange = false;
+		}
+	}
+
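+	/* FCLK switch in vactive if the worst margin is positive, otherwise vblank or unsupported. */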
+	if (MinActiveFCLKChangeMargin > 0) {
+		*FCLKChangeSupport = dm_fclock_change_vactive;
+	} else if ((SameTimingForFCLKChange || SecondMinActiveFCLKChangeMarginOneDisplayInVBLank > 0) &&
+			(PrefetchMode <= 1)) {
+		*FCLKChangeSupport = dm_fclock_change_vblank;
+	} else {
+		*FCLKChangeSupport = dm_fclock_change_unsupported;
+	}
+
+	*USRRetrainingSupport = true;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if ((UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe) &&
+				(USRRetrainingLatencyMargin[k] < 0)) {
+			*USRRetrainingSupport = false;
+		}
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (UseMALLForPStateChange[k] != dm_use_mall_pstate_change_full_frame &&
+				UseMALLForPStateChange[k] != dm_use_mall_pstate_change_sub_viewport &&
+				UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe &&
+				ActiveDRAMClockChangeLatencyMargin[k] < 0) {
+			if (PrefetchMode > 0) {
+				DRAMClockChangeSupportNumber = 2;
+			} else if (DRAMClockChangeSupportNumber == 0) {
+				DRAMClockChangeSupportNumber = 1;
+				LastSurfaceWithoutMargin = k;
+			} else if (DRAMClockChangeSupportNumber == 1 &&
+					!SynchronizedSurfaces[LastSurfaceWithoutMargin][k]) {
+				DRAMClockChangeSupportNumber = 2;
+			}
+		}
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (UseMALLForPStateChange[k] == dm_use_mall_pstate_change_full_frame)
+			DRAMClockChangeMethod = 1;
+		else if (UseMALLForPStateChange[k] == dm_use_mall_pstate_change_sub_viewport)
+			DRAMClockChangeMethod = 2;
+	}
+
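+	/* Map per-surface results to a DRAM clock change mode based on the MALL method. */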
+	if (DRAMClockChangeMethod == 0) {
+		if (DRAMClockChangeSupportNumber == 0)
+			*DRAMClockChangeSupport = dm_dram_clock_change_vactive;
+		else if (DRAMClockChangeSupportNumber == 1)
+			*DRAMClockChangeSupport = dm_dram_clock_change_vblank;
+		else
+			*DRAMClockChangeSupport = dm_dram_clock_change_unsupported;
+	} else if (DRAMClockChangeMethod == 1) {
+		if (DRAMClockChangeSupportNumber == 0)
+			*DRAMClockChangeSupport = dm_dram_clock_change_vactive_w_mall_full_frame;
+		else if (DRAMClockChangeSupportNumber == 1)
+			*DRAMClockChangeSupport = dm_dram_clock_change_vblank_w_mall_full_frame;
+		else
+			*DRAMClockChangeSupport = dm_dram_clock_change_unsupported;
+	} else {
+		if (DRAMClockChangeSupportNumber == 0)
+			*DRAMClockChangeSupport = dm_dram_clock_change_vactive_w_mall_sub_vp;
+		else if (DRAMClockChangeSupportNumber == 1)
+			*DRAMClockChangeSupport = dm_dram_clock_change_vblank_w_mall_sub_vp;
+		else
+			*DRAMClockChangeSupport = dm_dram_clock_change_unsupported;
+	}
+
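+	/* Sub-viewport lines in MALL: switch-time lines + lines buffered ahead + one meta row. */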
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		unsigned int dst_y_pstate;
+		unsigned int src_y_pstate_l;
+		unsigned int src_y_pstate_c;
+		unsigned int src_y_ahead_l, src_y_ahead_c, sub_vp_lines_l, sub_vp_lines_c;
+
+		dst_y_pstate = dml_ceil((mmSOCParameters.DRAMClockChangeLatency + mmSOCParameters.UrgentLatency) / (HTotal[k] / PixelClock[k]), 1);
+		src_y_pstate_l = dml_ceil(dst_y_pstate * VRatio[k], SwathHeightY[k]);
+		src_y_ahead_l = dml_floor(DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k], SwathHeightY[k]) + LBLatencyHidingSourceLinesY[k];
+		sub_vp_lines_l = src_y_pstate_l + src_y_ahead_l + meta_row_height[k];
+
+#ifdef __DML_VBA_DEBUG__
+dml_print("DML::%s: k=%d, DETBufferSizeY               = %d\n", __func__, k, DETBufferSizeY[k]);
+dml_print("DML::%s: k=%d, BytePerPixelDETY             = %f\n", __func__, k, BytePerPixelDETY[k]);
+dml_print("DML::%s: k=%d, SwathWidthY                  = %d\n", __func__, k, SwathWidthY[k]);
+dml_print("DML::%s: k=%d, SwathHeightY                 = %d\n", __func__, k, SwathHeightY[k]);
+dml_print("DML::%s: k=%d, LBLatencyHidingSourceLinesY  = %d\n", __func__, k, LBLatencyHidingSourceLinesY[k]);
+dml_print("DML::%s: k=%d, dst_y_pstate      = %d\n", __func__, k, dst_y_pstate);
+dml_print("DML::%s: k=%d, src_y_pstate_l    = %d\n", __func__, k, src_y_pstate_l);
+dml_print("DML::%s: k=%d, src_y_ahead_l     = %d\n", __func__, k, src_y_ahead_l);
+dml_print("DML::%s: k=%d, meta_row_height   = %d\n", __func__, k, meta_row_height[k]);
+dml_print("DML::%s: k=%d, sub_vp_lines_l    = %d\n", __func__, k, sub_vp_lines_l);
+#endif
+		SubViewportLinesNeededInMALL[k] = sub_vp_lines_l;
+
+		if (BytePerPixelDETC[k] > 0) {
+			src_y_pstate_c = dml_ceil(dst_y_pstate * VRatioChroma[k], SwathHeightC[k]);
+			src_y_ahead_c = dml_floor(DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k], SwathHeightC[k]) + LBLatencyHidingSourceLinesC[k];
+			sub_vp_lines_c = src_y_pstate_c + src_y_ahead_c + meta_row_height_chroma[k];
+			SubViewportLinesNeededInMALL[k] = dml_max(sub_vp_lines_l, sub_vp_lines_c);
+
+#ifdef __DML_VBA_DEBUG__
+dml_print("DML::%s: k=%d, src_y_pstate_c            = %d\n", __func__, k, src_y_pstate_c);
+dml_print("DML::%s: k=%d, src_y_ahead_c             = %d\n", __func__, k, src_y_ahead_c);
+dml_print("DML::%s: k=%d, meta_row_height_chroma    = %d\n", __func__, k, meta_row_height_chroma[k]);
+dml_print("DML::%s: k=%d, sub_vp_lines_c            = %d\n", __func__, k, sub_vp_lines_c);
+#endif
+		}
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DRAMClockChangeSupport = %d\n", __func__, *DRAMClockChangeSupport);
+	dml_print("DML::%s: FCLKChangeSupport = %d\n", __func__, *FCLKChangeSupport);
+	dml_print("DML::%s: MinActiveFCLKChangeLatencySupported = %f\n",
+			__func__, *MinActiveFCLKChangeLatencySupported);
+	dml_print("DML::%s: USRRetrainingSupport = %d\n", __func__, *USRRetrainingSupport);
+#endif
+} // CalculateWatermarksMALLUseAndDRAMSpeedChangeSupport
+
+double dml32_CalculateWriteBackDISPCLK(
+		enum source_format_class WritebackPixelFormat,
+		double PixelClock,
+		double WritebackHRatio,
+		double WritebackVRatio,
+		unsigned int WritebackHTaps,
+		unsigned int WritebackVTaps,
+		unsigned int   WritebackSourceWidth,
+		unsigned int   WritebackDestinationWidth,
+		unsigned int HTotal,
+		unsigned int WritebackLineBufferSize,
+		double DISPCLKDPPCLKVCOSpeed)
+{
+	double DISPCLK_H, DISPCLK_V, DISPCLK_HB;
+
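+	/* Take the largest of the three writeback limits and round to the DFS granularity. */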
+	DISPCLK_H = PixelClock * dml_ceil(WritebackHTaps / 8.0, 1) / WritebackHRatio;
+	DISPCLK_V = PixelClock * (WritebackVTaps * dml_ceil(WritebackDestinationWidth / 6.0, 1) + 8.0) / HTotal;
+	DISPCLK_HB = PixelClock * WritebackVTaps * (WritebackDestinationWidth *
+			WritebackVTaps - WritebackLineBufferSize / 57.0) / 6.0 / WritebackSourceWidth;
+	return dml32_RoundToDFSGranularity(dml_max3(DISPCLK_H, DISPCLK_V, DISPCLK_HB), 1, DISPCLKDPPCLKVCOSpeed);
+}
+
+void dml32_CalculateMinAndMaxPrefetchMode(
+		enum dm_prefetch_modes   AllowForPStateChangeOrStutterInVBlankFinal,
+		unsigned int             *MinPrefetchMode,
+		unsigned int             *MaxPrefetchMode)
+{
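+	/* Mode 0 allows UCLK/FCLK switch and stutter during prefetch; mode 3 allows none. */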
+	if (AllowForPStateChangeOrStutterInVBlankFinal == dm_prefetch_support_none) {
+		*MinPrefetchMode = 3;
+		*MaxPrefetchMode = 3;
+	} else if (AllowForPStateChangeOrStutterInVBlankFinal == dm_prefetch_support_stutter) {
+		*MinPrefetchMode = 2;
+		*MaxPrefetchMode = 2;
+	} else if (AllowForPStateChangeOrStutterInVBlankFinal == dm_prefetch_support_fclk_and_stutter) {
+		*MinPrefetchMode = 1;
+		*MaxPrefetchMode = 1;
+	} else if (AllowForPStateChangeOrStutterInVBlankFinal == dm_prefetch_support_uclk_fclk_and_stutter) {
+		*MinPrefetchMode = 0;
+		*MaxPrefetchMode = 0;
+	} else if (AllowForPStateChangeOrStutterInVBlankFinal ==
+			dm_prefetch_support_uclk_fclk_and_stutter_if_possible) {
+		*MinPrefetchMode = 0;
+		*MaxPrefetchMode = 3;
+	} else {
+		*MinPrefetchMode = 0;
+		*MaxPrefetchMode = 3;
+	}
+} // CalculateMinAndMaxPrefetchMode
+
+void dml32_CalculatePixelDeliveryTimes(
+		unsigned int             NumberOfActiveSurfaces,
+		double              VRatio[],
+		double              VRatioChroma[],
+		double              VRatioPrefetchY[],
+		double              VRatioPrefetchC[],
+		unsigned int             swath_width_luma_ub[],
+		unsigned int             swath_width_chroma_ub[],
+		unsigned int             DPPPerSurface[],
+		double              HRatio[],
+		double              HRatioChroma[],
+		double              PixelClock[],
+		double              PSCL_THROUGHPUT[],
+		double              PSCL_THROUGHPUT_CHROMA[],
+		double              Dppclk[],
+		unsigned int             BytePerPixelC[],
+		enum dm_rotation_angle   SourceRotation[],
+		unsigned int             NumberOfCursors[],
+		unsigned int             CursorWidth[][DC__NUM_CURSOR__MAX],
+		unsigned int             CursorBPP[][DC__NUM_CURSOR__MAX],
+		unsigned int             BlockWidth256BytesY[],
+		unsigned int             BlockHeight256BytesY[],
+		unsigned int             BlockWidth256BytesC[],
+		unsigned int             BlockHeight256BytesC[],
+
+		/* Output */
+		double              DisplayPipeLineDeliveryTimeLuma[],
+		double              DisplayPipeLineDeliveryTimeChroma[],
+		double              DisplayPipeLineDeliveryTimeLumaPrefetch[],
+		double              DisplayPipeLineDeliveryTimeChromaPrefetch[],
+		double              DisplayPipeRequestDeliveryTimeLuma[],
+		double              DisplayPipeRequestDeliveryTimeChroma[],
+		double              DisplayPipeRequestDeliveryTimeLumaPrefetch[],
+		double              DisplayPipeRequestDeliveryTimeChromaPrefetch[],
+		double              CursorRequestDeliveryTime[],
+		double              CursorRequestDeliveryTimePrefetch[])
+{
+	double   req_per_swath_ub;
+	unsigned int k;
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d : HRatio = %f\n", __func__, k, HRatio[k]);
+		dml_print("DML::%s: k=%d : VRatio = %f\n", __func__, k, VRatio[k]);
+		dml_print("DML::%s: k=%d : HRatioChroma = %f\n", __func__, k, HRatioChroma[k]);
+		dml_print("DML::%s: k=%d : VRatioChroma = %f\n", __func__, k, VRatioChroma[k]);
+		dml_print("DML::%s: k=%d : swath_width_luma_ub = %d\n", __func__, k, swath_width_luma_ub[k]);
+		dml_print("DML::%s: k=%d : swath_width_chroma_ub = %d\n", __func__, k, swath_width_chroma_ub[k]);
+		dml_print("DML::%s: k=%d : PSCL_THROUGHPUT = %f\n", __func__, k, PSCL_THROUGHPUT[k]);
+		dml_print("DML::%s: k=%d : PSCL_THROUGHPUT_CHROMA = %f\n", __func__, k, PSCL_THROUGHPUT_CHROMA[k]);
+		dml_print("DML::%s: k=%d : DPPPerSurface = %d\n", __func__, k, DPPPerSurface[k]);
+		dml_print("DML::%s: k=%d : PixelClock = %f\n", __func__, k, PixelClock[k]);
+		dml_print("DML::%s: k=%d : Dppclk = %f\n", __func__, k, Dppclk[k]);
+#endif
+
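+		/* Set by pixel rate when VRatio <= 1, else by PSCL throughput at Dppclk. */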
+		if (VRatio[k] <= 1) {
+			DisplayPipeLineDeliveryTimeLuma[k] =
+					swath_width_luma_ub[k] * DPPPerSurface[k] / HRatio[k] / PixelClock[k];
+		} else {
+			DisplayPipeLineDeliveryTimeLuma[k] = swath_width_luma_ub[k] / PSCL_THROUGHPUT[k] / Dppclk[k];
+		}
+
+		if (BytePerPixelC[k] == 0) {
+			DisplayPipeLineDeliveryTimeChroma[k] = 0;
+		} else {
+			if (VRatioChroma[k] <= 1) {
+				DisplayPipeLineDeliveryTimeChroma[k] =
+					swath_width_chroma_ub[k] * DPPPerSurface[k] / HRatioChroma[k] / PixelClock[k];
+			} else {
+				DisplayPipeLineDeliveryTimeChroma[k] =
+					swath_width_chroma_ub[k] / PSCL_THROUGHPUT_CHROMA[k] / Dppclk[k];
+			}
+		}
+
+		if (VRatioPrefetchY[k] <= 1) {
+			DisplayPipeLineDeliveryTimeLumaPrefetch[k] =
+					swath_width_luma_ub[k] * DPPPerSurface[k] / HRatio[k] / PixelClock[k];
+		} else {
+			DisplayPipeLineDeliveryTimeLumaPrefetch[k] =
+					swath_width_luma_ub[k] / PSCL_THROUGHPUT[k] / Dppclk[k];
+		}
+
+		if (BytePerPixelC[k] == 0) {
+			DisplayPipeLineDeliveryTimeChromaPrefetch[k] = 0;
+		} else {
+			if (VRatioPrefetchC[k] <= 1) {
+				DisplayPipeLineDeliveryTimeChromaPrefetch[k] = swath_width_chroma_ub[k] *
+						DPPPerSurface[k] / HRatioChroma[k] / PixelClock[k];
+			} else {
+				DisplayPipeLineDeliveryTimeChromaPrefetch[k] =
+						swath_width_chroma_ub[k] / PSCL_THROUGHPUT_CHROMA[k] / Dppclk[k];
+			}
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d : DisplayPipeLineDeliveryTimeLuma = %f\n",
+				__func__, k, DisplayPipeLineDeliveryTimeLuma[k]);
+		dml_print("DML::%s: k=%d : DisplayPipeLineDeliveryTimeLumaPrefetch = %f\n",
+				__func__, k, DisplayPipeLineDeliveryTimeLumaPrefetch[k]);
+		dml_print("DML::%s: k=%d : DisplayPipeLineDeliveryTimeChroma = %f\n",
+				__func__, k, DisplayPipeLineDeliveryTimeChroma[k]);
+		dml_print("DML::%s: k=%d : DisplayPipeLineDeliveryTimeChromaPrefetch = %f\n",
+				__func__, k, DisplayPipeLineDeliveryTimeChromaPrefetch[k]);
+#endif
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (!IsVertical(SourceRotation[k]))
+			req_per_swath_ub = swath_width_luma_ub[k] / BlockWidth256BytesY[k];
+		else
+			req_per_swath_ub = swath_width_luma_ub[k] / BlockHeight256BytesY[k];
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d : req_per_swath_ub = %f (Luma)\n", __func__, k, req_per_swath_ub);
+#endif
+
+		DisplayPipeRequestDeliveryTimeLuma[k] = DisplayPipeLineDeliveryTimeLuma[k] / req_per_swath_ub;
+		DisplayPipeRequestDeliveryTimeLumaPrefetch[k] =
+				DisplayPipeLineDeliveryTimeLumaPrefetch[k] / req_per_swath_ub;
+		if (BytePerPixelC[k] == 0) {
+			DisplayPipeRequestDeliveryTimeChroma[k] = 0;
+			DisplayPipeRequestDeliveryTimeChromaPrefetch[k] = 0;
+		} else {
+			if (!IsVertical(SourceRotation[k]))
+				req_per_swath_ub = swath_width_chroma_ub[k] / BlockWidth256BytesC[k];
+			else
+				req_per_swath_ub = swath_width_chroma_ub[k] / BlockHeight256BytesC[k];
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%d : req_per_swath_ub = %f (Chroma)\n", __func__, k, req_per_swath_ub);
+#endif
+			DisplayPipeRequestDeliveryTimeChroma[k] =
+					DisplayPipeLineDeliveryTimeChroma[k] / req_per_swath_ub;
+			DisplayPipeRequestDeliveryTimeChromaPrefetch[k] =
+					DisplayPipeLineDeliveryTimeChromaPrefetch[k] / req_per_swath_ub;
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d : DisplayPipeRequestDeliveryTimeLuma = %f\n",
+				__func__, k, DisplayPipeRequestDeliveryTimeLuma[k]);
+		dml_print("DML::%s: k=%d : DisplayPipeRequestDeliveryTimeLumaPrefetch = %f\n",
+				__func__, k, DisplayPipeRequestDeliveryTimeLumaPrefetch[k]);
+		dml_print("DML::%s: k=%d : DisplayPipeRequestDeliveryTimeChroma = %f\n",
+				__func__, k, DisplayPipeRequestDeliveryTimeChroma[k]);
+		dml_print("DML::%s: k=%d : DisplayPipeRequestDeliveryTimeChromaPrefetch = %f\n",
+				__func__, k, DisplayPipeRequestDeliveryTimeChromaPrefetch[k]);
+#endif
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		unsigned int cursor_req_per_width;
+
+		cursor_req_per_width = dml_ceil((double) CursorWidth[k][0] * (double) CursorBPP[k][0] /
+				256.0 / 8.0, 1.0);
+		if (NumberOfCursors[k] > 0) {
+			if (VRatio[k] <= 1) {
+				CursorRequestDeliveryTime[k] = (double) CursorWidth[k][0] /
+						HRatio[k] / PixelClock[k] / cursor_req_per_width;
+			} else {
+				CursorRequestDeliveryTime[k] = (double) CursorWidth[k][0] /
+						PSCL_THROUGHPUT[k] / Dppclk[k] / cursor_req_per_width;
+			}
+			if (VRatioPrefetchY[k] <= 1) {
+				CursorRequestDeliveryTimePrefetch[k] = (double) CursorWidth[k][0] /
+						HRatio[k] / PixelClock[k] / cursor_req_per_width;
+			} else {
+				CursorRequestDeliveryTimePrefetch[k] = (double) CursorWidth[k][0] /
+						PSCL_THROUGHPUT[k] / Dppclk[k] / cursor_req_per_width;
+			}
+		} else {
+			CursorRequestDeliveryTime[k] = 0;
+			CursorRequestDeliveryTimePrefetch[k] = 0;
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%d : NumberOfCursors = %d\n",
+				__func__, k, NumberOfCursors[k]);
+		dml_print("DML::%s: k=%d : CursorRequestDeliveryTime = %f\n",
+				__func__, k, CursorRequestDeliveryTime[k]);
+		dml_print("DML::%s: k=%d : CursorRequestDeliveryTimePrefetch = %f\n",
+				__func__, k, CursorRequestDeliveryTimePrefetch[k]);
+#endif
+	}
+} // CalculatePixelDeliveryTimes
+
+void dml32_CalculateMetaAndPTETimes(
+		bool use_one_row_for_frame[],
+		unsigned int NumberOfActiveSurfaces,
+		bool GPUVMEnable,
+		unsigned int MetaChunkSize,
+		unsigned int MinMetaChunkSizeBytes,
+		unsigned int    HTotal[],
+		double  VRatio[],
+		double  VRatioChroma[],
+		double  DestinationLinesToRequestRowInVBlank[],
+		double  DestinationLinesToRequestRowInImmediateFlip[],
+		bool DCCEnable[],
+		double  PixelClock[],
+		unsigned int BytePerPixelY[],
+		unsigned int BytePerPixelC[],
+		enum dm_rotation_angle SourceRotation[],
+		unsigned int dpte_row_height[],
+		unsigned int dpte_row_height_chroma[],
+		unsigned int meta_row_width[],
+		unsigned int meta_row_width_chroma[],
+		unsigned int meta_row_height[],
+		unsigned int meta_row_height_chroma[],
+		unsigned int meta_req_width[],
+		unsigned int meta_req_width_chroma[],
+		unsigned int meta_req_height[],
+		unsigned int meta_req_height_chroma[],
+		unsigned int dpte_group_bytes[],
+		unsigned int    PTERequestSizeY[],
+		unsigned int    PTERequestSizeC[],
+		unsigned int    PixelPTEReqWidthY[],
+		unsigned int    PixelPTEReqHeightY[],
+		unsigned int    PixelPTEReqWidthC[],
+		unsigned int    PixelPTEReqHeightC[],
+		unsigned int    dpte_row_width_luma_ub[],
+		unsigned int    dpte_row_width_chroma_ub[],
+
+		/* Output */
+		double DST_Y_PER_PTE_ROW_NOM_L[],
+		double DST_Y_PER_PTE_ROW_NOM_C[],
+		double DST_Y_PER_META_ROW_NOM_L[],
+		double DST_Y_PER_META_ROW_NOM_C[],
+		double TimePerMetaChunkNominal[],
+		double TimePerChromaMetaChunkNominal[],
+		double TimePerMetaChunkVBlank[],
+		double TimePerChromaMetaChunkVBlank[],
+		double TimePerMetaChunkFlip[],
+		double TimePerChromaMetaChunkFlip[],
+		double time_per_pte_group_nom_luma[],
+		double time_per_pte_group_vblank_luma[],
+		double time_per_pte_group_flip_luma[],
+		double time_per_pte_group_nom_chroma[],
+		double time_per_pte_group_vblank_chroma[],
+		double time_per_pte_group_flip_chroma[])
+{
+	unsigned int   meta_chunk_width;
+	unsigned int   min_meta_chunk_width;
+	unsigned int   meta_chunk_per_row_int;
+	unsigned int   meta_row_remainder;
+	unsigned int   meta_chunk_threshold;
+	unsigned int   meta_chunks_per_row_ub;
+	unsigned int   meta_chunk_width_chroma;
+	unsigned int   min_meta_chunk_width_chroma;
+	unsigned int   meta_chunk_per_row_int_chroma;
+	unsigned int   meta_row_remainder_chroma;
+	unsigned int   meta_chunk_threshold_chroma;
+	unsigned int   meta_chunks_per_row_ub_chroma;
+	unsigned int   dpte_group_width_luma;
+	unsigned int   dpte_groups_per_row_luma_ub;
+	unsigned int   dpte_group_width_chroma;
+	unsigned int   dpte_groups_per_row_chroma_ub;
+	unsigned int k;
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		DST_Y_PER_PTE_ROW_NOM_L[k] = dpte_row_height[k] / VRatio[k];
+		if (BytePerPixelC[k] == 0)
+			DST_Y_PER_PTE_ROW_NOM_C[k] = 0;
+		else
+			DST_Y_PER_PTE_ROW_NOM_C[k] = dpte_row_height_chroma[k] / VRatioChroma[k];
+		DST_Y_PER_META_ROW_NOM_L[k] = meta_row_height[k] / VRatio[k];
+		if (BytePerPixelC[k] == 0)
+			DST_Y_PER_META_ROW_NOM_C[k] = 0;
+		else
+			DST_Y_PER_META_ROW_NOM_C[k] = meta_row_height_chroma[k] / VRatioChroma[k];
+	}
+
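+	/* Per-chunk time: spread the nominal/vblank/flip budget over the meta chunks in a row. */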
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (DCCEnable[k] == true) {
+			meta_chunk_width = MetaChunkSize * 1024 * 256 / BytePerPixelY[k] / meta_row_height[k];
+			min_meta_chunk_width = MinMetaChunkSizeBytes * 256 / BytePerPixelY[k] / meta_row_height[k];
+			meta_chunk_per_row_int = meta_row_width[k] / meta_chunk_width;
+			meta_row_remainder = meta_row_width[k] % meta_chunk_width;
+			if (!IsVertical(SourceRotation[k]))
+				meta_chunk_threshold = 2 * min_meta_chunk_width - meta_req_width[k];
+			else
+				meta_chunk_threshold = 2 * min_meta_chunk_width - meta_req_height[k];
+
+			if (meta_row_remainder <= meta_chunk_threshold)
+				meta_chunks_per_row_ub = meta_chunk_per_row_int + 1;
+			else
+				meta_chunks_per_row_ub = meta_chunk_per_row_int + 2;
+
+			TimePerMetaChunkNominal[k] = meta_row_height[k] / VRatio[k] *
+					HTotal[k] / PixelClock[k] / meta_chunks_per_row_ub;
+			TimePerMetaChunkVBlank[k] = DestinationLinesToRequestRowInVBlank[k] *
+					HTotal[k] / PixelClock[k] / meta_chunks_per_row_ub;
+			TimePerMetaChunkFlip[k] = DestinationLinesToRequestRowInImmediateFlip[k] *
+					HTotal[k] / PixelClock[k] / meta_chunks_per_row_ub;
+			if (BytePerPixelC[k] == 0) {
+				TimePerChromaMetaChunkNominal[k] = 0;
+				TimePerChromaMetaChunkVBlank[k] = 0;
+				TimePerChromaMetaChunkFlip[k] = 0;
+			} else {
+				meta_chunk_width_chroma = MetaChunkSize * 1024 * 256 / BytePerPixelC[k] /
+						meta_row_height_chroma[k];
+				min_meta_chunk_width_chroma = MinMetaChunkSizeBytes * 256 / BytePerPixelC[k] /
+						meta_row_height_chroma[k];
+				meta_chunk_per_row_int_chroma = (double) meta_row_width_chroma[k] /
+						meta_chunk_width_chroma;
+				meta_row_remainder_chroma = meta_row_width_chroma[k] % meta_chunk_width_chroma;
+				if (!IsVertical(SourceRotation[k])) {
+					meta_chunk_threshold_chroma = 2 * min_meta_chunk_width_chroma -
+							meta_req_width_chroma[k];
+				} else {
+					meta_chunk_threshold_chroma = 2 * min_meta_chunk_width_chroma -
+							meta_req_height_chroma[k];
+				}
+				if (meta_row_remainder_chroma <= meta_chunk_threshold_chroma)
+					meta_chunks_per_row_ub_chroma = meta_chunk_per_row_int_chroma + 1;
+				else
+					meta_chunks_per_row_ub_chroma = meta_chunk_per_row_int_chroma + 2;
+
+				TimePerChromaMetaChunkNominal[k] = meta_row_height_chroma[k] / VRatioChroma[k] *
+						HTotal[k] / PixelClock[k] / meta_chunks_per_row_ub_chroma;
+				TimePerChromaMetaChunkVBlank[k] = DestinationLinesToRequestRowInVBlank[k] *
+						HTotal[k] / PixelClock[k] / meta_chunks_per_row_ub_chroma;
+				TimePerChromaMetaChunkFlip[k] = DestinationLinesToRequestRowInImmediateFlip[k] *
+						HTotal[k] / PixelClock[k] / meta_chunks_per_row_ub_chroma;
+			}
+		} else {
+			TimePerMetaChunkNominal[k] = 0;
+			TimePerMetaChunkVBlank[k] = 0;
+			TimePerMetaChunkFlip[k] = 0;
+			TimePerChromaMetaChunkNominal[k] = 0;
+			TimePerChromaMetaChunkVBlank[k] = 0;
+			TimePerChromaMetaChunkFlip[k] = 0;
+		}
+	}
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (GPUVMEnable == true) {
+			if (!IsVertical(SourceRotation[k])) {
+				dpte_group_width_luma = (double) dpte_group_bytes[k] /
+						(double) PTERequestSizeY[k] * PixelPTEReqWidthY[k];
+			} else {
+				dpte_group_width_luma = (double) dpte_group_bytes[k] /
+						(double) PTERequestSizeY[k] * PixelPTEReqHeightY[k];
+			}
+
+			if (use_one_row_for_frame[k]) {
+				dpte_groups_per_row_luma_ub = dml_ceil((double) dpte_row_width_luma_ub[k] /
+						(double) dpte_group_width_luma / 2.0, 1.0);
+			} else {
+				dpte_groups_per_row_luma_ub = dml_ceil((double) dpte_row_width_luma_ub[k] /
+						(double) dpte_group_width_luma, 1.0);
+			}
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d, use_one_row_for_frame        = %d\n",
+					__func__, k, use_one_row_for_frame[k]);
+			dml_print("DML::%s: k=%0d, dpte_group_bytes             = %d\n",
+					__func__, k, dpte_group_bytes[k]);
+			dml_print("DML::%s: k=%0d, PTERequestSizeY              = %d\n",
+					__func__, k, PTERequestSizeY[k]);
+			dml_print("DML::%s: k=%0d, PixelPTEReqWidthY            = %d\n",
+					__func__, k, PixelPTEReqWidthY[k]);
+			dml_print("DML::%s: k=%0d, PixelPTEReqHeightY           = %d\n",
+					__func__, k, PixelPTEReqHeightY[k]);
+			dml_print("DML::%s: k=%0d, dpte_row_width_luma_ub       = %d\n",
+					__func__, k, dpte_row_width_luma_ub[k]);
+			dml_print("DML::%s: k=%0d, dpte_group_width_luma        = %d\n",
+					__func__, k, dpte_group_width_luma);
+			dml_print("DML::%s: k=%0d, dpte_groups_per_row_luma_ub  = %d\n",
+					__func__, k, dpte_groups_per_row_luma_ub);
+#endif
+
+			time_per_pte_group_nom_luma[k] = DST_Y_PER_PTE_ROW_NOM_L[k] *
+					HTotal[k] / PixelClock[k] / dpte_groups_per_row_luma_ub;
+			time_per_pte_group_vblank_luma[k] = DestinationLinesToRequestRowInVBlank[k] *
+					HTotal[k] / PixelClock[k] / dpte_groups_per_row_luma_ub;
+			time_per_pte_group_flip_luma[k] = DestinationLinesToRequestRowInImmediateFlip[k] *
+					HTotal[k] / PixelClock[k] / dpte_groups_per_row_luma_ub;
+			if (BytePerPixelC[k] == 0) {
+				time_per_pte_group_nom_chroma[k] = 0;
+				time_per_pte_group_vblank_chroma[k] = 0;
+				time_per_pte_group_flip_chroma[k] = 0;
+			} else {
+				if (!IsVertical(SourceRotation[k])) {
+					dpte_group_width_chroma = (double) dpte_group_bytes[k] /
+							(double) PTERequestSizeC[k] * PixelPTEReqWidthC[k];
+				} else {
+					dpte_group_width_chroma = (double) dpte_group_bytes[k] /
+							(double) PTERequestSizeC[k] * PixelPTEReqHeightC[k];
+				}
+
+				if (use_one_row_for_frame[k]) {
+					dpte_groups_per_row_chroma_ub = dml_ceil((double) dpte_row_width_chroma_ub[k] /
+							(double) dpte_group_width_chroma / 2.0, 1.0);
+				} else {
+					dpte_groups_per_row_chroma_ub = dml_ceil((double) dpte_row_width_chroma_ub[k] /
+							(double) dpte_group_width_chroma, 1.0);
+				}
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: k=%0d, dpte_row_width_chroma_ub        = %d\n",
+						__func__, k, dpte_row_width_chroma_ub[k]);
+				dml_print("DML::%s: k=%0d, dpte_group_width_chroma        = %d\n",
+						__func__, k, dpte_group_width_chroma);
+				dml_print("DML::%s: k=%0d, dpte_groups_per_row_chroma_ub  = %d\n",
+						__func__, k, dpte_groups_per_row_chroma_ub);
+#endif
+				time_per_pte_group_nom_chroma[k] = DST_Y_PER_PTE_ROW_NOM_C[k] *
+						HTotal[k] / PixelClock[k] / dpte_groups_per_row_chroma_ub;
+				time_per_pte_group_vblank_chroma[k] = DestinationLinesToRequestRowInVBlank[k] *
+						HTotal[k] / PixelClock[k] / dpte_groups_per_row_chroma_ub;
+				time_per_pte_group_flip_chroma[k] = DestinationLinesToRequestRowInImmediateFlip[k] *
+						HTotal[k] / PixelClock[k] / dpte_groups_per_row_chroma_ub;
+			}
+		} else {
+			time_per_pte_group_nom_luma[k] = 0;
+			time_per_pte_group_vblank_luma[k] = 0;
+			time_per_pte_group_flip_luma[k] = 0;
+			time_per_pte_group_nom_chroma[k] = 0;
+			time_per_pte_group_vblank_chroma[k] = 0;
+			time_per_pte_group_flip_chroma[k] = 0;
+		}
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%0d, DestinationLinesToRequestRowInVBlank         = %f\n",
+				__func__, k, DestinationLinesToRequestRowInVBlank[k]);
+		dml_print("DML::%s: k=%0d, DestinationLinesToRequestRowInImmediateFlip  = %f\n",
+				__func__, k, DestinationLinesToRequestRowInImmediateFlip[k]);
+		dml_print("DML::%s: k=%0d, DST_Y_PER_PTE_ROW_NOM_L                      = %f\n",
+				__func__, k, DST_Y_PER_PTE_ROW_NOM_L[k]);
+		dml_print("DML::%s: k=%0d, DST_Y_PER_PTE_ROW_NOM_C                      = %f\n",
+				__func__, k, DST_Y_PER_PTE_ROW_NOM_C[k]);
+		dml_print("DML::%s: k=%0d, DST_Y_PER_META_ROW_NOM_L                     = %f\n",
+				__func__, k, DST_Y_PER_META_ROW_NOM_L[k]);
+		dml_print("DML::%s: k=%0d, DST_Y_PER_META_ROW_NOM_C                     = %f\n",
+				__func__, k, DST_Y_PER_META_ROW_NOM_C[k]);
+		dml_print("DML::%s: k=%0d, TimePerMetaChunkNominal          = %f\n",
+				__func__, k, TimePerMetaChunkNominal[k]);
+		dml_print("DML::%s: k=%0d, TimePerMetaChunkVBlank           = %f\n",
+				__func__, k, TimePerMetaChunkVBlank[k]);
+		dml_print("DML::%s: k=%0d, TimePerMetaChunkFlip             = %f\n",
+				__func__, k, TimePerMetaChunkFlip[k]);
+		dml_print("DML::%s: k=%0d, TimePerChromaMetaChunkNominal    = %f\n",
+				__func__, k, TimePerChromaMetaChunkNominal[k]);
+		dml_print("DML::%s: k=%0d, TimePerChromaMetaChunkVBlank     = %f\n",
+				__func__, k, TimePerChromaMetaChunkVBlank[k]);
+		dml_print("DML::%s: k=%0d, TimePerChromaMetaChunkFlip       = %f\n",
+				__func__, k, TimePerChromaMetaChunkFlip[k]);
+		dml_print("DML::%s: k=%0d, time_per_pte_group_nom_luma      = %f\n",
+				__func__, k, time_per_pte_group_nom_luma[k]);
+		dml_print("DML::%s: k=%0d, time_per_pte_group_vblank_luma   = %f\n",
+				__func__, k, time_per_pte_group_vblank_luma[k]);
+		dml_print("DML::%s: k=%0d, time_per_pte_group_flip_luma     = %f\n",
+				__func__, k, time_per_pte_group_flip_luma[k]);
+		dml_print("DML::%s: k=%0d, time_per_pte_group_nom_chroma    = %f\n",
+				__func__, k, time_per_pte_group_nom_chroma[k]);
+		dml_print("DML::%s: k=%0d, time_per_pte_group_vblank_chroma = %f\n",
+				__func__, k, time_per_pte_group_vblank_chroma[k]);
+		dml_print("DML::%s: k=%0d, time_per_pte_group_flip_chroma   = %f\n",
+				__func__, k, time_per_pte_group_flip_chroma[k]);
+#endif
+	}
+} // CalculateMetaAndPTETimes
+
+void dml32_CalculateVMGroupAndRequestTimes(
+		unsigned int     NumberOfActiveSurfaces,
+		bool     GPUVMEnable,
+		unsigned int     GPUVMMaxPageTableLevels,
+		unsigned int     HTotal[],
+		unsigned int     BytePerPixelC[],
+		double      DestinationLinesToRequestVMInVBlank[],
+		double      DestinationLinesToRequestVMInImmediateFlip[],
+		bool     DCCEnable[],
+		double      PixelClock[],
+		unsigned int        dpte_row_width_luma_ub[],
+		unsigned int        dpte_row_width_chroma_ub[],
+		unsigned int     vm_group_bytes[],
+		unsigned int     dpde0_bytes_per_frame_ub_l[],
+		unsigned int     dpde0_bytes_per_frame_ub_c[],
+		unsigned int        meta_pte_bytes_per_frame_ub_l[],
+		unsigned int        meta_pte_bytes_per_frame_ub_c[],
+
+		/* Output */
+		double      TimePerVMGroupVBlank[],
+		double      TimePerVMGroupFlip[],
+		double      TimePerVMRequestVBlank[],
+		double      TimePerVMRequestFlip[])
+{
+	unsigned int k;
+	unsigned int   num_group_per_lower_vm_stage;
+	unsigned int   num_req_per_lower_vm_stage;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: NumberOfActiveSurfaces = %d\n", __func__, NumberOfActiveSurfaces);
+	dml_print("DML::%s: GPUVMEnable = %d\n", __func__, GPUVMEnable);
+#endif
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%0d, DCCEnable = %d\n", __func__, k, DCCEnable[k]);
+		dml_print("DML::%s: k=%0d, vm_group_bytes = %d\n", __func__, k, vm_group_bytes[k]);
+		dml_print("DML::%s: k=%0d, dpde0_bytes_per_frame_ub_l = %d\n",
+				__func__, k, dpde0_bytes_per_frame_ub_l[k]);
+		dml_print("DML::%s: k=%0d, dpde0_bytes_per_frame_ub_c = %d\n",
+				__func__, k, dpde0_bytes_per_frame_ub_c[k]);
+		dml_print("DML::%s: k=%0d, meta_pte_bytes_per_frame_ub_l = %d\n",
+				__func__, k, meta_pte_bytes_per_frame_ub_l[k]);
+		dml_print("DML::%s: k=%0d, meta_pte_bytes_per_frame_ub_c = %d\n",
+				__func__, k, meta_pte_bytes_per_frame_ub_c[k]);
+#endif
+
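+		/* Count VM groups and 64B requests, then spread the line budget over them. */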
+		if (GPUVMEnable == true && (DCCEnable[k] == true || GPUVMMaxPageTableLevels > 1)) {
+			if (DCCEnable[k] == false) {
+				if (BytePerPixelC[k] > 0) {
+					num_group_per_lower_vm_stage = dml_ceil(
+							(double) (dpde0_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1.0) +
+							dml_ceil((double) (dpde0_bytes_per_frame_ub_c[k]) /
+							(double) (vm_group_bytes[k]), 1.0);
+				} else {
+					num_group_per_lower_vm_stage = dml_ceil(
+							(double) (dpde0_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1.0);
+				}
+			} else {
+				if (GPUVMMaxPageTableLevels == 1) {
+					if (BytePerPixelC[k] > 0) {
+						num_group_per_lower_vm_stage = dml_ceil(
+							(double) (meta_pte_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1.0) +
+							dml_ceil((double) (meta_pte_bytes_per_frame_ub_c[k]) /
+							(double) (vm_group_bytes[k]), 1.0);
+					} else {
+						num_group_per_lower_vm_stage = dml_ceil(
+								(double) (meta_pte_bytes_per_frame_ub_l[k]) /
+								(double) (vm_group_bytes[k]), 1.0);
+					}
+				} else {
+					if (BytePerPixelC[k] > 0) {
+						num_group_per_lower_vm_stage = 2 + dml_ceil(
+							(double) (dpde0_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1) +
+							dml_ceil((double) (dpde0_bytes_per_frame_ub_c[k]) /
+							(double) (vm_group_bytes[k]), 1) +
+							dml_ceil((double) (meta_pte_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1) +
+							dml_ceil((double) (meta_pte_bytes_per_frame_ub_c[k]) /
+							(double) (vm_group_bytes[k]), 1);
+					} else {
+						num_group_per_lower_vm_stage = 1 + dml_ceil(
+							(double) (dpde0_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1) + dml_ceil(
+							(double) (meta_pte_bytes_per_frame_ub_l[k]) /
+							(double) (vm_group_bytes[k]), 1);
+					}
+				}
+			}
+
+			if (DCCEnable[k] == false) {
+				if (BytePerPixelC[k] > 0) {
+					num_req_per_lower_vm_stage = dpde0_bytes_per_frame_ub_l[k] / 64 +
+							dpde0_bytes_per_frame_ub_c[k] / 64;
+				} else {
+					num_req_per_lower_vm_stage = dpde0_bytes_per_frame_ub_l[k] / 64;
+				}
+			} else {
+				if (GPUVMMaxPageTableLevels == 1) {
+					if (BytePerPixelC[k] > 0) {
+						num_req_per_lower_vm_stage = meta_pte_bytes_per_frame_ub_l[k] / 64 +
+								meta_pte_bytes_per_frame_ub_c[k] / 64;
+					} else {
+						num_req_per_lower_vm_stage = meta_pte_bytes_per_frame_ub_l[k] / 64;
+					}
+				} else {
+					if (BytePerPixelC[k] > 0) {
+						num_req_per_lower_vm_stage = dpde0_bytes_per_frame_ub_l[k] /
+								64 + dpde0_bytes_per_frame_ub_c[k] / 64 +
+								meta_pte_bytes_per_frame_ub_l[k] / 64 +
+								meta_pte_bytes_per_frame_ub_c[k] / 64;
+					} else {
+						num_req_per_lower_vm_stage = dpde0_bytes_per_frame_ub_l[k] /
+								64 + meta_pte_bytes_per_frame_ub_l[k] / 64;
+					}
+				}
+			}
+
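+			/* The VM fetch time (destination lines times line time) is spread
+			 * evenly over the groups and over the 64B requests of the lowest
+			 * page table stage.
+			 */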
+			TimePerVMGroupVBlank[k] = DestinationLinesToRequestVMInVBlank[k] *
+					HTotal[k] / PixelClock[k] / num_group_per_lower_vm_stage;
+			TimePerVMGroupFlip[k] = DestinationLinesToRequestVMInImmediateFlip[k] *
+					HTotal[k] / PixelClock[k] / num_group_per_lower_vm_stage;
+			TimePerVMRequestVBlank[k] = DestinationLinesToRequestVMInVBlank[k] *
+					HTotal[k] / PixelClock[k] / num_req_per_lower_vm_stage;
+			TimePerVMRequestFlip[k] = DestinationLinesToRequestVMInImmediateFlip[k] *
+					HTotal[k] / PixelClock[k] / num_req_per_lower_vm_stage;
+
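+			/* With more than two page table levels, only half of this time is
+			 * budgeted for the lowest stage.
+			 */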
+			if (GPUVMMaxPageTableLevels > 2) {
+				TimePerVMGroupVBlank[k]    = TimePerVMGroupVBlank[k] / 2;
+				TimePerVMGroupFlip[k]      = TimePerVMGroupFlip[k] / 2;
+				TimePerVMRequestVBlank[k]  = TimePerVMRequestVBlank[k] / 2;
+				TimePerVMRequestFlip[k]    = TimePerVMRequestFlip[k] / 2;
+			}
+
+		} else {
+			TimePerVMGroupVBlank[k] = 0;
+			TimePerVMGroupFlip[k] = 0;
+			TimePerVMRequestVBlank[k] = 0;
+			TimePerVMRequestFlip[k] = 0;
+		}
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: k=%0d, TimePerVMGroupVBlank = %f\n", __func__, k, TimePerVMGroupVBlank[k]);
+		dml_print("DML::%s: k=%0d, TimePerVMGroupFlip = %f\n", __func__, k, TimePerVMGroupFlip[k]);
+		dml_print("DML::%s: k=%0d, TimePerVMRequestVBlank = %f\n", __func__, k, TimePerVMRequestVBlank[k]);
+		dml_print("DML::%s: k=%0d, TimePerVMRequestFlip = %f\n", __func__, k, TimePerVMRequestFlip[k]);
+#endif
+	}
+} // CalculateVMGroupAndRequestTimes
+
+void dml32_CalculateDCCConfiguration(
+		bool             DCCEnabled,
+		bool             DCCProgrammingAssumesScanDirectionUnknown,
+		enum source_format_class SourcePixelFormat,
+		unsigned int             SurfaceWidthLuma,
+		unsigned int             SurfaceWidthChroma,
+		unsigned int             SurfaceHeightLuma,
+		unsigned int             SurfaceHeightChroma,
+		unsigned int                nomDETInKByte,
+		unsigned int             RequestHeight256ByteLuma,
+		unsigned int             RequestHeight256ByteChroma,
+		enum dm_swizzle_mode     TilingFormat,
+		unsigned int             BytePerPixelY,
+		unsigned int             BytePerPixelC,
+		double              BytePerPixelDETY,
+		double              BytePerPixelDETC,
+		enum dm_rotation_angle   SourceRotation,
+		/* Output */
+		unsigned int        *MaxUncompressedBlockLuma,
+		unsigned int        *MaxUncompressedBlockChroma,
+		unsigned int        *MaxCompressedBlockLuma,
+		unsigned int        *MaxCompressedBlockChroma,
+		unsigned int        *IndependentBlockLuma,
+		unsigned int        *IndependentBlockChroma)
+{
+
+	enum RequestType   RequestLuma;
+	enum RequestType   RequestChroma;
+
+	unsigned int   segment_order_horz_contiguous_luma;
+	unsigned int   segment_order_horz_contiguous_chroma;
+	unsigned int   segment_order_vert_contiguous_luma;
+	unsigned int   segment_order_vert_contiguous_chroma;
+	unsigned int req128_horz_wc_l;
+	unsigned int req128_horz_wc_c;
+	unsigned int req128_vert_wc_l;
+	unsigned int req128_vert_wc_c;
+	unsigned int MAS_vp_horz_limit;
+	unsigned int MAS_vp_vert_limit;
+	unsigned int max_vp_horz_width;
+	unsigned int max_vp_vert_height;
+	unsigned int eff_surf_width_l;
+	unsigned int eff_surf_width_c;
+	unsigned int eff_surf_height_l;
+	unsigned int eff_surf_height_c;
+	unsigned int full_swath_bytes_horz_wc_l;
+	unsigned int full_swath_bytes_horz_wc_c;
+	unsigned int full_swath_bytes_vert_wc_l;
+	unsigned int full_swath_bytes_vert_wc_c;
+	unsigned int DETBufferSizeForDCC = nomDETInKByte * 1024;
+
+	unsigned int   yuv420;
+	unsigned int   horz_div_l;
+	unsigned int   horz_div_c;
+	unsigned int   vert_div_l;
+	unsigned int   vert_div_c;
+
+	unsigned int     swath_buf_size;
+	double   detile_buf_vp_horz_limit;
+	double   detile_buf_vp_vert_limit;
+
+	typedef enum {
+		REQ_256Bytes,
+		REQ_128BytesNonContiguous,
+		REQ_128BytesContiguous,
+		REQ_NA
+	} RequestType;
+
+	yuv420 = ((SourcePixelFormat == dm_420_8 || SourcePixelFormat == dm_420_10 ||
+			SourcePixelFormat == dm_420_12) ? 1 : 0);
+	horz_div_l = 1;
+	horz_div_c = 1;
+	vert_div_l = 1;
+	vert_div_c = 1;
+
+	if (BytePerPixelY == 1)
+		vert_div_l = 0;
+	if (BytePerPixelC == 1)
+		vert_div_c = 0;
+
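+	/* Viewport limits such that a full swath still fits in half of the detile
+	 * buffer after reserving two 256B segments per plane.
+	 */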
+	if (BytePerPixelC == 0) {
+		swath_buf_size = DETBufferSizeForDCC / 2 - 2 * 256;
+		detile_buf_vp_horz_limit = (double) swath_buf_size / ((double) RequestHeight256ByteLuma *
+				BytePerPixelY / (1 + horz_div_l));
+		detile_buf_vp_vert_limit = (double) swath_buf_size / (256.0 / RequestHeight256ByteLuma /
+				(1 + vert_div_l));
+	} else {
+		swath_buf_size = DETBufferSizeForDCC / 2 - 2 * 2 * 256;
+		detile_buf_vp_horz_limit = (double) swath_buf_size / ((double) RequestHeight256ByteLuma *
+				BytePerPixelY / (1 + horz_div_l) + (double) RequestHeight256ByteChroma *
+				BytePerPixelC / (1 + horz_div_c) / (1 + yuv420));
+		detile_buf_vp_vert_limit = (double) swath_buf_size / (256.0 / RequestHeight256ByteLuma /
+				(1 + vert_div_l) + 256.0 / RequestHeight256ByteChroma /
+				(1 + vert_div_c) / (1 + yuv420));
+	}
+
+	if (SourcePixelFormat == dm_420_10) {
+		detile_buf_vp_horz_limit = 1.5 * detile_buf_vp_horz_limit;
+		detile_buf_vp_vert_limit = 1.5 * detile_buf_vp_vert_limit;
+	}
+
+	detile_buf_vp_horz_limit = dml_floor(detile_buf_vp_horz_limit - 1, 16);
+	detile_buf_vp_vert_limit = dml_floor(detile_buf_vp_vert_limit - 1, 16);
+
+	MAS_vp_horz_limit = SourcePixelFormat == dm_rgbe_alpha ? 3840 : 6144;
+	MAS_vp_vert_limit = SourcePixelFormat == dm_rgbe_alpha ? 3840 : (BytePerPixelY == 8 ? 3072 : 6144);
+	max_vp_horz_width = dml_min((double) MAS_vp_horz_limit, detile_buf_vp_horz_limit);
+	max_vp_vert_height = dml_min((double) MAS_vp_vert_limit, detile_buf_vp_vert_limit);
+	eff_surf_width_l =  (SurfaceWidthLuma > max_vp_horz_width ? max_vp_horz_width : SurfaceWidthLuma);
+	eff_surf_width_c =  eff_surf_width_l / (1 + yuv420);
+	eff_surf_height_l =  (SurfaceHeightLuma > max_vp_vert_height ? max_vp_vert_height : SurfaceHeightLuma);
+	eff_surf_height_c =  eff_surf_height_l / (1 + yuv420);
+
+	full_swath_bytes_horz_wc_l = eff_surf_width_l * RequestHeight256ByteLuma * BytePerPixelY;
+	full_swath_bytes_vert_wc_l = eff_surf_height_l * 256 / RequestHeight256ByteLuma;
+	if (BytePerPixelC > 0) {
+		full_swath_bytes_horz_wc_c = eff_surf_width_c * RequestHeight256ByteChroma * BytePerPixelC;
+		full_swath_bytes_vert_wc_c = eff_surf_height_c * 256 / RequestHeight256ByteChroma;
+	} else {
+		full_swath_bytes_horz_wc_c = 0;
+		full_swath_bytes_vert_wc_c = 0;
+	}
+
+	if (SourcePixelFormat == dm_420_10) {
+		full_swath_bytes_horz_wc_l = dml_ceil((double) full_swath_bytes_horz_wc_l * 2.0 / 3.0, 256.0);
+		full_swath_bytes_horz_wc_c = dml_ceil((double) full_swath_bytes_horz_wc_c * 2.0 / 3.0, 256.0);
+		full_swath_bytes_vert_wc_l = dml_ceil((double) full_swath_bytes_vert_wc_l * 2.0 / 3.0, 256.0);
+		full_swath_bytes_vert_wc_c = dml_ceil((double) full_swath_bytes_vert_wc_c * 2.0 / 3.0, 256.0);
+	}
+
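+	/* Use 256B requests when two full swaths per plane fit in the DET buffer;
+	 * otherwise fall back to 128B requests, starting with the larger plane.
+	 */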
+	if (2 * full_swath_bytes_horz_wc_l + 2 * full_swath_bytes_horz_wc_c <= DETBufferSizeForDCC) {
+		req128_horz_wc_l = 0;
+		req128_horz_wc_c = 0;
+	} else if (full_swath_bytes_horz_wc_l < 1.5 * full_swath_bytes_horz_wc_c && 2 * full_swath_bytes_horz_wc_l +
+			full_swath_bytes_horz_wc_c <= DETBufferSizeForDCC) {
+		req128_horz_wc_l = 0;
+		req128_horz_wc_c = 1;
+	} else if (full_swath_bytes_horz_wc_l >= 1.5 * full_swath_bytes_horz_wc_c && full_swath_bytes_horz_wc_l + 2 *
+			full_swath_bytes_horz_wc_c <= DETBufferSizeForDCC) {
+		req128_horz_wc_l = 1;
+		req128_horz_wc_c = 0;
+	} else {
+		req128_horz_wc_l = 1;
+		req128_horz_wc_c = 1;
+	}
+
+	if (2 * full_swath_bytes_vert_wc_l + 2 * full_swath_bytes_vert_wc_c <= DETBufferSizeForDCC) {
+		req128_vert_wc_l = 0;
+		req128_vert_wc_c = 0;
+	} else if (full_swath_bytes_vert_wc_l < 1.5 * full_swath_bytes_vert_wc_c && 2 *
+			full_swath_bytes_vert_wc_l + full_swath_bytes_vert_wc_c <= DETBufferSizeForDCC) {
+		req128_vert_wc_l = 0;
+		req128_vert_wc_c = 1;
+	} else if (full_swath_bytes_vert_wc_l >= 1.5 * full_swath_bytes_vert_wc_c &&
+			full_swath_bytes_vert_wc_l + 2 * full_swath_bytes_vert_wc_c <= DETBufferSizeForDCC) {
+		req128_vert_wc_l = 1;
+		req128_vert_wc_c = 0;
+	} else {
+		req128_vert_wc_l = 1;
+		req128_vert_wc_c = 1;
+	}
+
+	if (BytePerPixelY == 2) {
+		segment_order_horz_contiguous_luma = 0;
+		segment_order_vert_contiguous_luma = 1;
+	} else {
+		segment_order_horz_contiguous_luma = 1;
+		segment_order_vert_contiguous_luma = 0;
+	}
+
+	if (BytePerPixelC == 2) {
+		segment_order_horz_contiguous_chroma = 0;
+		segment_order_vert_contiguous_chroma = 1;
+	} else {
+		segment_order_horz_contiguous_chroma = 1;
+		segment_order_vert_contiguous_chroma = 0;
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: DCCEnabled = %d\n", __func__, DCCEnabled);
+	dml_print("DML::%s: nomDETInKByte = %d\n", __func__, nomDETInKByte);
+	dml_print("DML::%s: DETBufferSizeForDCC = %d\n", __func__, DETBufferSizeForDCC);
+	dml_print("DML::%s: req128_horz_wc_l = %d\n", __func__, req128_horz_wc_l);
+	dml_print("DML::%s: req128_horz_wc_c = %d\n", __func__, req128_horz_wc_c);
+	dml_print("DML::%s: full_swath_bytes_horz_wc_l = %d\n", __func__, full_swath_bytes_horz_wc_l);
+	dml_print("DML::%s: full_swath_bytes_vert_wc_c = %d\n", __func__, full_swath_bytes_vert_wc_c);
+	dml_print("DML::%s: segment_order_horz_contiguous_luma = %d\n", __func__, segment_order_horz_contiguous_luma);
+	dml_print("DML::%s: segment_order_horz_contiguous_chroma = %d\n",
+			__func__, segment_order_horz_contiguous_chroma);
+#endif
+
+	if (DCCProgrammingAssumesScanDirectionUnknown == true) {
+		if (req128_horz_wc_l == 0 && req128_vert_wc_l == 0)
+			RequestLuma = REQ_256Bytes;
+		else if ((req128_horz_wc_l == 1 && segment_order_horz_contiguous_luma == 0) ||
+				(req128_vert_wc_l == 1 && segment_order_vert_contiguous_luma == 0))
+			RequestLuma = REQ_128BytesNonContiguous;
+		else
+			RequestLuma = REQ_128BytesContiguous;
+
+		if (req128_horz_wc_c == 0 && req128_vert_wc_c == 0)
+			RequestChroma = REQ_256Bytes;
+		else if ((req128_horz_wc_c == 1 && segment_order_horz_contiguous_chroma == 0) ||
+				(req128_vert_wc_c == 1 && segment_order_vert_contiguous_chroma == 0))
+			RequestChroma = REQ_128BytesNonContiguous;
+		else
+			RequestChroma = REQ_128BytesContiguous;
+
+	} else if (!IsVertical(SourceRotation)) {
+		if (req128_horz_wc_l == 0)
+			RequestLuma = REQ_256Bytes;
+		else if (segment_order_horz_contiguous_luma == 0)
+			RequestLuma = REQ_128BytesNonContiguous;
+		else
+			RequestLuma = REQ_128BytesContiguous;
+
+		if (req128_horz_wc_c == 0)
+			RequestChroma = REQ_256Bytes;
+		else if (segment_order_horz_contiguous_chroma == 0)
+			RequestChroma = REQ_128BytesNonContiguous;
+		else
+			RequestChroma = REQ_128BytesContiguous;
+
+	} else {
+		if (req128_vert_wc_l == 0)
+			RequestLuma = REQ_256Bytes;
+		else if (segment_order_vert_contiguous_luma == 0)
+			RequestLuma = REQ_128BytesNonContiguous;
+		else
+			RequestLuma = REQ_128BytesContiguous;
+
+		if (req128_vert_wc_c == 0)
+			RequestChroma = REQ_256Bytes;
+		else if (segment_order_vert_contiguous_chroma == 0)
+			RequestChroma = REQ_128BytesNonContiguous;
+		else
+			RequestChroma = REQ_128BytesContiguous;
+	}
+
+	if (RequestLuma == (enum RequestType) REQ_256Bytes) {
+		*MaxUncompressedBlockLuma = 256;
+		*MaxCompressedBlockLuma = 256;
+		*IndependentBlockLuma = 0;
+	} else if (RequestLuma == (enum RequestType) REQ_128BytesContiguous) {
+		*MaxUncompressedBlockLuma = 256;
+		*MaxCompressedBlockLuma = 128;
+		*IndependentBlockLuma = 128;
+	} else {
+		*MaxUncompressedBlockLuma = 256;
+		*MaxCompressedBlockLuma = 64;
+		*IndependentBlockLuma = 64;
+	}
+
+	if (RequestChroma == (enum RequestType) REQ_256Bytes) {
+		*MaxUncompressedBlockChroma = 256;
+		*MaxCompressedBlockChroma = 256;
+		*IndependentBlockChroma = 0;
+	} else if (RequestChroma == (enum RequestType) REQ_128BytesContiguous) {
+		*MaxUncompressedBlockChroma = 256;
+		*MaxCompressedBlockChroma = 128;
+		*IndependentBlockChroma = 128;
+	} else {
+		*MaxUncompressedBlockChroma = 256;
+		*MaxCompressedBlockChroma = 64;
+		*IndependentBlockChroma = 64;
+	}
+
+	if (DCCEnabled != true || BytePerPixelC == 0) {
+		*MaxUncompressedBlockChroma = 0;
+		*MaxCompressedBlockChroma = 0;
+		*IndependentBlockChroma = 0;
+	}
+
+	if (DCCEnabled != true) {
+		*MaxUncompressedBlockLuma = 0;
+		*MaxCompressedBlockLuma = 0;
+		*IndependentBlockLuma = 0;
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: MaxUncompressedBlockLuma = %d\n", __func__, *MaxUncompressedBlockLuma);
+	dml_print("DML::%s: MaxCompressedBlockLuma = %d\n", __func__, *MaxCompressedBlockLuma);
+	dml_print("DML::%s: IndependentBlockLuma = %d\n", __func__, *IndependentBlockLuma);
+	dml_print("DML::%s: MaxUncompressedBlockChroma = %d\n", __func__, *MaxUncompressedBlockChroma);
+	dml_print("DML::%s: MaxCompressedBlockChroma = %d\n", __func__, *MaxCompressedBlockChroma);
+	dml_print("DML::%s: IndependentBlockChroma = %d\n", __func__, *IndependentBlockChroma);
+#endif
+
+} // CalculateDCCConfiguration
+
+void dml32_CalculateStutterEfficiency(
+		unsigned int      CompressedBufferSizeInkByte,
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		bool   UnboundedRequestEnabled,
+		unsigned int      MetaFIFOSizeInKEntries,
+		unsigned int      ZeroSizeBufferEntries,
+		unsigned int      PixelChunkSizeInKByte,
+		unsigned int   NumberOfActiveSurfaces,
+		unsigned int      ROBBufferSizeInKByte,
+		double    TotalDataReadBandwidth,
+		double    DCFCLK,
+		double    ReturnBW,
+		unsigned int      CompbufReservedSpace64B,
+		unsigned int      CompbufReservedSpaceZs,
+		double    SRExitTime,
+		double    SRExitZ8Time,
+		bool   SynchronizeTimingsFinal,
+		unsigned int   BlendingAndTiming[],
+		double    StutterEnterPlusExitWatermark,
+		double    Z8StutterEnterPlusExitWatermark,
+		bool   ProgressiveToInterlaceUnitInOPP,
+		bool   Interlace[],
+		double    MinTTUVBlank[],
+		unsigned int   DPPPerSurface[],
+		unsigned int      DETBufferSizeY[],
+		unsigned int   BytePerPixelY[],
+		double    BytePerPixelDETY[],
+		double      SwathWidthY[],
+		unsigned int   SwathHeightY[],
+		unsigned int   SwathHeightC[],
+		double    NetDCCRateLuma[],
+		double    NetDCCRateChroma[],
+		double    DCCFractionOfZeroSizeRequestsLuma[],
+		double    DCCFractionOfZeroSizeRequestsChroma[],
+		unsigned int      HTotal[],
+		unsigned int      VTotal[],
+		double    PixelClock[],
+		double    VRatio[],
+		enum dm_rotation_angle SourceRotation[],
+		unsigned int   BlockHeight256BytesY[],
+		unsigned int   BlockWidth256BytesY[],
+		unsigned int   BlockHeight256BytesC[],
+		unsigned int   BlockWidth256BytesC[],
+		unsigned int   DCCYMaxUncompressedBlock[],
+		unsigned int   DCCCMaxUncompressedBlock[],
+		unsigned int      VActive[],
+		bool   DCCEnable[],
+		bool   WritebackEnable[],
+		double    ReadBandwidthSurfaceLuma[],
+		double    ReadBandwidthSurfaceChroma[],
+		double    meta_row_bw[],
+		double    dpte_row_bw[],
+
+		/* Output */
+		double   *StutterEfficiencyNotIncludingVBlank,
+		double   *StutterEfficiency,
+		unsigned int     *NumberOfStutterBurstsPerFrame,
+		double   *Z8StutterEfficiencyNotIncludingVBlank,
+		double   *Z8StutterEfficiency,
+		unsigned int     *Z8NumberOfStutterBurstsPerFrame,
+		double   *StutterPeriod,
+		bool  *DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE)
+{
+
+	bool FoundCriticalSurface = false;
+	unsigned int SwathSizeCriticalSurface = 0;
+	unsigned int LastChunkOfSwathSize;
+	unsigned int MissingPartOfLastSwathOfDETSize;
+	double LastZ8StutterPeriod = 0.0;
+	double LastStutterPeriod = 0.0;
+	unsigned int TotalNumberOfActiveOTG = 0;
+	double doublePixelClock;
+	unsigned int doubleHTotal;
+	unsigned int doubleVTotal;
+	bool SameTiming = true;
+	double DETBufferingTimeY;
+	double SwathWidthYCriticalSurface = 0.0;
+	double SwathHeightYCriticalSurface = 0.0;
+	double VActiveTimeCriticalSurface = 0.0;
+	double FrameTimeCriticalSurface = 0.0;
+	unsigned int BytePerPixelYCriticalSurface = 0;
+	double LinesToFinishSwathTransferStutterCriticalSurface = 0.0;
+	unsigned int DETBufferSizeYCriticalSurface = 0;
+	double MinTTUVBlankCriticalSurface = 0.0;
+	unsigned int BlockWidth256BytesYCriticalSurface = 0;
+	bool doublePlaneCriticalSurface = 0;
+	bool doublePipeCriticalSurface = 0;
+	double TotalCompressedReadBandwidth;
+	double TotalRowReadBandwidth;
+	double AverageDCCCompressionRate;
+	double EffectiveCompressedBufferSize;
+	double PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer;
+	double StutterBurstTime;
+	unsigned int TotalActiveWriteback;
+	double LinesInDETY;
+	double LinesInDETYRoundedDownToSwath;
+	double MaximumEffectiveCompressionLuma;
+	double MaximumEffectiveCompressionChroma;
+	double TotalZeroSizeRequestReadBandwidth;
+	double TotalZeroSizeCompressedReadBandwidth;
+	double AverageDCCZeroSizeFraction;
+	double AverageZeroSizeCompressionRate;
+	unsigned int k;
+
+	TotalZeroSizeRequestReadBandwidth = 0;
+	TotalZeroSizeCompressedReadBandwidth = 0;
+	TotalRowReadBandwidth = 0;
+	TotalCompressedReadBandwidth = 0;
+
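+	/* Sum the compressed read and zero-size-request bandwidth over all
+	 * non-phantom surfaces; DCC efficiency is capped at 2:1 when a 256B block
+	 * does not fit within the swath height (or the max uncompressed block is
+	 * smaller than 256B), otherwise at 4:1.
+	 */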
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe) {
+			if (DCCEnable[k] == true) {
+				if ((IsVertical(SourceRotation[k]) && BlockWidth256BytesY[k] > SwathHeightY[k])
+						|| (!IsVertical(SourceRotation[k])
+								&& BlockHeight256BytesY[k] > SwathHeightY[k])
+						|| DCCYMaxUncompressedBlock[k] < 256) {
+					MaximumEffectiveCompressionLuma = 2;
+				} else {
+					MaximumEffectiveCompressionLuma = 4;
+				}
+				TotalCompressedReadBandwidth = TotalCompressedReadBandwidth
+						+ ReadBandwidthSurfaceLuma[k]
+								/ dml_min(NetDCCRateLuma[k],
+										MaximumEffectiveCompressionLuma);
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: k=%0d, ReadBandwidthSurfaceLuma = %f\n",
+						__func__, k, ReadBandwidthSurfaceLuma[k]);
+				dml_print("DML::%s: k=%0d, NetDCCRateLuma = %f\n",
+						__func__, k, NetDCCRateLuma[k]);
+				dml_print("DML::%s: k=%0d, MaximumEffectiveCompressionLuma = %f\n",
+						__func__, k, MaximumEffectiveCompressionLuma);
+#endif
+				TotalZeroSizeRequestReadBandwidth = TotalZeroSizeRequestReadBandwidth
+						+ ReadBandwidthSurfaceLuma[k] * DCCFractionOfZeroSizeRequestsLuma[k];
+				TotalZeroSizeCompressedReadBandwidth = TotalZeroSizeCompressedReadBandwidth
+						+ ReadBandwidthSurfaceLuma[k] * DCCFractionOfZeroSizeRequestsLuma[k]
+								/ MaximumEffectiveCompressionLuma;
+
+				if (ReadBandwidthSurfaceChroma[k] > 0) {
+					if ((IsVertical(SourceRotation[k]) && BlockWidth256BytesC[k] > SwathHeightC[k])
+							|| (!IsVertical(SourceRotation[k])
+									&& BlockHeight256BytesC[k] > SwathHeightC[k])
+							|| DCCCMaxUncompressedBlock[k] < 256) {
+						MaximumEffectiveCompressionChroma = 2;
+					} else {
+						MaximumEffectiveCompressionChroma = 4;
+					}
+					TotalCompressedReadBandwidth =
+							TotalCompressedReadBandwidth
+							+ ReadBandwidthSurfaceChroma[k]
+							/ dml_min(NetDCCRateChroma[k],
+							MaximumEffectiveCompressionChroma);
+#ifdef __DML_VBA_DEBUG__
+					dml_print("DML::%s: k=%0d, ReadBandwidthSurfaceChroma = %f\n",
+							__func__, k, ReadBandwidthSurfaceChroma[k]);
+					dml_print("DML::%s: k=%0d, NetDCCRateChroma = %f\n",
+							__func__, k, NetDCCRateChroma[k]);
+					dml_print("DML::%s: k=%0d, MaximumEffectiveCompressionChroma = %f\n",
+							__func__, k, MaximumEffectiveCompressionChroma);
+#endif
+					TotalZeroSizeRequestReadBandwidth = TotalZeroSizeRequestReadBandwidth
+							+ ReadBandwidthSurfaceChroma[k]
+									* DCCFractionOfZeroSizeRequestsChroma[k];
+					TotalZeroSizeCompressedReadBandwidth = TotalZeroSizeCompressedReadBandwidth
+							+ ReadBandwidthSurfaceChroma[k]
+									* DCCFractionOfZeroSizeRequestsChroma[k]
+									/ MaximumEffectiveCompressionChroma;
+				}
+			} else {
+				TotalCompressedReadBandwidth = TotalCompressedReadBandwidth
+						+ ReadBandwidthSurfaceLuma[k] + ReadBandwidthSurfaceChroma[k];
+			}
+			TotalRowReadBandwidth = TotalRowReadBandwidth
+					+ DPPPerSurface[k] * (meta_row_bw[k] + dpte_row_bw[k]);
+		}
+	}
+
+	AverageDCCCompressionRate = TotalDataReadBandwidth / TotalCompressedReadBandwidth;
+	AverageDCCZeroSizeFraction = TotalZeroSizeRequestReadBandwidth / TotalDataReadBandwidth;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: UnboundedRequestEnabled = %d\n", __func__, UnboundedRequestEnabled);
+	dml_print("DML::%s: TotalCompressedReadBandwidth = %f\n", __func__, TotalCompressedReadBandwidth);
+	dml_print("DML::%s: TotalZeroSizeRequestReadBandwidth = %f\n", __func__, TotalZeroSizeRequestReadBandwidth);
+	dml_print("DML::%s: TotalZeroSizeCompressedReadBandwidth = %f\n",
+			__func__, TotalZeroSizeCompressedReadBandwidth);
+	dml_print("DML::%s: MaximumEffectiveCompressionLuma = %f\n", __func__, MaximumEffectiveCompressionLuma);
+	dml_print("DML::%s: MaximumEffectiveCompressionChroma = %f\n", __func__, MaximumEffectiveCompressionChroma);
+	dml_print("DML::%s: AverageDCCCompressionRate = %f\n", __func__, AverageDCCCompressionRate);
+	dml_print("DML::%s: AverageDCCZeroSizeFraction = %f\n", __func__, AverageDCCZeroSizeFraction);
+	dml_print("DML::%s: CompbufReservedSpace64B = %d\n", __func__, CompbufReservedSpace64B);
+	dml_print("DML::%s: CompbufReservedSpaceZs = %d\n", __func__, CompbufReservedSpaceZs);
+	dml_print("DML::%s: CompressedBufferSizeInkByte = %d\n", __func__, CompressedBufferSizeInkByte);
+#endif
+	if (AverageDCCZeroSizeFraction == 1) {
+		AverageZeroSizeCompressionRate = TotalZeroSizeRequestReadBandwidth
+				/ TotalZeroSizeCompressedReadBandwidth;
+		EffectiveCompressedBufferSize = (double) MetaFIFOSizeInKEntries * 1024 * 64
+				* AverageZeroSizeCompressionRate
+				+ ((double) ZeroSizeBufferEntries - CompbufReservedSpaceZs) * 64
+						* AverageZeroSizeCompressionRate;
+	} else if (AverageDCCZeroSizeFraction > 0) {
+		AverageZeroSizeCompressionRate = TotalZeroSizeRequestReadBandwidth
+				/ TotalZeroSizeCompressedReadBandwidth;
+		EffectiveCompressedBufferSize = dml_min(
+				(double) CompressedBufferSizeInkByte * 1024 * AverageDCCCompressionRate,
+				(double) MetaFIFOSizeInKEntries * 1024 * 64
+					/ (AverageDCCZeroSizeFraction / AverageZeroSizeCompressionRate
+					+ 1 / AverageDCCCompressionRate))
+					+ dml_min(((double) ROBBufferSizeInKByte * 1024 - CompbufReservedSpace64B * 64)
+					* AverageDCCCompressionRate,
+					((double) ZeroSizeBufferEntries - CompbufReservedSpaceZs) * 64
+					/ (AverageDCCZeroSizeFraction / AverageZeroSizeCompressionRate));
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: min 1 = %f\n", __func__,
+				CompressedBufferSizeInkByte * 1024 * AverageDCCCompressionRate);
+		dml_print("DML::%s: min 2 = %f\n", __func__, MetaFIFOSizeInKEntries * 1024 * 64 /
+				(AverageDCCZeroSizeFraction / AverageZeroSizeCompressionRate + 1 /
+						AverageDCCCompressionRate));
+		dml_print("DML::%s: min 3 = %f\n", __func__, (ROBBufferSizeInKByte * 1024 -
+				CompbufReservedSpace64B * 64) * AverageDCCCompressionRate);
+		dml_print("DML::%s: min 4 = %f\n", __func__, (ZeroSizeBufferEntries - CompbufReservedSpaceZs) * 64 /
+				(AverageDCCZeroSizeFraction / AverageZeroSizeCompressionRate));
+#endif
+	} else {
+		EffectiveCompressedBufferSize = dml_min(
+				(double) CompressedBufferSizeInkByte * 1024 * AverageDCCCompressionRate,
+				(double) MetaFIFOSizeInKEntries * 1024 * 64 * AverageDCCCompressionRate)
+				+ ((double) ROBBufferSizeInKByte * 1024 - CompbufReservedSpace64B * 64)
+						* AverageDCCCompressionRate;
+
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: min 1 = %f\n", __func__,
+				CompressedBufferSizeInkByte * 1024 * AverageDCCCompressionRate);
+		dml_print("DML::%s: min 2 = %f\n", __func__,
+				MetaFIFOSizeInKEntries * 1024 * 64 * AverageDCCCompressionRate);
+#endif
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: MetaFIFOSizeInKEntries = %d\n", __func__, MetaFIFOSizeInKEntries);
+	dml_print("DML::%s: AverageZeroSizeCompressionRate = %f\n", __func__, AverageZeroSizeCompressionRate);
+	dml_print("DML::%s: EffectiveCompressedBufferSize = %f\n", __func__, EffectiveCompressedBufferSize);
+#endif
+
+	*StutterPeriod = 0;
+
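+	/* The critical surface is the one with the shortest DET buffering time;
+	 * that time becomes the stutter period.
+	 */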
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe) {
+			LinesInDETY = ((double) DETBufferSizeY[k]
+					+ (UnboundedRequestEnabled == true ? EffectiveCompressedBufferSize : 0)
+							* ReadBandwidthSurfaceLuma[k] / TotalDataReadBandwidth)
+					/ BytePerPixelDETY[k] / SwathWidthY[k];
+			LinesInDETYRoundedDownToSwath = dml_floor(LinesInDETY, SwathHeightY[k]);
+			DETBufferingTimeY = LinesInDETYRoundedDownToSwath * ((double) HTotal[k] / PixelClock[k])
+					/ VRatio[k];
+#ifdef __DML_VBA_DEBUG__
+			dml_print("DML::%s: k=%0d, DETBufferSizeY = %d\n", __func__, k, DETBufferSizeY[k]);
+			dml_print("DML::%s: k=%0d, BytePerPixelDETY = %f\n", __func__, k, BytePerPixelDETY[k]);
+			dml_print("DML::%s: k=%0d, SwathWidthY = %d\n", __func__, k, SwathWidthY[k]);
+			dml_print("DML::%s: k=%0d, ReadBandwidthSurfaceLuma = %f\n",
+					__func__, k, ReadBandwidthSurfaceLuma[k]);
+			dml_print("DML::%s: k=%0d, TotalDataReadBandwidth = %f\n", __func__, k, TotalDataReadBandwidth);
+			dml_print("DML::%s: k=%0d, LinesInDETY = %f\n", __func__, k, LinesInDETY);
+			dml_print("DML::%s: k=%0d, LinesInDETYRoundedDownToSwath = %f\n",
+					__func__, k, LinesInDETYRoundedDownToSwath);
+			dml_print("DML::%s: k=%0d, HTotal = %d\n", __func__, k, HTotal[k]);
+			dml_print("DML::%s: k=%0d, PixelClock = %f\n", __func__, k, PixelClock[k]);
+			dml_print("DML::%s: k=%0d, VRatio = %f\n", __func__, k, VRatio[k]);
+			dml_print("DML::%s: k=%0d, DETBufferingTimeY = %f\n", __func__, k, DETBufferingTimeY);
+#endif
+
+			if (!FoundCriticalSurface || DETBufferingTimeY < *StutterPeriod) {
+				bool isInterlaceTiming = Interlace[k] && !ProgressiveToInterlaceUnitInOPP;
+
+				FoundCriticalSurface = true;
+				*StutterPeriod = DETBufferingTimeY;
+				FrameTimeCriticalSurface = (
+						isInterlaceTiming ?
+								dml_floor((double) VTotal[k] / 2.0, 1.0) : VTotal[k])
+						* (double) HTotal[k] / PixelClock[k];
+				VActiveTimeCriticalSurface = (
+						isInterlaceTiming ?
+								dml_floor((double) VActive[k] / 2.0, 1.0) : VActive[k])
+						* (double) HTotal[k] / PixelClock[k];
+				BytePerPixelYCriticalSurface = BytePerPixelY[k];
+				SwathWidthYCriticalSurface = SwathWidthY[k];
+				SwathHeightYCriticalSurface = SwathHeightY[k];
+				BlockWidth256BytesYCriticalSurface = BlockWidth256BytesY[k];
+				LinesToFinishSwathTransferStutterCriticalSurface = SwathHeightY[k]
+						- (LinesInDETY - LinesInDETYRoundedDownToSwath);
+				DETBufferSizeYCriticalSurface = DETBufferSizeY[k];
+				MinTTUVBlankCriticalSurface = MinTTUVBlank[k];
+				doublePlaneCriticalSurface = (ReadBandwidthSurfaceChroma[k] == 0);
+				doublePipeCriticalSurface = (DPPPerSurface[k] == 1);
+
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: k=%0d, FoundCriticalSurface                = %d\n",
+						__func__, k, FoundCriticalSurface);
+				dml_print("DML::%s: k=%0d, StutterPeriod                       = %f\n",
+						__func__, k, *StutterPeriod);
+				dml_print("DML::%s: k=%0d, MinTTUVBlankCriticalSurface         = %f\n",
+						__func__, k, MinTTUVBlankCriticalSurface);
+				dml_print("DML::%s: k=%0d, FrameTimeCriticalSurface            = %f\n",
+						__func__, k, FrameTimeCriticalSurface);
+				dml_print("DML::%s: k=%0d, VActiveTimeCriticalSurface          = %f\n",
+						__func__, k, VActiveTimeCriticalSurface);
+				dml_print("DML::%s: k=%0d, BytePerPixelYCriticalSurface        = %d\n",
+						__func__, k, BytePerPixelYCriticalSurface);
+				dml_print("DML::%s: k=%0d, SwathWidthYCriticalSurface          = %f\n",
+						__func__, k, SwathWidthYCriticalSurface);
+				dml_print("DML::%s: k=%0d, SwathHeightYCriticalSurface         = %f\n",
+						__func__, k, SwathHeightYCriticalSurface);
+				dml_print("DML::%s: k=%0d, BlockWidth256BytesYCriticalSurface  = %d\n",
+						__func__, k, BlockWidth256BytesYCriticalSurface);
+				dml_print("DML::%s: k=%0d, doublePlaneCriticalSurface          = %d\n",
+						__func__, k, doublePlaneCriticalSurface);
+				dml_print("DML::%s: k=%0d, doublePipeCriticalSurface           = %d\n",
+						__func__, k, doublePipeCriticalSurface);
+				dml_print("DML::%s: k=%0d, LinesToFinishSwathTransferStutterCriticalSurface = %f\n",
+						__func__, k, LinesToFinishSwathTransferStutterCriticalSurface);
+#endif
+			}
+		}
+	}
+
+	PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer = dml_min(*StutterPeriod * TotalDataReadBandwidth,
+			EffectiveCompressedBufferSize);
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: ROBBufferSizeInKByte = %d\n", __func__, ROBBufferSizeInKByte);
+	dml_print("DML::%s: AverageDCCCompressionRate = %f\n", __func__, AverageDCCCompressionRate);
+	dml_print("DML::%s: StutterPeriod * TotalDataReadBandwidth = %f\n",
+			__func__, *StutterPeriod * TotalDataReadBandwidth);
+	dml_print("DML::%s: EffectiveCompressedBufferSize = %f\n", __func__, EffectiveCompressedBufferSize);
+	dml_print("DML::%s: PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer = %f\n", __func__,
+			PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer);
+	dml_print("DML::%s: ReturnBW = %f\n", __func__, ReturnBW);
+	dml_print("DML::%s: TotalDataReadBandwidth = %f\n", __func__, TotalDataReadBandwidth);
+	dml_print("DML::%s: TotalRowReadBandwidth = %f\n", __func__, TotalRowReadBandwidth);
+	dml_print("DML::%s: DCFCLK = %f\n", __func__, DCFCLK);
+#endif
+
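+	/* Stutter burst time: the portion already held in the ROB/compressed
+	 * buffer drains at ReturnBW (in compressed form), the remainder is fetched
+	 * at 64 bytes per DCFCLK cycle, and the row (meta/PTE) traffic for one
+	 * stutter period is added at ReturnBW.
+	 */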
+	StutterBurstTime = PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer / AverageDCCCompressionRate
+			/ ReturnBW
+			+ (*StutterPeriod * TotalDataReadBandwidth
+					- PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer) / (DCFCLK * 64)
+			+ *StutterPeriod * TotalRowReadBandwidth / ReturnBW;
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: Part 1 = %f\n", __func__, PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer /
+			AverageDCCCompressionRate / ReturnBW);
+	dml_print("DML::%s: StutterPeriod * TotalDataReadBandwidth = %f\n",
+			__func__, (*StutterPeriod * TotalDataReadBandwidth));
+	dml_print("DML::%s: Part 2 = %f\n", __func__, (*StutterPeriod * TotalDataReadBandwidth -
+			PartOfUncompressedPixelBurstThatFitsInROBAndCompressedBuffer) / (DCFCLK * 64));
+	dml_print("DML::%s: Part 3 = %f\n", __func__, *StutterPeriod * TotalRowReadBandwidth / ReturnBW);
+	dml_print("DML::%s: StutterBurstTime = %f\n", __func__, StutterBurstTime);
+#endif
+	StutterBurstTime = dml_max(StutterBurstTime,
+			LinesToFinishSwathTransferStutterCriticalSurface * BytePerPixelYCriticalSurface
+					* SwathWidthYCriticalSurface / ReturnBW);
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: Time to finish residue swath=%f\n",
+			__func__,
+			LinesToFinishSwathTransferStutterCriticalSurface *
+			BytePerPixelYCriticalSurface * SwathWidthYCriticalSurface / ReturnBW);
+#endif
+
+	TotalActiveWriteback = 0;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (WritebackEnable[k])
+			TotalActiveWriteback = TotalActiveWriteback + 1;
+	}
+
+	if (TotalActiveWriteback == 0) {
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: SRExitTime = %f\n", __func__, SRExitTime);
+		dml_print("DML::%s: SRExitZ8Time = %f\n", __func__, SRExitZ8Time);
+		dml_print("DML::%s: StutterBurstTime = %f (final)\n", __func__, StutterBurstTime);
+		dml_print("DML::%s: StutterPeriod = %f\n", __func__, *StutterPeriod);
+#endif
+		*StutterEfficiencyNotIncludingVBlank = dml_max(0.,
+				1 - (SRExitTime + StutterBurstTime) / *StutterPeriod) * 100;
+		*Z8StutterEfficiencyNotIncludingVBlank = dml_max(0.,
+				1 - (SRExitZ8Time + StutterBurstTime) / *StutterPeriod) * 100;
+		*NumberOfStutterBurstsPerFrame = (
+				*StutterEfficiencyNotIncludingVBlank > 0 ?
+						dml_ceil(VActiveTimeCriticalSurface / *StutterPeriod, 1) : 0);
+		*Z8NumberOfStutterBurstsPerFrame = (
+				*Z8StutterEfficiencyNotIncludingVBlank > 0 ?
+						dml_ceil(VActiveTimeCriticalSurface / *StutterPeriod, 1) : 0);
+	} else {
+		*StutterEfficiencyNotIncludingVBlank = 0.;
+		*Z8StutterEfficiencyNotIncludingVBlank = 0.;
+		*NumberOfStutterBurstsPerFrame = 0;
+		*Z8NumberOfStutterBurstsPerFrame = 0;
+	}
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: VActiveTimeCriticalSurface = %f\n", __func__, VActiveTimeCriticalSurface);
+	dml_print("DML::%s: StutterEfficiencyNotIncludingVBlank = %f\n",
+			__func__, *StutterEfficiencyNotIncludingVBlank);
+	dml_print("DML::%s: Z8StutterEfficiencyNotIncludingVBlank = %f\n",
+			__func__, *Z8StutterEfficiencyNotIncludingVBlank);
+	dml_print("DML::%s: NumberOfStutterBurstsPerFrame = %d\n", __func__, *NumberOfStutterBurstsPerFrame);
+	dml_print("DML::%s: Z8NumberOfStutterBurstsPerFrame = %d\n", __func__, *Z8NumberOfStutterBurstsPerFrame);
+#endif
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (UseMALLForPStateChange[k] != dm_use_mall_pstate_change_phantom_pipe) {
+			if (BlendingAndTiming[k] == k) {
+				if (TotalNumberOfActiveOTG == 0) {
+					doublePixelClock = PixelClock[k];
+					doubleHTotal = HTotal[k];
+					doubleVTotal = VTotal[k];
+				} else if (doublePixelClock != PixelClock[k] || doubleHTotal != HTotal[k]
+						|| doubleVTotal != VTotal[k]) {
+					SameTiming = false;
+				}
+				TotalNumberOfActiveOTG = TotalNumberOfActiveOTG + 1;
+			}
+		}
+	}
+
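+	/* VBlank time is credited to stutter only when timings are synchronized
+	 * (or a single OTG is active), all OTGs share the same timing, and the
+	 * last stutter period plus MinTTUVBlank of the critical surface exceeds
+	 * the stutter enter + exit watermark.
+	 */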
+	if (*StutterEfficiencyNotIncludingVBlank > 0) {
+		LastStutterPeriod = VActiveTimeCriticalSurface - (*NumberOfStutterBurstsPerFrame - 1) * *StutterPeriod;
+
+		if ((SynchronizeTimingsFinal || TotalNumberOfActiveOTG == 1) && SameTiming
+				&& LastStutterPeriod + MinTTUVBlankCriticalSurface > StutterEnterPlusExitWatermark) {
+			*StutterEfficiency = (1 - (*NumberOfStutterBurstsPerFrame * SRExitTime
+						+ StutterBurstTime * VActiveTimeCriticalSurface
+						/ *StutterPeriod) / FrameTimeCriticalSurface) * 100;
+		} else {
+			*StutterEfficiency = *StutterEfficiencyNotIncludingVBlank;
+		}
+	} else {
+		*StutterEfficiency = 0;
+	}
+
+	if (*Z8StutterEfficiencyNotIncludingVBlank > 0) {
+		LastZ8StutterPeriod = VActiveTimeCriticalSurface
+				- (*NumberOfStutterBurstsPerFrame - 1) * *StutterPeriod;
+		if ((SynchronizeTimingsFinal || TotalNumberOfActiveOTG == 1) && SameTiming && LastZ8StutterPeriod +
+				MinTTUVBlankCriticalSurface > Z8StutterEnterPlusExitWatermark) {
+			*Z8StutterEfficiency = (1 - (*NumberOfStutterBurstsPerFrame * SRExitZ8Time + StutterBurstTime
+				* VActiveTimeCriticalSurface / *StutterPeriod) / FrameTimeCriticalSurface) * 100;
+		} else {
+			*Z8StutterEfficiency = *Z8StutterEfficiencyNotIncludingVBlank;
+		}
+	} else {
+		*Z8StutterEfficiency = 0.;
+	}
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: LastZ8StutterPeriod = %f\n", __func__, LastZ8StutterPeriod);
+	dml_print("DML::%s: Z8StutterEnterPlusExitWatermark = %f\n", __func__, Z8StutterEnterPlusExitWatermark);
+	dml_print("DML::%s: StutterBurstTime = %f\n", __func__, StutterBurstTime);
+	dml_print("DML::%s: StutterPeriod = %f\n", __func__, *StutterPeriod);
+	dml_print("DML::%s: StutterEfficiency = %f\n", __func__, *StutterEfficiency);
+	dml_print("DML::%s: Z8StutterEfficiency = %f\n", __func__, *Z8StutterEfficiency);
+	dml_print("DML::%s: StutterEfficiencyNotIncludingVBlank = %f\n",
+			__func__, *StutterEfficiencyNotIncludingVBlank);
+	dml_print("DML::%s: Z8NumberOfStutterBurstsPerFrame = %d\n", __func__, *Z8NumberOfStutterBurstsPerFrame);
+#endif
+
+	SwathSizeCriticalSurface = BytePerPixelYCriticalSurface * SwathHeightYCriticalSurface
+			* dml_ceil(SwathWidthYCriticalSurface, BlockWidth256BytesYCriticalSurface);
+	LastChunkOfSwathSize = SwathSizeCriticalSurface % (PixelChunkSizeInKByte * 1024);
+	MissingPartOfLastSwathOfDETSize = dml_ceil(DETBufferSizeYCriticalSurface, SwathSizeCriticalSurface)
+			- DETBufferSizeYCriticalSurface;
+
+	*DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE = !(!UnboundedRequestEnabled && (NumberOfActiveSurfaces == 1)
+			&& doublePlaneCriticalSurface && doublePipeCriticalSurface && (LastChunkOfSwathSize > 0)
+			&& (LastChunkOfSwathSize <= 4096) && (MissingPartOfLastSwathOfDETSize > 0)
+			&& (MissingPartOfLastSwathOfDETSize <= LastChunkOfSwathSize));
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: SwathSizeCriticalSurface = %d\n", __func__, SwathSizeCriticalSurface);
+	dml_print("DML::%s: LastChunkOfSwathSize = %d\n", __func__, LastChunkOfSwathSize);
+	dml_print("DML::%s: MissingPartOfLastSwathOfDETSize = %d\n", __func__, MissingPartOfLastSwathOfDETSize);
+	dml_print("DML::%s: DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE = %d\n", __func__, *DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE);
+#endif
+} // CalculateStutterEfficiency
+
+void dml32_CalculateMaxDETAndMinCompressedBufferSize(
+		unsigned int    ConfigReturnBufferSizeInKByte,
+		unsigned int    ROBBufferSizeInKByte,
+		unsigned int MaxNumDPP,
+		bool nomDETInKByteOverrideEnable, // VBA_DELTA, allow DV to override default DET size
+		unsigned int nomDETInKByteOverrideValue,  // VBA_DELTA
+
+		/* Output */
+		unsigned int *MaxTotalDETInKByte,
+		unsigned int *nomDETInKByte,
+		unsigned int *MinCompressedBufferSizeInKByte)
+{
+	bool     det_buff_size_override_en  = nomDETInKByteOverrideEnable;
+	unsigned int        det_buff_size_override_val = nomDETInKByteOverrideValue;
+
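+	/* Up to 4/5 of the combined return buffer and ROB space is usable as DET,
+	 * in 64KB granularity; what remains of the return buffer is the minimum
+	 * compressed buffer size.
+	 */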
+	*MaxTotalDETInKByte = dml_ceil(((double)ConfigReturnBufferSizeInKByte +
+			(double) ROBBufferSizeInKByte) * 4.0 / 5.0, 64);
+	*nomDETInKByte = dml_floor((double) *MaxTotalDETInKByte / (double) MaxNumDPP, 64);
+	*MinCompressedBufferSizeInKByte = ConfigReturnBufferSizeInKByte - *MaxTotalDETInKByte;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: ConfigReturnBufferSizeInKByte = %0d\n", __func__, ConfigReturnBufferSizeInKByte);
+	dml_print("DML::%s: ROBBufferSizeInKByte = %0d\n", __func__, ROBBufferSizeInKByte);
+	dml_print("DML::%s: MaxNumDPP = %0d\n", __func__, MaxNumDPP);
+	dml_print("DML::%s: MaxTotalDETInKByte = %0d\n", __func__, *MaxTotalDETInKByte);
+	dml_print("DML::%s: nomDETInKByte = %0d\n", __func__, *nomDETInKByte);
+	dml_print("DML::%s: MinCompressedBufferSizeInKByte = %0d\n", __func__, *MinCompressedBufferSizeInKByte);
+#endif
+
+	if (det_buff_size_override_en) {
+		*nomDETInKByte = det_buff_size_override_val;
+#ifdef __DML_VBA_DEBUG__
+		dml_print("DML::%s: nomDETInKByte = %0d (override)\n", __func__, *nomDETInKByte);
+#endif
+	}
+} // CalculateMaxDETAndMinCompressedBufferSize
+
+bool dml32_CalculateVActiveBandwithSupport(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		bool NotUrgentLatencyHiding[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double cursor_bw[],
+		double meta_row_bandwidth[],
+		double dpte_row_bandwidth[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[])
+{
+	unsigned int k;
+	bool NotEnoughUrgentLatencyHiding = false;
+	bool CalculateVActiveBandwithSupport_val = false;
+	double VActiveBandwith = 0;
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (NotUrgentLatencyHiding[k]) {
+			NotEnoughUrgentLatencyHiding = true;
+		}
+	}
+
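+	/* VActive bandwidth is supported when the summed luma/chroma read and
+	 * cursor demands (scaled by their urgent burst factors) plus per-DPP
+	 * meta/PTE row bandwidth fit within ReturnBW and no surface fails to hide
+	 * urgent latency.
+	 */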
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		VActiveBandwith = VActiveBandwith + ReadBandwidthLuma[k] * UrgentBurstFactorLuma[k]
+				+ ReadBandwidthChroma[k] * UrgentBurstFactorChroma[k]
+				+ cursor_bw[k] * UrgentBurstFactorCursor[k]
+				+ NumberOfDPP[k] * meta_row_bandwidth[k]
+				+ NumberOfDPP[k] * dpte_row_bandwidth[k];
+	}
+
+	CalculateVActiveBandwithSupport_val = (VActiveBandwith <= ReturnBW) && !NotEnoughUrgentLatencyHiding;
+
+#ifdef __DML_VBA_DEBUG__
+	dml_print("DML::%s: NotEnoughUrgentLatencyHiding        = %d\n", __func__, NotEnoughUrgentLatencyHiding);
+	dml_print("DML::%s: VActiveBandwith                     = %f\n", __func__, VActiveBandwith);
+	dml_print("DML::%s: ReturnBW                            = %f\n", __func__, ReturnBW);
+	dml_print("DML::%s: CalculateVActiveBandwithSupport_val = %d\n", __func__, CalculateVActiveBandwithSupport_val);
+#endif
+	return CalculateVActiveBandwithSupport_val;
+}
+
+void dml32_CalculatePrefetchBandwithSupport(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		bool NotUrgentLatencyHiding[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double PrefetchBandwidthLuma[],
+		double PrefetchBandwidthChroma[],
+		double cursor_bw[],
+		double meta_row_bandwidth[],
+		double dpte_row_bandwidth[],
+		double cursor_bw_pre[],
+		double prefetch_vmrow_bw[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[],
+		double UrgentBurstFactorLumaPre[],
+		double UrgentBurstFactorChromaPre[],
+		double UrgentBurstFactorCursorPre[],
+
+		/* output */
+		double  *PrefetchBandwidth,
+		double  *FractionOfUrgentBandwidth,
+		bool *PrefetchBandwidthSupport)
+{
+	unsigned int k;
+	bool NotEnoughUrgentLatencyHiding = false;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (NotUrgentLatencyHiding[k]) {
+			NotEnoughUrgentLatencyHiding = true;
+		}
+	}
+
+	*PrefetchBandwidth = 0;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		*PrefetchBandwidth = *PrefetchBandwidth + dml_max3(NumberOfDPP[k] * prefetch_vmrow_bw[k],
+				ReadBandwidthLuma[k] * UrgentBurstFactorLuma[k]
+						+ ReadBandwidthChroma[k] * UrgentBurstFactorChroma[k]
+						+ cursor_bw[k] * UrgentBurstFactorCursor[k]
+						+ NumberOfDPP[k] * (meta_row_bandwidth[k] + dpte_row_bandwidth[k]),
+				NumberOfDPP[k] * (PrefetchBandwidthLuma[k] * UrgentBurstFactorLumaPre[k]
+						+ PrefetchBandwidthChroma[k] * UrgentBurstFactorChromaPre[k])
+						+ cursor_bw_pre[k] * UrgentBurstFactorCursorPre[k]);
+	}
+
+	*PrefetchBandwidthSupport = (*PrefetchBandwidth <= ReturnBW) && !NotEnoughUrgentLatencyHiding;
+	*FractionOfUrgentBandwidth = *PrefetchBandwidth / ReturnBW;
+}
+
+double dml32_CalculateBandwidthAvailableForImmediateFlip(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double PrefetchBandwidthLuma[],
+		double PrefetchBandwidthChroma[],
+		double cursor_bw[],
+		double cursor_bw_pre[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[],
+		double UrgentBurstFactorLumaPre[],
+		double UrgentBurstFactorChromaPre[],
+		double UrgentBurstFactorCursorPre[])
+{
+	unsigned int k;
+	double CalculateBandwidthAvailableForImmediateFlip_val = ReturnBW;
+
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		CalculateBandwidthAvailableForImmediateFlip_val = CalculateBandwidthAvailableForImmediateFlip_val
+				- dml_max(ReadBandwidthLuma[k] * UrgentBurstFactorLuma[k]
+						+ ReadBandwidthChroma[k] * UrgentBurstFactorChroma[k]
+						+ cursor_bw[k] * UrgentBurstFactorCursor[k],
+					NumberOfDPP[k] * (PrefetchBandwidthLuma[k] * UrgentBurstFactorLumaPre[k]
+						+ PrefetchBandwidthChroma[k] * UrgentBurstFactorChromaPre[k])
+						+ cursor_bw_pre[k] * UrgentBurstFactorCursorPre[k]);
+	}
+
+	return CalculateBandwidthAvailableForImmediateFlip_val;
+}
+
+void dml32_CalculateImmediateFlipBandwithSupport(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		enum immediate_flip_requirement ImmediateFlipRequirement[],
+		double final_flip_bw[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double PrefetchBandwidthLuma[],
+		double PrefetchBandwidthChroma[],
+		double cursor_bw[],
+		double meta_row_bandwidth[],
+		double dpte_row_bandwidth[],
+		double cursor_bw_pre[],
+		double prefetch_vmrow_bw[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[],
+		double UrgentBurstFactorLumaPre[],
+		double UrgentBurstFactorChromaPre[],
+		double UrgentBurstFactorCursorPre[],
+
+		/* output */
+		double  *TotalBandwidth,
+		double  *FractionOfUrgentBandwidth,
+		bool *ImmediateFlipBandwidthSupport)
+{
+	unsigned int k;
+	*TotalBandwidth = 0;
+	for (k = 0; k < NumberOfActiveSurfaces; ++k) {
+		if (ImmediateFlipRequirement[k] != dm_immediate_flip_not_required) {
+			*TotalBandwidth = *TotalBandwidth + dml_max3(NumberOfDPP[k] * prefetch_vmrow_bw[k],
+					NumberOfDPP[k] * final_flip_bw[k]
+							+ ReadBandwidthLuma[k] * UrgentBurstFactorLuma[k]
+							+ ReadBandwidthChroma[k] * UrgentBurstFactorChroma[k]
+							+ cursor_bw[k] * UrgentBurstFactorCursor[k],
+					NumberOfDPP[k] * (final_flip_bw[k]
+							+ PrefetchBandwidthLuma[k] * UrgentBurstFactorLumaPre[k]
+							+ PrefetchBandwidthChroma[k] * UrgentBurstFactorChromaPre[k])
+							+ cursor_bw_pre[k] * UrgentBurstFactorCursorPre[k]);
+		} else {
+			*TotalBandwidth = *TotalBandwidth + dml_max3(NumberOfDPP[k] * prefetch_vmrow_bw[k],
+					NumberOfDPP[k] * (meta_row_bandwidth[k] + dpte_row_bandwidth[k])
+							+ ReadBandwidthLuma[k] * UrgentBurstFactorLuma[k]
+							+ ReadBandwidthChroma[k] * UrgentBurstFactorChroma[k]
+							+ cursor_bw[k] * UrgentBurstFactorCursor[k],
+					NumberOfDPP[k] * (PrefetchBandwidthLuma[k] * UrgentBurstFactorLumaPre[k]
+							+ PrefetchBandwidthChroma[k] * UrgentBurstFactorChromaPre[k])
+							+ cursor_bw_pre[k] * UrgentBurstFactorCursorPre[k]);
+		}
+	}
+	*ImmediateFlipBandwidthSupport = (*TotalBandwidth <= ReturnBW);
+	*FractionOfUrgentBandwidth = *TotalBandwidth / ReturnBW;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
new file mode 100644
index 000000000000..72461b934ee0
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
@@ -0,0 +1,1175 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DML_DCN32_DISPLAY_MODE_VBA_UTIL_32_H__
+#define __DML_DCN32_DISPLAY_MODE_VBA_UTIL_32_H__
+
+#include "../display_mode_enums.h"
+#include "os_types.h"
+#include "../dc_features.h"
+#include "../display_mode_structs.h"
+
+unsigned int dml32_dscceComputeDelay(
+		unsigned int bpc,
+		double BPP,
+		unsigned int sliceWidth,
+		unsigned int numSlices,
+		enum output_format_class pixelFormat,
+		enum output_encoder_class Output);
+
+unsigned int dml32_dscComputeDelay(enum output_format_class pixelFormat, enum output_encoder_class Output);
+
+bool IsVertical(enum dm_rotation_angle Scan);
+
+void dml32_CalculateBytePerPixelAndBlockSizes(
+		enum source_format_class SourcePixelFormat,
+		enum dm_swizzle_mode SurfaceTiling,
+
+		/*Output*/
+		unsigned int *BytePerPixelY,
+		unsigned int *BytePerPixelC,
+		double           *BytePerPixelDETY,
+		double           *BytePerPixelDETC,
+		unsigned int *BlockHeight256BytesY,
+		unsigned int *BlockHeight256BytesC,
+		unsigned int *BlockWidth256BytesY,
+		unsigned int *BlockWidth256BytesC,
+		unsigned int *MacroTileHeightY,
+		unsigned int *MacroTileHeightC,
+		unsigned int *MacroTileWidthY,
+		unsigned int *MacroTileWidthC);
+
+void dml32_CalculateSinglePipeDPPCLKAndSCLThroughput(
+		double HRatio,
+		double HRatioChroma,
+		double VRatio,
+		double VRatioChroma,
+		double MaxDCHUBToPSCLThroughput,
+		double MaxPSCLToLBThroughput,
+		double PixelClock,
+		enum source_format_class SourcePixelFormat,
+		unsigned int HTaps,
+		unsigned int HTapsChroma,
+		unsigned int VTaps,
+		unsigned int VTapsChroma,
+
+		/* output */
+		double *PSCL_THROUGHPUT,
+		double *PSCL_THROUGHPUT_CHROMA,
+		double *DPPCLKUsingSingleDPP);
+
+void dml32_CalculateSwathAndDETConfiguration(
+		unsigned int DETSizeOverride[],
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		unsigned int ConfigReturnBufferSizeInKByte,
+		unsigned int MaxTotalDETInKByte,
+		unsigned int MinCompressedBufferSizeInKByte,
+		double ForceSingleDPP,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int nomDETInKByte,
+		enum unbounded_requesting_policy UseUnboundedRequestingFinal,
+		unsigned int CompressedBufferSegmentSizeInkByteFinal,
+		enum output_encoder_class Output[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double MaximumSwathWidthLuma[],
+		double MaximumSwathWidthChroma[],
+		enum dm_rotation_angle SourceRotation[],
+		bool ViewportStationary[],
+		enum source_format_class SourcePixelFormat[],
+		enum dm_swizzle_mode SurfaceTiling[],
+		unsigned int ViewportWidth[],
+		unsigned int ViewportHeight[],
+		unsigned int ViewportXStart[],
+		unsigned int ViewportYStart[],
+		unsigned int ViewportXStartC[],
+		unsigned int ViewportYStartC[],
+		unsigned int SurfaceWidthY[],
+		unsigned int SurfaceWidthC[],
+		unsigned int SurfaceHeightY[],
+		unsigned int SurfaceHeightC[],
+		unsigned int Read256BytesBlockHeightY[],
+		unsigned int Read256BytesBlockHeightC[],
+		unsigned int Read256BytesBlockWidthY[],
+		unsigned int Read256BytesBlockWidthC[],
+		enum odm_combine_mode ODMMode[],
+		unsigned int BlendingAndTiming[],
+		unsigned int BytePerPixY[],
+		unsigned int BytePerPixC[],
+		double BytePerPixDETY[],
+		double BytePerPixDETC[],
+		unsigned int HActive[],
+		double HRatio[],
+		double HRatioChroma[],
+		unsigned int DPPPerSurface[],
+
+		/* Output */
+		unsigned int swath_width_luma_ub[],
+		unsigned int swath_width_chroma_ub[],
+		double SwathWidth[],
+		double SwathWidthChroma[],
+		unsigned int SwathHeightY[],
+		unsigned int SwathHeightC[],
+		unsigned int DETBufferSizeInKByte[],
+		unsigned int DETBufferSizeY[],
+		unsigned int DETBufferSizeC[],
+		bool *UnboundedRequestEnabled,
+		unsigned int *CompressedBufferSizeInkByte,
+		bool ViewportSizeSupportPerSurface[],
+		bool *ViewportSizeSupport);
+
+void dml32_CalculateSwathWidth(
+		bool ForceSingleDPP,
+		unsigned int NumberOfActiveSurfaces,
+		enum source_format_class SourcePixelFormat[],
+		enum dm_rotation_angle SourceScan[],
+		bool ViewportStationary[],
+		unsigned int ViewportWidth[],
+		unsigned int ViewportHeight[],
+		unsigned int ViewportXStart[],
+		unsigned int ViewportYStart[],
+		unsigned int ViewportXStartC[],
+		unsigned int ViewportYStartC[],
+		unsigned int SurfaceWidthY[],
+		unsigned int SurfaceWidthC[],
+		unsigned int SurfaceHeightY[],
+		unsigned int SurfaceHeightC[],
+		enum odm_combine_mode ODMMode[],
+		unsigned int BytePerPixY[],
+		unsigned int BytePerPixC[],
+		unsigned int Read256BytesBlockHeightY[],
+		unsigned int Read256BytesBlockHeightC[],
+		unsigned int Read256BytesBlockWidthY[],
+		unsigned int Read256BytesBlockWidthC[],
+		unsigned int BlendingAndTiming[],
+		unsigned int HActive[],
+		double HRatio[],
+		unsigned int DPPPerSurface[],
+
+		/* Output */
+		double SwathWidthdoubleDPPY[],
+		double SwathWidthdoubleDPPC[],
+		double SwathWidthY[], // per-pipe
+		double SwathWidthC[], // per-pipe
+		unsigned int MaximumSwathHeightY[],
+		unsigned int MaximumSwathHeightC[],
+		unsigned int swath_width_luma_ub[], // per-pipe
+		unsigned int swath_width_chroma_ub[]);
+
+bool dml32_UnboundedRequest(enum unbounded_requesting_policy UseUnboundedRequestingFinal,
+		unsigned int TotalNumberOfActiveDPP,
+		bool NoChroma,
+		enum output_encoder_class Output);
+
+void dml32_CalculateDETBufferSize(
+		unsigned int DETSizeOverride[],
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		bool ForceSingleDPP,
+		unsigned int NumberOfActiveSurfaces,
+		bool UnboundedRequestEnabled,
+		unsigned int nomDETInKByte,
+		unsigned int MaxTotalDETInKByte,
+		unsigned int ConfigReturnBufferSizeInKByte,
+		unsigned int MinCompressedBufferSizeInKByte,
+		unsigned int CompressedBufferSegmentSizeInkByteFinal,
+		enum source_format_class SourcePixelFormat[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		unsigned int RoundedUpMaxSwathSizeBytesY[],
+		unsigned int RoundedUpMaxSwathSizeBytesC[],
+		unsigned int DPPPerSurface[],
+		/* Output */
+		unsigned int DETBufferSizeInKByte[],
+		unsigned int *CompressedBufferSizeInkByte);
+
+void dml32_CalculateODMMode(
+		unsigned int MaximumPixelsPerLinePerDSCUnit,
+		unsigned int HActive,
+		enum output_encoder_class Output,
+		enum odm_combine_policy ODMUse,
+		double StateDispclk,
+		double MaxDispclk,
+		bool DSCEnable,
+		unsigned int TotalNumberOfActiveDPP,
+		unsigned int MaxNumDPP,
+		double PixelClock,
+		double DISPCLKDPPCLKDSCCLKDownSpreading,
+		double DISPCLKRampingMargin,
+		double DISPCLKDPPCLKVCOSpeed,
+
+		/* Output */
+		bool *TotalAvailablePipesSupport,
+		unsigned int *NumberOfDPP,
+		enum odm_combine_mode *ODMMode,
+		double *RequiredDISPCLKPerSurface);
+
+double dml32_CalculateRequiredDispclk(
+		enum odm_combine_mode ODMMode,
+		double PixelClock,
+		double DISPCLKDPPCLKDSCCLKDownSpreading,
+		double DISPCLKRampingMargin,
+		double DISPCLKDPPCLKVCOSpeed,
+		double MaxDispclk);
+
+double dml32_RoundToDFSGranularity(double Clock, bool round_up, double VCOSpeed);
+
+void dml32_CalculateOutputLink(
+		double PHYCLKPerState,
+		double PHYCLKD18PerState,
+		double PHYCLKD32PerState,
+		double Downspreading,
+		bool IsMainSurfaceUsingTheIndicatedTiming,
+		enum output_encoder_class Output,
+		enum output_format_class OutputFormat,
+		unsigned int HTotal,
+		unsigned int HActive,
+		double PixelClockBackEnd,
+		double ForcedOutputLinkBPP,
+		unsigned int DSCInputBitPerComponent,
+		unsigned int NumberOfDSCSlices,
+		double AudioSampleRate,
+		unsigned int AudioSampleLayout,
+		enum odm_combine_mode ODMModeNoDSC,
+		enum odm_combine_mode ODMModeDSC,
+		bool DSCEnable,
+		unsigned int OutputLinkDPLanes,
+		enum dm_output_link_dp_rate OutputLinkDPRate,
+
+		/* Output */
+		bool *RequiresDSC,
+		double *RequiresFEC,
+		double  *OutBpp,
+		enum dm_output_type *OutputType,
+		enum dm_output_rate *OutputRate,
+		unsigned int *RequiredSlots);
+
+void dml32_CalculateDPPCLK(
+		unsigned int NumberOfActiveSurfaces,
+		double DISPCLKDPPCLKDSCCLKDownSpreading,
+		double DISPCLKDPPCLKVCOSpeed,
+		double DPPCLKUsingSingleDPP[],
+		unsigned int DPPPerSurface[],
+
+		/* output */
+		double *GlobalDPPCLK,
+		double Dppclk[]);
+
+double dml32_TruncToValidBPP(
+		double LinkBitRate,
+		unsigned int Lanes,
+		unsigned int HTotal,
+		unsigned int HActive,
+		double PixelClock,
+		double DesiredBPP,
+		bool DSCEnable,
+		enum output_encoder_class Output,
+		enum output_format_class Format,
+		unsigned int DSCInputBitPerComponent,
+		unsigned int DSCSlices,
+		unsigned int AudioRate,
+		unsigned int AudioLayout,
+		enum odm_combine_mode ODMModeNoDSC,
+		enum odm_combine_mode ODMModeDSC,
+		/* Output */
+		unsigned int *RequiredSlots);
+
+double dml32_RequiredDTBCLK(
+		bool              DSCEnable,
+		double               PixelClock,
+		enum output_format_class  OutputFormat,
+		double               OutputBpp,
+		unsigned int              DSCSlices,
+		unsigned int                 HTotal,
+		unsigned int                 HActive,
+		unsigned int              AudioRate,
+		unsigned int              AudioLayout);
+
+unsigned int dml32_DSCDelayRequirement(bool DSCEnabled,
+		enum odm_combine_mode ODMMode,
+		unsigned int DSCInputBitPerComponent,
+		double OutputBpp,
+		unsigned int HActive,
+		unsigned int HTotal,
+		unsigned int NumberOfDSCSlices,
+		enum output_format_class  OutputFormat,
+		enum output_encoder_class Output,
+		double PixelClock,
+		double PixelClockBackEnd);
+
+void dml32_CalculateSurfaceSizeInMall(
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int MALLAllocatedForDCN,
+		enum dm_use_mall_for_static_screen_mode UseMALLForStaticScreen[],
+		bool DCCEnable[],
+		bool ViewportStationary[],
+		unsigned int ViewportXStartY[],
+		unsigned int ViewportYStartY[],
+		unsigned int ViewportXStartC[],
+		unsigned int ViewportYStartC[],
+		unsigned int ViewportWidthY[],
+		unsigned int ViewportHeightY[],
+		unsigned int BytesPerPixelY[],
+		unsigned int ViewportWidthC[],
+		unsigned int ViewportHeightC[],
+		unsigned int BytesPerPixelC[],
+		unsigned int SurfaceWidthY[],
+		unsigned int SurfaceWidthC[],
+		unsigned int SurfaceHeightY[],
+		unsigned int SurfaceHeightC[],
+		unsigned int Read256BytesBlockWidthY[],
+		unsigned int Read256BytesBlockWidthC[],
+		unsigned int Read256BytesBlockHeightY[],
+		unsigned int Read256BytesBlockHeightC[],
+		unsigned int ReadBlockWidthY[],
+		unsigned int ReadBlockWidthC[],
+		unsigned int ReadBlockHeightY[],
+		unsigned int ReadBlockHeightC[],
+
+		/* Output */
+		unsigned int    SurfaceSizeInMALL[],
+		bool *ExceededMALLSize);
+
+void dml32_CalculateVMRowAndSwath(
+		unsigned int NumberOfActiveSurfaces,
+		DmlPipe myPipe[],
+		unsigned int SurfaceSizeInMALL[],
+		unsigned int PTEBufferSizeInRequestsLuma,
+		unsigned int PTEBufferSizeInRequestsChroma,
+		unsigned int DCCMetaBufferSizeBytes,
+		enum dm_use_mall_for_static_screen_mode UseMALLForStaticScreen[],
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		unsigned int MALLAllocatedForDCN,
+		double SwathWidthY[],
+		double SwathWidthC[],
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		unsigned int GPUVMMaxPageTableLevels,
+		unsigned int GPUVMMinPageSizeKBytes[],
+		unsigned int HostVMMinPageSize,
+
+		/* Output */
+		bool PTEBufferSizeNotExceeded[],
+		bool DCCMetaBufferSizeNotExceeded[],
+		unsigned int dpte_row_width_luma_ub[],
+		unsigned int dpte_row_width_chroma_ub[],
+		unsigned int dpte_row_height_luma[],
+		unsigned int dpte_row_height_chroma[],
+		unsigned int dpte_row_height_linear_luma[],     // VBA_DELTA
+		unsigned int dpte_row_height_linear_chroma[],   // VBA_DELTA
+		unsigned int meta_req_width[],
+		unsigned int meta_req_width_chroma[],
+		unsigned int meta_req_height[],
+		unsigned int meta_req_height_chroma[],
+		unsigned int meta_row_width[],
+		unsigned int meta_row_width_chroma[],
+		unsigned int meta_row_height[],
+		unsigned int meta_row_height_chroma[],
+		unsigned int vm_group_bytes[],
+		unsigned int dpte_group_bytes[],
+		unsigned int PixelPTEReqWidthY[],
+		unsigned int PixelPTEReqHeightY[],
+		unsigned int PTERequestSizeY[],
+		unsigned int PixelPTEReqWidthC[],
+		unsigned int PixelPTEReqHeightC[],
+		unsigned int PTERequestSizeC[],
+		unsigned int dpde0_bytes_per_frame_ub_l[],
+		unsigned int meta_pte_bytes_per_frame_ub_l[],
+		unsigned int dpde0_bytes_per_frame_ub_c[],
+		unsigned int meta_pte_bytes_per_frame_ub_c[],
+		double PrefetchSourceLinesY[],
+		double PrefetchSourceLinesC[],
+		double VInitPreFillY[],
+		double VInitPreFillC[],
+		unsigned int MaxNumSwathY[],
+		unsigned int MaxNumSwathC[],
+		double meta_row_bw[],
+		double dpte_row_bw[],
+		double PixelPTEBytesPerRow[],
+		double PDEAndMetaPTEBytesFrame[],
+		double MetaRowByte[],
+		bool use_one_row_for_frame[],
+		bool use_one_row_for_frame_flip[],
+		bool UsesMALLForStaticScreen[],
+		bool PTE_BUFFER_MODE[],
+		unsigned int BIGK_FRAGMENT_SIZE[]);
+
+unsigned int dml32_CalculateVMAndRowBytes(
+		bool ViewportStationary,
+		bool DCCEnable,
+		unsigned int NumberOfDPPs,
+		unsigned int BlockHeight256Bytes,
+		unsigned int BlockWidth256Bytes,
+		enum source_format_class SourcePixelFormat,
+		unsigned int SurfaceTiling,
+		unsigned int BytePerPixel,
+		enum dm_rotation_angle SourceScan,
+		double SwathWidth,
+		unsigned int ViewportHeight,
+		unsigned int    ViewportXStart,
+		unsigned int    ViewportYStart,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		unsigned int GPUVMMaxPageTableLevels,
+		unsigned int GPUVMMinPageSizeKBytes,
+		unsigned int HostVMMinPageSize,
+		unsigned int PTEBufferSizeInRequests,
+		unsigned int Pitch,
+		unsigned int DCCMetaPitch,
+		unsigned int MacroTileWidth,
+		unsigned int MacroTileHeight,
+
+		/* Output */
+		unsigned int *MetaRowByte,
+		unsigned int *PixelPTEBytesPerRow,
+		unsigned int    *dpte_row_width_ub,
+		unsigned int *dpte_row_height,
+		unsigned int *dpte_row_height_linear,
+		unsigned int    *PixelPTEBytesPerRow_one_row_per_frame,
+		unsigned int    *dpte_row_width_ub_one_row_per_frame,
+		unsigned int    *dpte_row_height_one_row_per_frame,
+		unsigned int *MetaRequestWidth,
+		unsigned int *MetaRequestHeight,
+		unsigned int *meta_row_width,
+		unsigned int *meta_row_height,
+		unsigned int *PixelPTEReqWidth,
+		unsigned int *PixelPTEReqHeight,
+		unsigned int *PTERequestSize,
+		unsigned int    *DPDE0BytesFrame,
+		unsigned int    *MetaPTEBytesFrame);
+
+double dml32_CalculatePrefetchSourceLines(
+		double VRatio,
+		unsigned int VTaps,
+		bool Interlace,
+		bool ProgressiveToInterlaceUnitInOPP,
+		unsigned int SwathHeight,
+		enum dm_rotation_angle SourceRotation,
+		bool ViewportStationary,
+		double SwathWidth,
+		unsigned int ViewportHeight,
+		unsigned int ViewportXStart,
+		unsigned int ViewportYStart,
+
+		/* Output */
+		double *VInitPreFill,
+		unsigned int *MaxNumSwath);
+
+void dml32_CalculateMALLUseForStaticScreen(
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int MALLAllocatedForDCNFinal,
+		enum dm_use_mall_for_static_screen_mode *UseMALLForStaticScreen,
+		unsigned int SurfaceSizeInMALL[],
+		bool one_row_per_frame_fits_in_buffer[],
+
+		/* output */
+		bool UsesMALLForStaticScreen[]);
+
+void dml32_CalculateRowBandwidth(
+		bool GPUVMEnable,
+		enum source_format_class SourcePixelFormat,
+		double VRatio,
+		double VRatioChroma,
+		bool DCCEnable,
+		double LineTime,
+		unsigned int MetaRowByteLuma,
+		unsigned int MetaRowByteChroma,
+		unsigned int meta_row_height_luma,
+		unsigned int meta_row_height_chroma,
+		unsigned int PixelPTEBytesPerRowLuma,
+		unsigned int PixelPTEBytesPerRowChroma,
+		unsigned int dpte_row_height_luma,
+		unsigned int dpte_row_height_chroma,
+		/* Output */
+		double *meta_row_bw,
+		double *dpte_row_bw);
+
+double dml32_CalculateUrgentLatency(
+		double UrgentLatencyPixelDataOnly,
+		double UrgentLatencyPixelMixedWithVMData,
+		double UrgentLatencyVMDataOnly,
+		bool   DoUrgentLatencyAdjustment,
+		double UrgentLatencyAdjustmentFabricClockComponent,
+		double UrgentLatencyAdjustmentFabricClockReference,
+		double FabricClock);
+
+void dml32_CalculateUrgentBurstFactor(
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange,
+		unsigned int    swath_width_luma_ub,
+		unsigned int    swath_width_chroma_ub,
+		unsigned int SwathHeightY,
+		unsigned int SwathHeightC,
+		double  LineTime,
+		double  UrgentLatency,
+		double  CursorBufferSize,
+		unsigned int CursorWidth,
+		unsigned int CursorBPP,
+		double  VRatio,
+		double  VRatioC,
+		double  BytePerPixelInDETY,
+		double  BytePerPixelInDETC,
+		unsigned int    DETBufferSizeY,
+		unsigned int    DETBufferSizeC,
+		/* Output */
+		double *UrgentBurstFactorCursor,
+		double *UrgentBurstFactorLuma,
+		double *UrgentBurstFactorChroma,
+		bool   *NotEnoughUrgentLatencyHiding);
+
+void dml32_CalculateDCFCLKDeepSleep(
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int BytePerPixelY[],
+		unsigned int BytePerPixelC[],
+		double VRatio[],
+		double VRatioChroma[],
+		double SwathWidthY[],
+		double SwathWidthC[],
+		unsigned int DPPPerSurface[],
+		double HRatio[],
+		double HRatioChroma[],
+		double PixelClock[],
+		double PSCL_THROUGHPUT[],
+		double PSCL_THROUGHPUT_CHROMA[],
+		double Dppclk[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		unsigned int ReturnBusWidth,
+
+		/* Output */
+		double *DCFClkDeepSleep);
+
+double dml32_CalculateWriteBackDelay(
+		enum source_format_class WritebackPixelFormat,
+		double WritebackHRatio,
+		double WritebackVRatio,
+		unsigned int WritebackVTaps,
+		unsigned int         WritebackDestinationWidth,
+		unsigned int         WritebackDestinationHeight,
+		unsigned int         WritebackSourceHeight,
+		unsigned int HTotal);
+
+void dml32_UseMinimumDCFCLK(
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		bool DRRDisplay[],
+		bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+		unsigned int MaxInterDCNTileRepeaters,
+		unsigned int MaxPrefetchMode,
+		double DRAMClockChangeLatencyFinal,
+		double FCLKChangeLatency,
+		double SREnterPlusExitTime,
+		unsigned int ReturnBusWidth,
+		unsigned int RoundTripPingLatencyCycles,
+		unsigned int ReorderingBytes,
+		unsigned int PixelChunkSizeInKByte,
+		unsigned int MetaChunkSize,
+		bool GPUVMEnable,
+		unsigned int GPUVMMaxPageTableLevels,
+		bool HostVMEnable,
+		unsigned int NumberOfActiveSurfaces,
+		double HostVMMinPageSize,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		bool DynamicMetadataVMEnabled,
+		bool ImmediateFlipRequirement,
+		bool ProgressiveToInterlaceUnitInOPP,
+		double MaxAveragePercentOfIdealSDPPortBWDisplayCanUseInNormalSystemOperation,
+		double PercentOfIdealSDPPortBWReceivedAfterUrgLatency,
+		unsigned int VTotal[],
+		unsigned int VActive[],
+		unsigned int DynamicMetadataTransmittedBytes[],
+		unsigned int DynamicMetadataLinesBeforeActiveRequired[],
+		bool Interlace[],
+		double RequiredDPPCLKPerSurface[][2][DC__NUM_DPP__MAX],
+		double RequiredDISPCLK[][2],
+		double UrgLatency[],
+		unsigned int NoOfDPP[][2][DC__NUM_DPP__MAX],
+		double ProjectedDCFClkDeepSleep[][2],
+		double MaximumVStartup[][2][DC__NUM_DPP__MAX],
+		unsigned int TotalNumberOfActiveDPP[][2],
+		unsigned int TotalNumberOfDCCActiveDPP[][2],
+		unsigned int dpte_group_bytes[],
+		double PrefetchLinesY[][2][DC__NUM_DPP__MAX],
+		double PrefetchLinesC[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_luma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int swath_width_chroma_ub_all_states[][2][DC__NUM_DPP__MAX],
+		unsigned int BytePerPixelY[],
+		unsigned int BytePerPixelC[],
+		unsigned int HTotal[],
+		double PixelClock[],
+		double PDEAndMetaPTEBytesPerFrame[][2][DC__NUM_DPP__MAX],
+		double DPTEBytesPerRow[][2][DC__NUM_DPP__MAX],
+		double MetaRowBytes[][2][DC__NUM_DPP__MAX],
+		bool DynamicMetadataEnable[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double DCFCLKPerState[],
+		/* Output */
+		double DCFCLKState[][2]);
+
+unsigned int dml32_CalculateExtraLatencyBytes(unsigned int ReorderingBytes,
+		unsigned int TotalNumberOfActiveDPP,
+		unsigned int PixelChunkSizeInKByte,
+		unsigned int TotalNumberOfDCCActiveDPP,
+		unsigned int MetaChunkSize,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int NumberOfDPP[],
+		unsigned int dpte_group_bytes[],
+		double HostVMInefficiencyFactor,
+		double HostVMMinPageSize,
+		unsigned int HostVMMaxNonCachedPageTableLevels);
+
+void dml32_CalculateVUpdateAndDynamicMetadataParameters(
+		unsigned int MaxInterDCNTileRepeaters,
+		double Dppclk,
+		double Dispclk,
+		double DCFClkDeepSleep,
+		double PixelClock,
+		unsigned int HTotal,
+		unsigned int VBlank,
+		unsigned int DynamicMetadataTransmittedBytes,
+		unsigned int DynamicMetadataLinesBeforeActiveRequired,
+		unsigned int InterlaceEnable,
+		bool ProgressiveToInterlaceUnitInOPP,
+		double *TSetup,
+		double *Tdmbf,
+		double *Tdmec,
+		double *Tdmsks,
+		unsigned int *VUpdateOffsetPix,
+		double *VUpdateWidthPix,
+		double *VReadyOffsetPix);
+
+double dml32_CalculateTWait(
+		unsigned int PrefetchMode,
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange,
+		bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+		bool DRRDisplay,
+		double DRAMClockChangeLatency,
+		double FCLKChangeLatency,
+		double UrgentLatency,
+		double SREnterPlusExitTime);
+
+double dml32_get_return_bw_mbps(const soc_bounding_box_st *soc,
+		const int VoltageLevel,
+		const bool HostVMEnable,
+		const double DCFCLK,
+		const double FabricClock,
+		const double DRAMSpeed);
+
+double dml32_get_return_bw_mbps_vm_only(const soc_bounding_box_st *soc,
+		const int VoltageLevel,
+		const double DCFCLK,
+		const double FabricClock,
+		const double DRAMSpeed);
+
+double dml32_CalculateExtraLatency(
+		unsigned int RoundTripPingLatencyCycles,
+		unsigned int ReorderingBytes,
+		double DCFCLK,
+		unsigned int TotalNumberOfActiveDPP,
+		unsigned int PixelChunkSizeInKByte,
+		unsigned int TotalNumberOfDCCActiveDPP,
+		unsigned int MetaChunkSize,
+		double ReturnBW,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int NumberOfDPP[],
+		unsigned int dpte_group_bytes[],
+		double HostVMInefficiencyFactor,
+		double HostVMMinPageSize,
+		unsigned int HostVMMaxNonCachedPageTableLevels);
+
+bool dml32_CalculatePrefetchSchedule(
+		double HostVMInefficiencyFactor,
+		DmlPipe *myPipe,
+		unsigned int DSCDelay,
+		double DPPCLKDelaySubtotalPlusCNVCFormater,
+		double DPPCLKDelaySCL,
+		double DPPCLKDelaySCLLBOnly,
+		double DPPCLKDelayCNVCCursor,
+		double DISPCLKDelaySubtotal,
+		unsigned int DPP_RECOUT_WIDTH,
+		enum output_format_class OutputFormat,
+		unsigned int MaxInterDCNTileRepeaters,
+		unsigned int VStartup,
+		unsigned int MaxVStartup,
+		unsigned int GPUVMPageTableLevels,
+		bool GPUVMEnable,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		double HostVMMinPageSize,
+		bool DynamicMetadataEnable,
+		bool DynamicMetadataVMEnabled,
+		int DynamicMetadataLinesBeforeActiveRequired,
+		unsigned int DynamicMetadataTransmittedBytes,
+		double UrgentLatency,
+		double UrgentExtraLatency,
+		double TCalc,
+		unsigned int PDEAndMetaPTEBytesFrame,
+		unsigned int MetaRowByte,
+		unsigned int PixelPTEBytesPerRow,
+		double PrefetchSourceLinesY,
+		unsigned int SwathWidthY,
+		unsigned int VInitPreFillY,
+		unsigned int MaxNumSwathY,
+		double PrefetchSourceLinesC,
+		unsigned int SwathWidthC,
+		unsigned int VInitPreFillC,
+		unsigned int MaxNumSwathC,
+		unsigned int swath_width_luma_ub,
+		unsigned int swath_width_chroma_ub,
+		unsigned int SwathHeightY,
+		unsigned int SwathHeightC,
+		double TWait,
+		/* Output */
+		double   *DSTXAfterScaler,
+		double   *DSTYAfterScaler,
+		double *DestinationLinesForPrefetch,
+		double *PrefetchBandwidth,
+		double *DestinationLinesToRequestVMInVBlank,
+		double *DestinationLinesToRequestRowInVBlank,
+		double *VRatioPrefetchY,
+		double *VRatioPrefetchC,
+		double *RequiredPrefetchPixDataBWLuma,
+		double *RequiredPrefetchPixDataBWChroma,
+		bool   *NotEnoughTimeForDynamicMetadata,
+		double *Tno_bw,
+		double *prefetch_vmrow_bw,
+		double *Tdmdl_vm,
+		double *Tdmdl,
+		double *TSetup,
+		unsigned int   *VUpdateOffsetPix,
+		double   *VUpdateWidthPix,
+		double   *VReadyOffsetPix);
+
+void dml32_CalculateFlipSchedule(
+		double HostVMInefficiencyFactor,
+		double UrgentExtraLatency,
+		double UrgentLatency,
+		unsigned int GPUVMMaxPageTableLevels,
+		bool HostVMEnable,
+		unsigned int HostVMMaxNonCachedPageTableLevels,
+		bool GPUVMEnable,
+		double HostVMMinPageSize,
+		double PDEAndMetaPTEBytesPerFrame,
+		double MetaRowBytes,
+		double DPTEBytesPerRow,
+		double BandwidthAvailableForImmediateFlip,
+		unsigned int TotImmediateFlipBytes,
+		enum source_format_class SourcePixelFormat,
+		double LineTime,
+		double VRatio,
+		double VRatioChroma,
+		double Tno_bw,
+		bool DCCEnable,
+		unsigned int dpte_row_height,
+		unsigned int meta_row_height,
+		unsigned int dpte_row_height_chroma,
+		unsigned int meta_row_height_chroma,
+		bool    use_one_row_for_frame_flip,
+
+		/* Output */
+		double *DestinationLinesToRequestVMInImmediateFlip,
+		double *DestinationLinesToRequestRowInImmediateFlip,
+		double *final_flip_bw,
+		bool *ImmediateFlipSupportedForPipe);
+
+void dml32_CalculateWatermarksMALLUseAndDRAMSpeedChangeSupport(
+		bool USRRetrainingRequiredFinal,
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		unsigned int PrefetchMode,
+		unsigned int NumberOfActiveSurfaces,
+		unsigned int MaxLineBufferLines,
+		unsigned int LineBufferSize,
+		unsigned int WritebackInterfaceBufferSize,
+		double DCFCLK,
+		double ReturnBW,
+		bool SynchronizeTimingsFinal,
+		bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal,
+		bool DRRDisplay[],
+		unsigned int dpte_group_bytes[],
+		unsigned int meta_row_height[],
+		unsigned int meta_row_height_chroma[],
+		SOCParametersList mmSOCParameters,
+		unsigned int WritebackChunkSize,
+		double SOCCLK,
+		double DCFClkDeepSleep,
+		unsigned int DETBufferSizeY[],
+		unsigned int DETBufferSizeC[],
+		unsigned int SwathHeightY[],
+		unsigned int SwathHeightC[],
+		unsigned int LBBitPerPixel[],
+		double SwathWidthY[],
+		double SwathWidthC[],
+		double HRatio[],
+		double HRatioChroma[],
+		unsigned int VTaps[],
+		unsigned int VTapsChroma[],
+		double VRatio[],
+		double VRatioChroma[],
+		unsigned int HTotal[],
+		unsigned int VTotal[],
+		unsigned int VActive[],
+		double PixelClock[],
+		unsigned int BlendingAndTiming[],
+		unsigned int DPPPerSurface[],
+		double BytePerPixelDETY[],
+		double BytePerPixelDETC[],
+		double DSTXAfterScaler[],
+		double DSTYAfterScaler[],
+		bool WritebackEnable[],
+		enum source_format_class WritebackPixelFormat[],
+		double WritebackDestinationWidth[],
+		double WritebackDestinationHeight[],
+		double WritebackSourceHeight[],
+		bool UnboundedRequestEnabled,
+		unsigned int CompressedBufferSizeInkByte,
+
+		/* Output */
+		Watermarks *Watermark,
+		enum clock_change_support *DRAMClockChangeSupport,
+		double MaxActiveDRAMClockChangeLatencySupported[],
+		unsigned int SubViewportLinesNeededInMALL[],
+		enum dm_fclock_change_support *FCLKChangeSupport,
+		double *MinActiveFCLKChangeLatencySupported,
+		bool *USRRetrainingSupport,
+		double ActiveDRAMClockChangeLatencyMargin[]);
+
+double dml32_CalculateWriteBackDISPCLK(
+		enum source_format_class WritebackPixelFormat,
+		double PixelClock,
+		double WritebackHRatio,
+		double WritebackVRatio,
+		unsigned int WritebackHTaps,
+		unsigned int WritebackVTaps,
+		unsigned int   WritebackSourceWidth,
+		unsigned int   WritebackDestinationWidth,
+		unsigned int HTotal,
+		unsigned int WritebackLineBufferSize,
+		double DISPCLKDPPCLKVCOSpeed);
+
+void dml32_CalculateMinAndMaxPrefetchMode(
+		enum dm_prefetch_modes   AllowForPStateChangeOrStutterInVBlankFinal,
+		unsigned int             *MinPrefetchMode,
+		unsigned int             *MaxPrefetchMode);
+
+void dml32_CalculatePixelDeliveryTimes(
+		unsigned int             NumberOfActiveSurfaces,
+		double              VRatio[],
+		double              VRatioChroma[],
+		double              VRatioPrefetchY[],
+		double              VRatioPrefetchC[],
+		unsigned int             swath_width_luma_ub[],
+		unsigned int             swath_width_chroma_ub[],
+		unsigned int             DPPPerSurface[],
+		double              HRatio[],
+		double              HRatioChroma[],
+		double              PixelClock[],
+		double              PSCL_THROUGHPUT[],
+		double              PSCL_THROUGHPUT_CHROMA[],
+		double              Dppclk[],
+		unsigned int             BytePerPixelC[],
+		enum dm_rotation_angle   SourceRotation[],
+		unsigned int             NumberOfCursors[],
+		unsigned int             CursorWidth[][DC__NUM_CURSOR__MAX],
+		unsigned int             CursorBPP[][DC__NUM_CURSOR__MAX],
+		unsigned int             BlockWidth256BytesY[],
+		unsigned int             BlockHeight256BytesY[],
+		unsigned int             BlockWidth256BytesC[],
+		unsigned int             BlockHeight256BytesC[],
+
+		/* Output */
+		double              DisplayPipeLineDeliveryTimeLuma[],
+		double              DisplayPipeLineDeliveryTimeChroma[],
+		double              DisplayPipeLineDeliveryTimeLumaPrefetch[],
+		double              DisplayPipeLineDeliveryTimeChromaPrefetch[],
+		double              DisplayPipeRequestDeliveryTimeLuma[],
+		double              DisplayPipeRequestDeliveryTimeChroma[],
+		double              DisplayPipeRequestDeliveryTimeLumaPrefetch[],
+		double              DisplayPipeRequestDeliveryTimeChromaPrefetch[],
+		double              CursorRequestDeliveryTime[],
+		double              CursorRequestDeliveryTimePrefetch[]);
+
+void dml32_CalculateMetaAndPTETimes(
+		bool use_one_row_for_frame[],
+		unsigned int NumberOfActiveSurfaces,
+		bool GPUVMEnable,
+		unsigned int MetaChunkSize,
+		unsigned int MinMetaChunkSizeBytes,
+		unsigned int    HTotal[],
+		double  VRatio[],
+		double  VRatioChroma[],
+		double  DestinationLinesToRequestRowInVBlank[],
+		double  DestinationLinesToRequestRowInImmediateFlip[],
+		bool DCCEnable[],
+		double  PixelClock[],
+		unsigned int BytePerPixelY[],
+		unsigned int BytePerPixelC[],
+		enum dm_rotation_angle SourceRotation[],
+		unsigned int dpte_row_height[],
+		unsigned int dpte_row_height_chroma[],
+		unsigned int meta_row_width[],
+		unsigned int meta_row_width_chroma[],
+		unsigned int meta_row_height[],
+		unsigned int meta_row_height_chroma[],
+		unsigned int meta_req_width[],
+		unsigned int meta_req_width_chroma[],
+		unsigned int meta_req_height[],
+		unsigned int meta_req_height_chroma[],
+		unsigned int dpte_group_bytes[],
+		unsigned int    PTERequestSizeY[],
+		unsigned int    PTERequestSizeC[],
+		unsigned int    PixelPTEReqWidthY[],
+		unsigned int    PixelPTEReqHeightY[],
+		unsigned int    PixelPTEReqWidthC[],
+		unsigned int    PixelPTEReqHeightC[],
+		unsigned int    dpte_row_width_luma_ub[],
+		unsigned int    dpte_row_width_chroma_ub[],
+
+		/* Output */
+		double DST_Y_PER_PTE_ROW_NOM_L[],
+		double DST_Y_PER_PTE_ROW_NOM_C[],
+		double DST_Y_PER_META_ROW_NOM_L[],
+		double DST_Y_PER_META_ROW_NOM_C[],
+		double TimePerMetaChunkNominal[],
+		double TimePerChromaMetaChunkNominal[],
+		double TimePerMetaChunkVBlank[],
+		double TimePerChromaMetaChunkVBlank[],
+		double TimePerMetaChunkFlip[],
+		double TimePerChromaMetaChunkFlip[],
+		double time_per_pte_group_nom_luma[],
+		double time_per_pte_group_vblank_luma[],
+		double time_per_pte_group_flip_luma[],
+		double time_per_pte_group_nom_chroma[],
+		double time_per_pte_group_vblank_chroma[],
+		double time_per_pte_group_flip_chroma[]);
+
+void dml32_CalculateVMGroupAndRequestTimes(
+		unsigned int     NumberOfActiveSurfaces,
+		bool     GPUVMEnable,
+		unsigned int     GPUVMMaxPageTableLevels,
+		unsigned int     HTotal[],
+		unsigned int     BytePerPixelC[],
+		double      DestinationLinesToRequestVMInVBlank[],
+		double      DestinationLinesToRequestVMInImmediateFlip[],
+		bool     DCCEnable[],
+		double      PixelClock[],
+		unsigned int        dpte_row_width_luma_ub[],
+		unsigned int        dpte_row_width_chroma_ub[],
+		unsigned int     vm_group_bytes[],
+		unsigned int     dpde0_bytes_per_frame_ub_l[],
+		unsigned int     dpde0_bytes_per_frame_ub_c[],
+		unsigned int        meta_pte_bytes_per_frame_ub_l[],
+		unsigned int        meta_pte_bytes_per_frame_ub_c[],
+
+		/* Output */
+		double      TimePerVMGroupVBlank[],
+		double      TimePerVMGroupFlip[],
+		double      TimePerVMRequestVBlank[],
+		double      TimePerVMRequestFlip[]);
+
+void dml32_CalculateDCCConfiguration(
+		bool             DCCEnabled,
+		bool             DCCProgrammingAssumesScanDirectionUnknown,
+		enum source_format_class SourcePixelFormat,
+		unsigned int             SurfaceWidthLuma,
+		unsigned int             SurfaceWidthChroma,
+		unsigned int             SurfaceHeightLuma,
+		unsigned int             SurfaceHeightChroma,
+		unsigned int                nomDETInKByte,
+		unsigned int             RequestHeight256ByteLuma,
+		unsigned int             RequestHeight256ByteChroma,
+		enum dm_swizzle_mode     TilingFormat,
+		unsigned int             BytePerPixelY,
+		unsigned int             BytePerPixelC,
+		double              BytePerPixelDETY,
+		double              BytePerPixelDETC,
+		enum dm_rotation_angle   SourceRotation,
+		/* Output */
+		unsigned int        *MaxUncompressedBlockLuma,
+		unsigned int        *MaxUncompressedBlockChroma,
+		unsigned int        *MaxCompressedBlockLuma,
+		unsigned int        *MaxCompressedBlockChroma,
+		unsigned int        *IndependentBlockLuma,
+		unsigned int        *IndependentBlockChroma);
+
+void dml32_CalculateStutterEfficiency(
+		unsigned int      CompressedBufferSizeInkByte,
+		enum dm_use_mall_for_pstate_change_mode UseMALLForPStateChange[],
+		bool   UnboundedRequestEnabled,
+		unsigned int      MetaFIFOSizeInKEntries,
+		unsigned int      ZeroSizeBufferEntries,
+		unsigned int      PixelChunkSizeInKByte,
+		unsigned int   NumberOfActiveSurfaces,
+		unsigned int      ROBBufferSizeInKByte,
+		double    TotalDataReadBandwidth,
+		double    DCFCLK,
+		double    ReturnBW,
+		unsigned int      CompbufReservedSpace64B,
+		unsigned int      CompbufReservedSpaceZs,
+		double    SRExitTime,
+		double    SRExitZ8Time,
+		bool   SynchronizeTimingsFinal,
+		unsigned int   BlendingAndTiming[],
+		double    StutterEnterPlusExitWatermark,
+		double    Z8StutterEnterPlusExitWatermark,
+		bool   ProgressiveToInterlaceUnitInOPP,
+		bool   Interlace[],
+		double    MinTTUVBlank[],
+		unsigned int   DPPPerSurface[],
+		unsigned int      DETBufferSizeY[],
+		unsigned int   BytePerPixelY[],
+		double    BytePerPixelDETY[],
+		double      SwathWidthY[],
+		unsigned int   SwathHeightY[],
+		unsigned int   SwathHeightC[],
+		double    NetDCCRateLuma[],
+		double    NetDCCRateChroma[],
+		double    DCCFractionOfZeroSizeRequestsLuma[],
+		double    DCCFractionOfZeroSizeRequestsChroma[],
+		unsigned int      HTotal[],
+		unsigned int      VTotal[],
+		double    PixelClock[],
+		double    VRatio[],
+		enum dm_rotation_angle SourceRotation[],
+		unsigned int   BlockHeight256BytesY[],
+		unsigned int   BlockWidth256BytesY[],
+		unsigned int   BlockHeight256BytesC[],
+		unsigned int   BlockWidth256BytesC[],
+		unsigned int   DCCYMaxUncompressedBlock[],
+		unsigned int   DCCCMaxUncompressedBlock[],
+		unsigned int      VActive[],
+		bool   DCCEnable[],
+		bool   WritebackEnable[],
+		double    ReadBandwidthSurfaceLuma[],
+		double    ReadBandwidthSurfaceChroma[],
+		double    meta_row_bw[],
+		double    dpte_row_bw[],
+
+		/* Output */
+		double   *StutterEfficiencyNotIncludingVBlank,
+		double   *StutterEfficiency,
+		unsigned int     *NumberOfStutterBurstsPerFrame,
+		double   *Z8StutterEfficiencyNotIncludingVBlank,
+		double   *Z8StutterEfficiency,
+		unsigned int     *Z8NumberOfStutterBurstsPerFrame,
+		double   *StutterPeriod,
+		bool  *DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE);
+
+void dml32_CalculateMaxDETAndMinCompressedBufferSize(
+		unsigned int    ConfigReturnBufferSizeInKByte,
+		unsigned int    ROBBufferSizeInKByte,
+		unsigned int MaxNumDPP,
+		bool nomDETInKByteOverrideEnable, // VBA_DELTA, allow DV to override default DET size
+		unsigned int nomDETInKByteOverrideValue,  // VBA_DELTA
+
+		/* Output */
+		unsigned int *MaxTotalDETInKByte,
+		unsigned int *nomDETInKByte,
+		unsigned int *MinCompressedBufferSizeInKByte);
+
+bool dml32_CalculateVActiveBandwithSupport(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		bool NotUrgentLatencyHiding[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double cursor_bw[],
+		double meta_row_bandwidth[],
+		double dpte_row_bandwidth[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[]);
+
+void dml32_CalculatePrefetchBandwithSupport(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		bool NotUrgentLatencyHiding[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double PrefetchBandwidthLuma[],
+		double PrefetchBandwidthChroma[],
+		double cursor_bw[],
+		double meta_row_bandwidth[],
+		double dpte_row_bandwidth[],
+		double cursor_bw_pre[],
+		double prefetch_vmrow_bw[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[],
+		double UrgentBurstFactorLumaPre[],
+		double UrgentBurstFactorChromaPre[],
+		double UrgentBurstFactorCursorPre[],
+
+		/* output */
+		double  *PrefetchBandwidth,
+		double  *FractionOfUrgentBandwidth,
+		bool *PrefetchBandwidthSupport);
+
+double dml32_CalculateBandwidthAvailableForImmediateFlip(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double PrefetchBandwidthLuma[],
+		double PrefetchBandwidthChroma[],
+		double cursor_bw[],
+		double cursor_bw_pre[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[],
+		double UrgentBurstFactorLumaPre[],
+		double UrgentBurstFactorChromaPre[],
+		double UrgentBurstFactorCursorPre[]);
+
+void dml32_CalculateImmediateFlipBandwithSupport(unsigned int NumberOfActiveSurfaces,
+		double ReturnBW,
+		enum immediate_flip_requirement ImmediateFlipRequirement[],
+		double final_flip_bw[],
+		double ReadBandwidthLuma[],
+		double ReadBandwidthChroma[],
+		double PrefetchBandwidthLuma[],
+		double PrefetchBandwidthChroma[],
+		double cursor_bw[],
+		double meta_row_bandwidth[],
+		double dpte_row_bandwidth[],
+		double cursor_bw_pre[],
+		double prefetch_vmrow_bw[],
+		unsigned int NumberOfDPP[],
+		double UrgentBurstFactorLuma[],
+		double UrgentBurstFactorChroma[],
+		double UrgentBurstFactorCursor[],
+		double UrgentBurstFactorLumaPre[],
+		double UrgentBurstFactorChromaPre[],
+		double UrgentBurstFactorCursorPre[],
+
+		/* output */
+		double  *TotalBandwidth,
+		double  *FractionOfUrgentBandwidth,
+		bool *ImmediateFlipBandwidthSupport);
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
new file mode 100644
index 000000000000..269bdfc4bc40
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
@@ -0,0 +1,616 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "../display_mode_lib.h"
+#include "../display_mode_vba.h"
+#include "../dml_inline_defs.h"
+#include "display_rq_dlg_calc_32.h"
+
+static bool is_dual_plane(enum source_format_class source_format)
+{
+	bool ret_val = false;
+
+	if ((source_format == dm_420_12) || (source_format == dm_420_8) || (source_format == dm_420_10)
+		|| (source_format == dm_rgbe_alpha))
+		ret_val = true;
+
+	return ret_val;
+}
+
+void dml32_rq_dlg_get_rq_reg(display_rq_regs_st *rq_regs,
+		struct display_mode_lib *mode_lib,
+		const display_e2e_pipe_params_st *e2e_pipe_param,
+		const unsigned int num_pipes,
+		const unsigned int pipe_idx)
+{
+	const display_pipe_source_params_st *src = &e2e_pipe_param[pipe_idx].pipe.src;
+	bool dual_plane = is_dual_plane((enum source_format_class) (src->source_format));
+	double stored_swath_l_bytes;
+	double stored_swath_c_bytes;
+	bool is_phantom_pipe;
+	uint32_t pixel_chunk_bytes = 0;
+	uint32_t min_pixel_chunk_bytes = 0;
+	uint32_t meta_chunk_bytes = 0;
+	uint32_t min_meta_chunk_bytes = 0;
+	uint32_t dpte_group_bytes = 0;
+	uint32_t mpte_group_bytes = 0;
+
+	uint32_t p1_pixel_chunk_bytes = 0;
+	uint32_t p1_min_pixel_chunk_bytes = 0;
+	uint32_t p1_meta_chunk_bytes = 0;
+	uint32_t p1_min_meta_chunk_bytes = 0;
+	uint32_t p1_dpte_group_bytes = 0;
+	uint32_t p1_mpte_group_bytes = 0;
+
+	unsigned int detile_buf_size_in_bytes;
+	unsigned int detile_buf_plane1_addr;
+	unsigned int pte_row_height_linear;
+
+	memset(rq_regs, 0, sizeof(*rq_regs));
+
+	dml_print("DML_DLG::%s: Calculation for pipe[%d] start, num_pipes=%d\n", __func__, pipe_idx, num_pipes);
+
+	pixel_chunk_bytes = get_pixel_chunk_size_in_kbyte(mode_lib, e2e_pipe_param, num_pipes) * 1024; // From VBA
+	min_pixel_chunk_bytes = get_min_pixel_chunk_size_in_byte(mode_lib, e2e_pipe_param, num_pipes); // From VBA
+
+	if (pixel_chunk_bytes == 64 * 1024)
+		min_pixel_chunk_bytes = 0;
+
+	meta_chunk_bytes = get_meta_chunk_size_in_kbyte(mode_lib, e2e_pipe_param, num_pipes) * 1024; // From VBA
+	min_meta_chunk_bytes = get_min_meta_chunk_size_in_byte(mode_lib, e2e_pipe_param, num_pipes); // From VBA
+
+	dpte_group_bytes = get_dpte_group_size_in_bytes(mode_lib, e2e_pipe_param, num_pipes, pipe_idx); // From VBA
+	mpte_group_bytes = get_vm_group_size_in_bytes(mode_lib, e2e_pipe_param, num_pipes, pipe_idx); // From VBA
+
+	p1_pixel_chunk_bytes = pixel_chunk_bytes;
+	p1_min_pixel_chunk_bytes = min_pixel_chunk_bytes;
+	p1_meta_chunk_bytes = meta_chunk_bytes;
+	p1_min_meta_chunk_bytes = min_meta_chunk_bytes;
+	p1_dpte_group_bytes = dpte_group_bytes;
+	p1_mpte_group_bytes = mpte_group_bytes;
+
+	if ((enum source_format_class) src->source_format == dm_rgbe_alpha)
+		p1_pixel_chunk_bytes = get_alpha_pixel_chunk_size_in_kbyte(mode_lib, e2e_pipe_param, num_pipes) * 1024;
+
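+	// RQ register fields are log2-encoded: chunk and meta chunk sizes in 1 KiB
+	// units, min chunk size in 256-byte units (plus one), min meta chunk size
+	// in 64-byte units (plus one), and dpte/mpte group sizes in 64-byte units.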
+	rq_regs->rq_regs_l.chunk_size = dml_log2(pixel_chunk_bytes) - 10;
+	rq_regs->rq_regs_c.chunk_size = dml_log2(p1_pixel_chunk_bytes) - 10;
+
+	if (min_pixel_chunk_bytes == 0)
+		rq_regs->rq_regs_l.min_chunk_size = 0;
+	else
+		rq_regs->rq_regs_l.min_chunk_size = dml_log2(min_pixel_chunk_bytes) - 8 + 1;
+
+	if (p1_min_pixel_chunk_bytes == 0)
+		rq_regs->rq_regs_c.min_chunk_size = 0;
+	else
+		rq_regs->rq_regs_c.min_chunk_size = dml_log2(p1_min_pixel_chunk_bytes) - 8 + 1;
+
+	rq_regs->rq_regs_l.meta_chunk_size = dml_log2(meta_chunk_bytes) - 10;
+	rq_regs->rq_regs_c.meta_chunk_size = dml_log2(p1_meta_chunk_bytes) - 10;
+
+	if (min_meta_chunk_bytes == 0)
+		rq_regs->rq_regs_l.min_meta_chunk_size = 0;
+	else
+		rq_regs->rq_regs_l.min_meta_chunk_size = dml_log2(min_meta_chunk_bytes) - 6 + 1;
+
+	if (p1_min_meta_chunk_bytes == 0)
+		rq_regs->rq_regs_c.min_meta_chunk_size = 0;
+	else
+		rq_regs->rq_regs_c.min_meta_chunk_size = dml_log2(p1_min_meta_chunk_bytes) - 6 + 1;
+
+	rq_regs->rq_regs_l.dpte_group_size = dml_log2(dpte_group_bytes) - 6;
+	rq_regs->rq_regs_l.mpte_group_size = dml_log2(mpte_group_bytes) - 6;
+	rq_regs->rq_regs_c.dpte_group_size = dml_log2(p1_dpte_group_bytes) - 6;
+	rq_regs->rq_regs_c.mpte_group_size = dml_log2(p1_mpte_group_bytes) - 6;
+
+	detile_buf_size_in_bytes = get_det_buffer_size_kbytes(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * 1024;
+	detile_buf_plane1_addr = 0;
+	pte_row_height_linear = get_dpte_row_height_linear_l(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx);
+
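+	// pte_row_height_linear is encoded as log2(rows) - 3, so linear surfaces
+	// must provide at least 8 rows per PTE row.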
+	if (src->sw_mode == dm_sw_linear)
+		ASSERT(pte_row_height_linear >= 8);
+
+	rq_regs->rq_regs_l.pte_row_height_linear = dml_floor(dml_log2(pte_row_height_linear), 1) - 3;
+
+	if (dual_plane) {
+		unsigned int p1_pte_row_height_linear = get_dpte_row_height_linear_c(mode_lib, e2e_pipe_param,
+				num_pipes, pipe_idx);
+
+		if (src->sw_mode == dm_sw_linear)
+			ASSERT(p1_pte_row_height_linear >= 8);
+
+		rq_regs->rq_regs_c.pte_row_height_linear = dml_floor(dml_log2(p1_pte_row_height_linear), 1) - 3;
+	}
+
+	rq_regs->rq_regs_l.swath_height = dml_log2(get_swath_height_l(mode_lib, e2e_pipe_param, num_pipes, pipe_idx));
+	rq_regs->rq_regs_c.swath_height = dml_log2(get_swath_height_c(mode_lib, e2e_pipe_param, num_pipes, pipe_idx));
+
+	// FIXME: take the max between luma, chroma chunk size?
+	// okay for now, as we are setting pixel_chunk_bytes to 8kb anyways
+	if (pixel_chunk_bytes >= 32 * 1024 || (dual_plane && p1_pixel_chunk_bytes >= 32 * 1024)) { //32kb
+		rq_regs->drq_expansion_mode = 0;
+	} else {
+		rq_regs->drq_expansion_mode = 2;
+	}
+	rq_regs->prq_expansion_mode = 1;
+	rq_regs->mrq_expansion_mode = 1;
+	rq_regs->crq_expansion_mode = 1;
+
+	stored_swath_l_bytes = get_det_stored_buffer_size_l_bytes(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx);
+	stored_swath_c_bytes = get_det_stored_buffer_size_c_bytes(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx);
+	is_phantom_pipe = get_is_phantom_pipe(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+
+	// Note: detile_buf_plane1_addr is in unit of 1KB
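+	// Phantom pipes place the chroma base at 512 KiB (half of a 1 MiB DET);
+	// otherwise chroma gets half of the DET when the luma:chroma stored swath
+	// ratio is at most 1.5, and one third of it otherwise.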
+	if (dual_plane) {
+		if (is_phantom_pipe) {
+			detile_buf_plane1_addr = ((1024.0 * 1024.0) / 2.0 / 1024.0); // half to chroma
+		} else {
+			if (stored_swath_l_bytes / stored_swath_c_bytes <= 1.5) {
+				detile_buf_plane1_addr = (detile_buf_size_in_bytes / 2.0 / 1024.0); // half to chroma
+#ifdef __DML_RQ_DLG_CALC_DEBUG__
+				dml_print("DML_DLG: %s: detile_buf_plane1_addr = %d (1/2 to chroma)\n",
+						__func__, detile_buf_plane1_addr);
+#endif
+			} else {
+				detile_buf_plane1_addr =
+						dml_round_to_multiple(
+								(unsigned int) ((2.0 * detile_buf_size_in_bytes) / 3.0),
+								1024, 0) / 1024.0; // 2/3 to luma
+#ifdef __DML_RQ_DLG_CALC_DEBUG__
+				dml_print("DML_DLG: %s: detile_buf_plane1_addr = %d (1/3 chroma)\n",
+						__func__, detile_buf_plane1_addr);
+#endif
+			}
+		}
+	}
+	rq_regs->plane1_base_address = detile_buf_plane1_addr;
+
+#ifdef __DML_RQ_DLG_CALC_DEBUG__
+	dml_print("DML_DLG: %s: is_phantom_pipe = %d\n", __func__, is_phantom_pipe);
+	dml_print("DML_DLG: %s: stored_swath_l_bytes = %f\n", __func__, stored_swath_l_bytes);
+	dml_print("DML_DLG: %s: stored_swath_c_bytes = %f\n", __func__, stored_swath_c_bytes);
+	dml_print("DML_DLG: %s: detile_buf_size_in_bytes = %d\n", __func__, detile_buf_size_in_bytes);
+	dml_print("DML_DLG: %s: detile_buf_plane1_addr = %d\n", __func__, detile_buf_plane1_addr);
+	dml_print("DML_DLG: %s: plane1_base_address = %d\n", __func__, rq_regs->plane1_base_address);
+#endif
+	print__rq_regs_st(mode_lib, rq_regs);
+	dml_print("DML_DLG::%s: Calculation for pipe[%d] done, num_pipes=%d\n", __func__, pipe_idx, num_pipes);
+}
+
+void dml32_rq_dlg_get_dlg_reg(struct display_mode_lib *mode_lib,
+		display_dlg_regs_st *dlg_regs,
+		display_ttu_regs_st *ttu_regs,
+		display_e2e_pipe_params_st *e2e_pipe_param,
+		const unsigned int num_pipes,
+		const unsigned int pipe_idx)
+{
+	const display_pipe_source_params_st *src = &e2e_pipe_param[pipe_idx].pipe.src;
+	const display_pipe_dest_params_st *dst = &e2e_pipe_param[pipe_idx].pipe.dest;
+	const display_clocks_and_cfg_st *clks = &e2e_pipe_param[pipe_idx].clks_cfg;
+	double refcyc_per_req_delivery_pre_cur0 = 0.;
+	double refcyc_per_req_delivery_cur0 = 0.;
+	double refcyc_per_req_delivery_pre_c = 0.;
+	double refcyc_per_req_delivery_c = 0.;
+	double refcyc_per_req_delivery_pre_l;
+	double refcyc_per_req_delivery_l;
+	double refcyc_per_line_delivery_pre_c = 0.;
+	double refcyc_per_line_delivery_c = 0.;
+	double refcyc_per_line_delivery_pre_l;
+	double refcyc_per_line_delivery_l;
+	double min_ttu_vblank;
+	double vratio_pre_l;
+	double vratio_pre_c;
+	unsigned int min_dst_y_next_start;
+	unsigned int htotal = dst->htotal;
+	unsigned int hblank_end = dst->hblank_end;
+	unsigned int vblank_end = dst->vblank_end;
+	bool interlaced = dst->interlaced;
+	double pclk_freq_in_mhz = dst->pixel_rate_mhz;
+	unsigned int vready_after_vcount0;
+	double refclk_freq_in_mhz = clks->refclk_mhz;
+	double ref_freq_to_pix_freq = refclk_freq_in_mhz / pclk_freq_in_mhz;
+	bool dual_plane = false;
+	unsigned int pipe_index_in_combine[DC__NUM_PIPES__MAX];
+	unsigned int dst_x_after_scaler;
+	unsigned int dst_y_after_scaler;
+	double dst_y_prefetch;
+	double dst_y_per_vm_vblank;
+	double dst_y_per_row_vblank;
+	double dst_y_per_vm_flip;
+	double dst_y_per_row_flip;
+	double max_dst_y_per_vm_vblank = 32.0;
+	double max_dst_y_per_row_vblank = 16.0;
+
+	double dst_y_per_pte_row_nom_l;
+	double dst_y_per_pte_row_nom_c;
+	double dst_y_per_meta_row_nom_l;
+	double dst_y_per_meta_row_nom_c;
+	double refcyc_per_pte_group_nom_l;
+	double refcyc_per_pte_group_nom_c;
+	double refcyc_per_pte_group_vblank_l;
+	double refcyc_per_pte_group_vblank_c;
+	double refcyc_per_pte_group_flip_l;
+	double refcyc_per_pte_group_flip_c;
+	double refcyc_per_meta_chunk_nom_l;
+	double refcyc_per_meta_chunk_nom_c;
+	double refcyc_per_meta_chunk_vblank_l;
+	double refcyc_per_meta_chunk_vblank_c;
+	double refcyc_per_meta_chunk_flip_l;
+	double refcyc_per_meta_chunk_flip_c;
+
+	memset(dlg_regs, 0, sizeof(*dlg_regs));
+	memset(ttu_regs, 0, sizeof(*ttu_regs));
+	dml_print("DML_DLG::%s: Calculation for pipe[%d] starts, num_pipes=%d\n", __func__, pipe_idx, num_pipes);
+	dml_print("DML_DLG: %s: refclk_freq_in_mhz     = %3.2f\n", __func__, refclk_freq_in_mhz);
+	dml_print("DML_DLG: %s: pclk_freq_in_mhz = %3.2f\n", __func__, pclk_freq_in_mhz);
+	dml_print("DML_DLG: %s: ref_freq_to_pix_freq   = %3.2f\n", __func__, ref_freq_to_pix_freq);
+	dml_print("DML_DLG: %s: interlaced = %d\n", __func__, interlaced);
+	ASSERT(ref_freq_to_pix_freq < 4.0);
+
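+	// These fields are programmed as fixed point: ref_freq_to_pix_freq carries
+	// 19 fractional bits, refcyc_per_htotal carries 8.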
+	dlg_regs->ref_freq_to_pix_freq = (unsigned int) (ref_freq_to_pix_freq * dml_pow(2, 19));
+	dlg_regs->refcyc_per_htotal = (unsigned int) (ref_freq_to_pix_freq * (double) htotal * dml_pow(2, 8));
+	dlg_regs->dlg_vblank_end = interlaced ? (vblank_end / 2) : vblank_end; // 15 bits
+
+	min_ttu_vblank = get_min_ttu_vblank_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx); // From VBA
+	min_dst_y_next_start = get_min_dst_y_next_start(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+
+	dml_print("DML_DLG: %s: min_ttu_vblank (us)    = %3.2f\n", __func__, min_ttu_vblank);
+	dml_print("DML_DLG: %s: min_dst_y_next_start = %d\n", __func__, min_dst_y_next_start);
+	dml_print("DML_DLG: %s: ref_freq_to_pix_freq   = %3.2f\n", __func__, ref_freq_to_pix_freq);
+
+	dual_plane = is_dual_plane((enum source_format_class) (src->source_format));
+
+	vready_after_vcount0 = get_vready_at_or_after_vsync(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx); // From VBA
+	dlg_regs->vready_after_vcount0 = vready_after_vcount0;
+
+	dml_print("DML_DLG: %s: vready_after_vcount0 = %d\n", __func__, dlg_regs->vready_after_vcount0);
+
+	dst_x_after_scaler = get_dst_x_after_scaler(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+	dst_y_after_scaler = get_dst_y_after_scaler(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+
+	// do some adjustment on the dst_after scaler to account for odm combine mode
+	dml_print("DML_DLG: %s: input dst_x_after_scaler   = %d\n", __func__, dst_x_after_scaler);
+	dml_print("DML_DLG: %s: input dst_y_after_scaler   = %d\n", __func__, dst_y_after_scaler);
+
+	// need to figure out which side of odm combine we're in
+	if (dst->odm_combine == dm_odm_combine_mode_2to1 || dst->odm_combine == dm_odm_combine_mode_4to1) {
+		// figure out which pipes go together
+		bool visited[DC__NUM_PIPES__MAX];
+		unsigned int i, j, k;
+
+		for (k = 0; k < num_pipes; ++k) {
+			visited[k] = false;
+			pipe_index_in_combine[k] = 0;
+		}
+
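+		// Number the pipes within each hsplit group; this index selects which
+		// horizontal slice of hactive this pipe owns when computing
+		// refcyc_h_blank_end below.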
+		for (i = 0; i < num_pipes; i++) {
+			if (e2e_pipe_param[i].pipe.src.is_hsplit && !visited[i]) {
+
+				unsigned int grp = e2e_pipe_param[i].pipe.src.hsplit_grp;
+				unsigned int grp_idx = 0;
+
+				for (j = i; j < num_pipes; j++) {
+					if (e2e_pipe_param[j].pipe.src.hsplit_grp == grp
+						&& e2e_pipe_param[j].pipe.src.is_hsplit && !visited[j]) {
+						pipe_index_in_combine[j] = grp_idx;
+						dml_print("DML_DLG: %s: pipe[%d] is in grp %d idx %d\n",
+								__func__, j, grp, grp_idx);
+						grp_idx++;
+						visited[j] = true;
+					}
+				}
+			}
+		}
+	}
+
+	if (dst->odm_combine == dm_odm_combine_mode_disabled) {
+		// FIXME how about ODM split??
+		dlg_regs->refcyc_h_blank_end = (unsigned int) ((double) hblank_end * ref_freq_to_pix_freq);
+	} else {
+		if (dst->odm_combine == dm_odm_combine_mode_2to1 || dst->odm_combine == dm_odm_combine_mode_4to1) {
+			// TODO: We should really check that 4to1 is supported before setting it to 4
+			unsigned int odm_combine_factor = (dst->odm_combine == dm_odm_combine_mode_2to1 ? 2 : 4);
+			unsigned int odm_pipe_index = pipe_index_in_combine[pipe_idx];
+
+			dlg_regs->refcyc_h_blank_end = (unsigned int) (((double) hblank_end
+				+ odm_pipe_index * (double) dst->hactive / odm_combine_factor) * ref_freq_to_pix_freq);
+		}
+	}
+	ASSERT(dlg_regs->refcyc_h_blank_end < (unsigned int)dml_pow(2, 13));
+
+	dml_print("DML_DLG: %s: htotal= %d\n", __func__, htotal);
+	dml_print("DML_DLG: %s: dst_x_after_scaler[%d]= %d\n", __func__, pipe_idx, dst_x_after_scaler);
+	dml_print("DML_DLG: %s: dst_y_after_scaler[%d] = %d\n", __func__, pipe_idx, dst_y_after_scaler);
+
+	dst_y_prefetch = get_dst_y_prefetch(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);        // From VBA
+	// From VBA
+	dst_y_per_vm_vblank = get_dst_y_per_vm_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+	// From VBA
+	dst_y_per_row_vblank = get_dst_y_per_row_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+	dst_y_per_vm_flip = get_dst_y_per_vm_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);    // From VBA
+	dst_y_per_row_flip = get_dst_y_per_row_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);  // From VBA
+
+	// magic: very short lines (htotal <= 75) get relaxed vblank budget limits
+	if (htotal <= 75) {
+		max_dst_y_per_vm_vblank = 100.0;
+		max_dst_y_per_row_vblank = 100.0;
+	}
+
+	dml_print("DML_DLG: %s: dst_y_prefetch (after rnd) = %3.2f\n", __func__, dst_y_prefetch);
+	dml_print("DML_DLG: %s: dst_y_per_vm_flip    = %3.2f\n", __func__, dst_y_per_vm_flip);
+	dml_print("DML_DLG: %s: dst_y_per_row_flip   = %3.2f\n", __func__, dst_y_per_row_flip);
+	dml_print("DML_DLG: %s: dst_y_per_vm_vblank  = %3.2f\n", __func__, dst_y_per_vm_vblank);
+	dml_print("DML_DLG: %s: dst_y_per_row_vblank = %3.2f\n", __func__, dst_y_per_row_vblank);
+
+	ASSERT(dst_y_per_vm_vblank < max_dst_y_per_vm_vblank);
+	ASSERT(dst_y_per_row_vblank < max_dst_y_per_row_vblank);
+	ASSERT(dst_y_prefetch > (dst_y_per_vm_vblank + dst_y_per_row_vblank));
+
+	vratio_pre_l = get_vratio_prefetch_l(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);    // From VBA
+	vratio_pre_c = get_vratio_prefetch_c(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);    // From VBA
+
+	dml_print("DML_DLG: %s: vratio_pre_l = %3.2f\n", __func__, vratio_pre_l);
+	dml_print("DML_DLG: %s: vratio_pre_c = %3.2f\n", __func__, vratio_pre_c);
+
+	// Active
+	refcyc_per_line_delivery_pre_l = get_refcyc_per_line_delivery_pre_l_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz; // From VBA
+	refcyc_per_line_delivery_l = get_refcyc_per_line_delivery_l_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;       // From VBA
+
+	dml_print("DML_DLG: %s: refcyc_per_line_delivery_pre_l = %3.2f\n", __func__, refcyc_per_line_delivery_pre_l);
+	dml_print("DML_DLG: %s: refcyc_per_line_delivery_l     = %3.2f\n", __func__, refcyc_per_line_delivery_l);
+
+	if (dual_plane) {
+		refcyc_per_line_delivery_pre_c = get_refcyc_per_line_delivery_pre_c_in_us(mode_lib, e2e_pipe_param,
+				num_pipes, pipe_idx) * refclk_freq_in_mhz;     // From VBA
+		refcyc_per_line_delivery_c = get_refcyc_per_line_delivery_c_in_us(mode_lib, e2e_pipe_param, num_pipes,
+				pipe_idx) * refclk_freq_in_mhz; // From VBA
+
+		dml_print("DML_DLG: %s: refcyc_per_line_delivery_pre_c = %3.2f\n",
+				__func__, refcyc_per_line_delivery_pre_c);
+		dml_print("DML_DLG: %s: refcyc_per_line_delivery_c     = %3.2f\n",
+				__func__, refcyc_per_line_delivery_c);
+	}
+
+	if (src->dynamic_metadata_enable && src->gpuvm)
+		dlg_regs->refcyc_per_vm_dmdata = get_refcyc_per_vm_dmdata_in_us(mode_lib, e2e_pipe_param, num_pipes,
+				pipe_idx) * refclk_freq_in_mhz; // From VBA
+
+	dlg_regs->dmdata_dl_delta = get_dmdata_dl_delta_in_us(mode_lib, e2e_pipe_param, num_pipes, pipe_idx)
+		* refclk_freq_in_mhz; // From VBA
+
+	refcyc_per_req_delivery_pre_l = get_refcyc_per_req_delivery_pre_l_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz; // From VBA
+	refcyc_per_req_delivery_l = get_refcyc_per_req_delivery_l_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;     // From VBA
+
+	dml_print("DML_DLG: %s: refcyc_per_req_delivery_pre_l = %3.2f\n", __func__, refcyc_per_req_delivery_pre_l);
+	dml_print("DML_DLG: %s: refcyc_per_req_delivery_l     = %3.2f\n", __func__, refcyc_per_req_delivery_l);
+
+	if (dual_plane) {
+		refcyc_per_req_delivery_pre_c = get_refcyc_per_req_delivery_pre_c_in_us(mode_lib, e2e_pipe_param,
+				num_pipes, pipe_idx) * refclk_freq_in_mhz;  // From VBA
+		refcyc_per_req_delivery_c = get_refcyc_per_req_delivery_c_in_us(mode_lib, e2e_pipe_param, num_pipes,
+				pipe_idx) * refclk_freq_in_mhz;      // From VBA
+
+		dml_print("DML_DLG: %s: refcyc_per_req_delivery_pre_c = %3.2f\n",
+				__func__, refcyc_per_req_delivery_pre_c);
+		dml_print("DML_DLG: %s: refcyc_per_req_delivery_c     = %3.2f\n", __func__, refcyc_per_req_delivery_c);
+	}
+
+	// TTU - Cursor
+	ASSERT(src->num_cursors <= 1);
+	if (src->num_cursors > 0) {
+		refcyc_per_req_delivery_pre_cur0 = get_refcyc_per_cursor_req_delivery_pre_in_us(mode_lib,
+				e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz;  // From VBA
+		refcyc_per_req_delivery_cur0 = get_refcyc_per_cursor_req_delivery_in_us(mode_lib, e2e_pipe_param,
+				num_pipes, pipe_idx) * refclk_freq_in_mhz;      // From VBA
+
+		dml_print("DML_DLG: %s: refcyc_per_req_delivery_pre_cur0 = %3.2f\n",
+				__func__, refcyc_per_req_delivery_pre_cur0);
+		dml_print("DML_DLG: %s: refcyc_per_req_delivery_cur0     = %3.2f\n",
+				__func__, refcyc_per_req_delivery_cur0);
+	}
+
+	// Assign to register structures
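+	// dst_y_* and min_dst_y_next_start are programmed with 2 fractional bits
+	// (quarter-line precision); the vratio prefetch fields carry 19 fractional bits.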
+	dlg_regs->min_dst_y_next_start = min_dst_y_next_start * dml_pow(2, 2);
+	ASSERT(dlg_regs->min_dst_y_next_start < (unsigned int)dml_pow(2, 18));
+
+	dlg_regs->dst_y_after_scaler = dst_y_after_scaler; // in terms of line
+	dlg_regs->refcyc_x_after_scaler = dst_x_after_scaler * ref_freq_to_pix_freq; // in terms of refclk
+	dlg_regs->dst_y_prefetch = (unsigned int) (dst_y_prefetch * dml_pow(2, 2));
+	dlg_regs->dst_y_per_vm_vblank = (unsigned int) (dst_y_per_vm_vblank * dml_pow(2, 2));
+	dlg_regs->dst_y_per_row_vblank = (unsigned int) (dst_y_per_row_vblank * dml_pow(2, 2));
+	dlg_regs->dst_y_per_vm_flip = (unsigned int) (dst_y_per_vm_flip * dml_pow(2, 2));
+	dlg_regs->dst_y_per_row_flip = (unsigned int) (dst_y_per_row_flip * dml_pow(2, 2));
+
+	dlg_regs->vratio_prefetch = (unsigned int) (vratio_pre_l * dml_pow(2, 19));
+	dlg_regs->vratio_prefetch_c = (unsigned int) (vratio_pre_c * dml_pow(2, 19));
+
+	dml_print("DML_DLG: %s: dlg_regs->dst_y_per_vm_vblank  = 0x%x\n", __func__, dlg_regs->dst_y_per_vm_vblank);
+	dml_print("DML_DLG: %s: dlg_regs->dst_y_per_row_vblank = 0x%x\n", __func__, dlg_regs->dst_y_per_row_vblank);
+	dml_print("DML_DLG: %s: dlg_regs->dst_y_per_vm_flip    = 0x%x\n", __func__, dlg_regs->dst_y_per_vm_flip);
+	dml_print("DML_DLG: %s: dlg_regs->dst_y_per_row_flip   = 0x%x\n", __func__, dlg_regs->dst_y_per_row_flip);
+
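+	// refcyc_per_vm_req_* carry 10 fractional bits; the per-group values are
+	// whole refclk cycles. All four are clamped to their 23-bit fields below.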
+	dlg_regs->refcyc_per_vm_group_vblank = get_refcyc_per_vm_group_vblank_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;               // From VBA
+	dlg_regs->refcyc_per_vm_group_flip = get_refcyc_per_vm_group_flip_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;                 // From VBA
+	dlg_regs->refcyc_per_vm_req_vblank = get_refcyc_per_vm_req_vblank_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz * dml_pow(2, 10);                 // From VBA
+	dlg_regs->refcyc_per_vm_req_flip = get_refcyc_per_vm_req_flip_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz * dml_pow(2, 10);  // From VBA
+
+	// From VBA
+	dst_y_per_pte_row_nom_l = get_dst_y_per_pte_row_nom_l(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+	// From VBA
+	dst_y_per_pte_row_nom_c = get_dst_y_per_pte_row_nom_c(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+	// From VBA
+	dst_y_per_meta_row_nom_l = get_dst_y_per_meta_row_nom_l(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+	// From VBA
+	dst_y_per_meta_row_nom_c = get_dst_y_per_meta_row_nom_c(mode_lib, e2e_pipe_param, num_pipes, pipe_idx);
+
+	refcyc_per_pte_group_nom_l = get_refcyc_per_pte_group_nom_l_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;         // From VBA
+	refcyc_per_pte_group_nom_c = get_refcyc_per_pte_group_nom_c_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;         // From VBA
+	refcyc_per_pte_group_vblank_l = get_refcyc_per_pte_group_vblank_l_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;      // From VBA
+	refcyc_per_pte_group_vblank_c = get_refcyc_per_pte_group_vblank_c_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;      // From VBA
+	refcyc_per_pte_group_flip_l = get_refcyc_per_pte_group_flip_l_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;        // From VBA
+	refcyc_per_pte_group_flip_c = get_refcyc_per_pte_group_flip_c_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;        // From VBA
+
+	refcyc_per_meta_chunk_nom_l = get_refcyc_per_meta_chunk_nom_l_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;        // From VBA
+	refcyc_per_meta_chunk_nom_c = get_refcyc_per_meta_chunk_nom_c_in_us(mode_lib, e2e_pipe_param, num_pipes,
+			pipe_idx) * refclk_freq_in_mhz;        // From VBA
+	refcyc_per_meta_chunk_vblank_l = get_refcyc_per_meta_chunk_vblank_l_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;     // From VBA
+	refcyc_per_meta_chunk_vblank_c = get_refcyc_per_meta_chunk_vblank_c_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;     // From VBA
+	refcyc_per_meta_chunk_flip_l = get_refcyc_per_meta_chunk_flip_l_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;       // From VBA
+	refcyc_per_meta_chunk_flip_c = get_refcyc_per_meta_chunk_flip_c_in_us(mode_lib, e2e_pipe_param,
+			num_pipes, pipe_idx) * refclk_freq_in_mhz;       // From VBA
+
+	dlg_regs->dst_y_per_pte_row_nom_l = dst_y_per_pte_row_nom_l * dml_pow(2, 2);
+	dlg_regs->dst_y_per_pte_row_nom_c = dst_y_per_pte_row_nom_c * dml_pow(2, 2);
+	dlg_regs->dst_y_per_meta_row_nom_l = dst_y_per_meta_row_nom_l * dml_pow(2, 2);
+	dlg_regs->dst_y_per_meta_row_nom_c = dst_y_per_meta_row_nom_c * dml_pow(2, 2);
+	dlg_regs->refcyc_per_pte_group_nom_l = refcyc_per_pte_group_nom_l;
+	dlg_regs->refcyc_per_pte_group_nom_c = refcyc_per_pte_group_nom_c;
+	dlg_regs->refcyc_per_pte_group_vblank_l = refcyc_per_pte_group_vblank_l;
+	dlg_regs->refcyc_per_pte_group_vblank_c = refcyc_per_pte_group_vblank_c;
+	dlg_regs->refcyc_per_pte_group_flip_l = refcyc_per_pte_group_flip_l;
+	dlg_regs->refcyc_per_pte_group_flip_c = refcyc_per_pte_group_flip_c;
+	dlg_regs->refcyc_per_meta_chunk_nom_l = refcyc_per_meta_chunk_nom_l;
+	dlg_regs->refcyc_per_meta_chunk_nom_c = refcyc_per_meta_chunk_nom_c;
+	dlg_regs->refcyc_per_meta_chunk_vblank_l = refcyc_per_meta_chunk_vblank_l;
+	dlg_regs->refcyc_per_meta_chunk_vblank_c = refcyc_per_meta_chunk_vblank_c;
+	dlg_regs->refcyc_per_meta_chunk_flip_l = refcyc_per_meta_chunk_flip_l;
+	dlg_regs->refcyc_per_meta_chunk_flip_c = refcyc_per_meta_chunk_flip_c;
+	dlg_regs->refcyc_per_line_delivery_pre_l = (unsigned int) dml_floor(refcyc_per_line_delivery_pre_l, 1);
+	dlg_regs->refcyc_per_line_delivery_l = (unsigned int) dml_floor(refcyc_per_line_delivery_l, 1);
+	dlg_regs->refcyc_per_line_delivery_pre_c = (unsigned int) dml_floor(refcyc_per_line_delivery_pre_c, 1);
+	dlg_regs->refcyc_per_line_delivery_c = (unsigned int) dml_floor(refcyc_per_line_delivery_c, 1);
+
+	dlg_regs->chunk_hdl_adjust_cur0 = 3;
+	dlg_regs->dst_y_offset_cur0 = 0;
+	dlg_regs->chunk_hdl_adjust_cur1 = 3;
+	dlg_regs->dst_y_offset_cur1 = 0;
+
+	dlg_regs->dst_y_delta_drq_limit = 0x7fff; // off
+
+	ttu_regs->refcyc_per_req_delivery_pre_l = (unsigned int) (refcyc_per_req_delivery_pre_l * dml_pow(2, 10));
+	ttu_regs->refcyc_per_req_delivery_l = (unsigned int) (refcyc_per_req_delivery_l * dml_pow(2, 10));
+	ttu_regs->refcyc_per_req_delivery_pre_c = (unsigned int) (refcyc_per_req_delivery_pre_c * dml_pow(2, 10));
+	ttu_regs->refcyc_per_req_delivery_c = (unsigned int) (refcyc_per_req_delivery_c * dml_pow(2, 10));
+	ttu_regs->refcyc_per_req_delivery_pre_cur0 =
+			(unsigned int) (refcyc_per_req_delivery_pre_cur0 * dml_pow(2, 10));
+	ttu_regs->refcyc_per_req_delivery_cur0 = (unsigned int) (refcyc_per_req_delivery_cur0 * dml_pow(2, 10));
+	ttu_regs->refcyc_per_req_delivery_pre_cur1 = 0;
+	ttu_regs->refcyc_per_req_delivery_cur1 = 0;
+	ttu_regs->qos_level_low_wm = 0;
+
+	ttu_regs->qos_level_high_wm = (unsigned int) (4.0 * (double) htotal * ref_freq_to_pix_freq);
+
+	ttu_regs->qos_level_flip = 14;
+	ttu_regs->qos_level_fixed_l = 8;
+	ttu_regs->qos_level_fixed_c = 8;
+	ttu_regs->qos_level_fixed_cur0 = 8;
+	ttu_regs->qos_ramp_disable_l = 0;
+	ttu_regs->qos_ramp_disable_c = 0;
+	ttu_regs->qos_ramp_disable_cur0 = 0;
+	ttu_regs->min_ttu_vblank = min_ttu_vblank * refclk_freq_in_mhz;
+
+	// CHECK for HW registers' range, assert or clamp
+	ASSERT(refcyc_per_req_delivery_pre_l < dml_pow(2, 13));
+	ASSERT(refcyc_per_req_delivery_l < dml_pow(2, 13));
+	ASSERT(refcyc_per_req_delivery_pre_c < dml_pow(2, 13));
+	ASSERT(refcyc_per_req_delivery_c < dml_pow(2, 13));
+	if (dlg_regs->refcyc_per_vm_group_vblank >= (unsigned int) dml_pow(2, 23))
+		dlg_regs->refcyc_per_vm_group_vblank = dml_pow(2, 23) - 1;
+
+	if (dlg_regs->refcyc_per_vm_group_flip >= (unsigned int) dml_pow(2, 23))
+		dlg_regs->refcyc_per_vm_group_flip = dml_pow(2, 23) - 1;
+
+	if (dlg_regs->refcyc_per_vm_req_vblank >= (unsigned int) dml_pow(2, 23))
+		dlg_regs->refcyc_per_vm_req_vblank = dml_pow(2, 23) - 1;
+
+	if (dlg_regs->refcyc_per_vm_req_flip >= (unsigned int) dml_pow(2, 23))
+		dlg_regs->refcyc_per_vm_req_flip = dml_pow(2, 23) - 1;
+
+	ASSERT(dlg_regs->dst_y_after_scaler < (unsigned int) 8);
+	ASSERT(dlg_regs->refcyc_x_after_scaler < (unsigned int)dml_pow(2, 13));
+	ASSERT(dlg_regs->dst_y_per_pte_row_nom_l < (unsigned int)dml_pow(2, 17));
+	if (dual_plane) {
+		if (dlg_regs->dst_y_per_pte_row_nom_c >= (unsigned int) dml_pow(2, 17)) {
+			// FIXME what's so special about chroma, can we just assert?
+			dml_print("DML_DLG: %s: Warning dst_y_per_pte_row_nom_c %u > register max U15.2 %u\n",
+					__func__, dlg_regs->dst_y_per_pte_row_nom_c, (unsigned int)dml_pow(2, 17) - 1);
+		}
+	}
+	ASSERT(dlg_regs->dst_y_per_meta_row_nom_l < (unsigned int)dml_pow(2, 17));
+	ASSERT(dlg_regs->dst_y_per_meta_row_nom_c < (unsigned int)dml_pow(2, 17));
+
+	if (dlg_regs->refcyc_per_pte_group_nom_l >= (unsigned int) dml_pow(2, 23))
+		dlg_regs->refcyc_per_pte_group_nom_l = dml_pow(2, 23) - 1;
+	if (dual_plane) {
+		if (dlg_regs->refcyc_per_pte_group_nom_c >= (unsigned int) dml_pow(2, 23))
+			dlg_regs->refcyc_per_pte_group_nom_c = dml_pow(2, 23) - 1;
+	}
+	ASSERT(dlg_regs->refcyc_per_pte_group_vblank_l < (unsigned int)dml_pow(2, 13));
+	if (dual_plane) {
+		ASSERT(dlg_regs->refcyc_per_pte_group_vblank_c < (unsigned int)dml_pow(2, 13));
+	}
+
+	if (dlg_regs->refcyc_per_meta_chunk_nom_l >= (unsigned int) dml_pow(2, 23))
+		dlg_regs->refcyc_per_meta_chunk_nom_l = dml_pow(2, 23) - 1;
+	if (dual_plane) {
+		if (dlg_regs->refcyc_per_meta_chunk_nom_c >= (unsigned int) dml_pow(2, 23))
+			dlg_regs->refcyc_per_meta_chunk_nom_c = dml_pow(2, 23) - 1;
+	}
+	ASSERT(dlg_regs->refcyc_per_meta_chunk_vblank_l < (unsigned int)dml_pow(2, 13));
+	ASSERT(dlg_regs->refcyc_per_meta_chunk_vblank_c < (unsigned int)dml_pow(2, 13));
+	ASSERT(dlg_regs->refcyc_per_line_delivery_pre_l < (unsigned int)dml_pow(2, 13));
+	ASSERT(dlg_regs->refcyc_per_line_delivery_l < (unsigned int)dml_pow(2, 13));
+	ASSERT(dlg_regs->refcyc_per_line_delivery_pre_c < (unsigned int)dml_pow(2, 13));
+	ASSERT(dlg_regs->refcyc_per_line_delivery_c < (unsigned int)dml_pow(2, 13));
+	ASSERT(ttu_regs->qos_level_low_wm < dml_pow(2, 14));
+	ASSERT(ttu_regs->qos_level_high_wm < dml_pow(2, 14));
+	ASSERT(ttu_regs->min_ttu_vblank < dml_pow(2, 24));
+
+	print__ttu_regs_st(mode_lib, ttu_regs);
+	print__dlg_regs_st(mode_lib, dlg_regs);
+	dml_print("DML_DLG::%s: Calculation for pipe[%d] done, num_pipes=%d\n", __func__, pipe_idx, num_pipes);
+}
+
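
The register packing above follows a single pattern: each delay computed by
VBA in microseconds is converted to refclk cycles (some fields additionally
carry 10 fractional bits, hence the dml_pow(2, 10) factor) and is then
asserted or clamped against the width of the destination register field. A
minimal sketch of that conversion, for illustration only (the helper name is
hypothetical and not part of this patch):

static unsigned int us_to_refcyc_clamped(double time_us, double refclk_mhz,
		unsigned int field_width_bits)
{
	/* Convert a microsecond delay to refclk cycles, then clamp it to the
	 * largest value an N-bit register field can hold, as done above. */
	double refcyc = time_us * refclk_mhz;
	double max_val = dml_pow(2, field_width_bits) - 1;

	return (unsigned int) (refcyc > max_val ? max_val : refcyc);
}
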
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.h b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.h
new file mode 100644
index 000000000000..ebee365293cd
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DML32_DISPLAY_RQ_DLG_CALC_H__
+#define __DML32_DISPLAY_RQ_DLG_CALC_H__
+
+#include "../display_rq_dlg_helpers.h"
+
+struct display_mode_lib;
+
+/*
+* Function: dml32_rq_dlg_get_rq_reg
+*  Main entry point for tests to get the register values out of this DML class.
+*  This function calls <get_rq_param> and <extract_rq_regs> to calculate
+*  and populate the rq_regs struct.
+* Input:
+*  e2e_pipe_param - pipe source configuration (e.g. vp, pitch, scaling, dest, etc.)
+* Output:
+*  rq_regs - struct that holds all the RQ register field values.
+*            See also: <display_rq_regs_st>
+*/
+void dml32_rq_dlg_get_rq_reg(display_rq_regs_st *rq_regs,
+		struct display_mode_lib *mode_lib,
+		const display_e2e_pipe_params_st *e2e_pipe_param,
+		const unsigned int num_pipes,
+		const unsigned int pipe_idx);
+
+/*
+* Function: dml32_rq_dlg_get_dlg_reg
+*   Calculate and return the DLG and TTU register structs given the system settings.
+* Output:
+*  dlg_regs - output DLG register struct
+*  ttu_regs - output TTU register struct
+* Input:
+*  e2e_pipe_param - "compacted" array of e2e pipe param structs
+*  num_pipes - number of active "pipes" or "routes"
+*  pipe_idx - index that identifies the e2e_pipe_param corresponding to this dlg
+*  cstate - 0: when calculating min_ttu_vblank it is assumed cstate is not required. 1: Normal mode, cstate is considered.
+*           Added for legacy or unrealistic timing tests.
+*/
+void dml32_rq_dlg_get_dlg_reg(struct display_mode_lib *mode_lib,
+		display_dlg_regs_st *dlg_regs,
+		display_ttu_regs_st *ttu_regs,
+		display_e2e_pipe_params_st *e2e_pipe_param,
+		const unsigned int num_pipes,
+		const unsigned int pipe_idx);
+
+#endif
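
For reference, a minimal sketch of how a test or wrapper could drive the two
entry points above for a single pipe (illustrative only; it assumes a
display_mode_lib already initialized for DCN32 and a populated e2e pipe
array):

/* Illustrative only: fetch the RQ and DLG/TTU register values for one pipe. */
static void example_get_dcn32_regs(struct display_mode_lib *mode_lib,
		display_e2e_pipe_params_st *pipes, unsigned int num_pipes,
		unsigned int pipe_idx)
{
	display_rq_regs_st rq_regs = {0};
	display_dlg_regs_st dlg_regs = {0};
	display_ttu_regs_st ttu_regs = {0};

	dml32_rq_dlg_get_rq_reg(&rq_regs, mode_lib, pipes, num_pipes, pipe_idx);
	dml32_rq_dlg_get_dlg_reg(mode_lib, &dlg_regs, &ttu_regs, pipes,
			num_pipes, pipe_idx);
}
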
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
index edb9f7567d6d..f394b3f3922a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_enums.h
@@ -26,7 +26,11 @@
 #define __DISPLAY_MODE_ENUMS_H__
 
 enum output_encoder_class {
-	dm_dp = 0, dm_hdmi = 1, dm_wb = 2, dm_edp
+	dm_dp = 0,
+	dm_hdmi = 1,
+	dm_wb = 2,
+	dm_edp = 3,
+	dm_dp2p0 = 5,
 };
 enum output_format_class {
 	dm_444 = 0, dm_420 = 1, dm_n422, dm_s422
@@ -105,6 +109,10 @@ enum clock_change_support {
 	dm_dram_clock_change_uninitialized = 0,
 	dm_dram_clock_change_vactive,
 	dm_dram_clock_change_vblank,
+	dm_dram_clock_change_vactive_w_mall_full_frame,
+	dm_dram_clock_change_vactive_w_mall_sub_vp,
+	dm_dram_clock_change_vblank_w_mall_full_frame,
+	dm_dram_clock_change_vblank_w_mall_sub_vp,
 	dm_dram_clock_change_unsupported
 };
 
@@ -169,6 +177,9 @@ enum odm_combine_mode {
 	dm_odm_combine_mode_disabled,
 	dm_odm_combine_mode_2to1,
 	dm_odm_combine_mode_4to1,
+	dm_odm_split_mode_1to2,
+	dm_odm_mode_mso_1to2,
+	dm_odm_mode_mso_1to4
 };
 
 enum odm_combine_policy {
@@ -176,11 +187,15 @@ enum odm_combine_policy {
 	dm_odm_combine_policy_none,
 	dm_odm_combine_policy_2to1,
 	dm_odm_combine_policy_4to1,
+	dm_odm_split_policy_1to2,
+	dm_odm_mso_policy_1to2,
+	dm_odm_mso_policy_1to4,
 };
 
 enum immediate_flip_requirement {
 	dm_immediate_flip_not_required,
 	dm_immediate_flip_required,
+	dm_immediate_flip_opportunistic,
 };
 
 enum unbounded_requesting_policy {
@@ -189,4 +204,75 @@ enum unbounded_requesting_policy {
 	dm_unbounded_requesting_disable
 };
 
+enum dm_rotation_angle {
+	dm_rotation_0,
+	dm_rotation_90,
+	dm_rotation_180,
+	dm_rotation_270,
+	dm_rotation_0m,
+	dm_rotation_90m,
+	dm_rotation_180m,
+	dm_rotation_270m,
+};
+
+enum dm_use_mall_for_pstate_change_mode {
+	dm_use_mall_pstate_change_disable,
+	dm_use_mall_pstate_change_full_frame,
+	dm_use_mall_pstate_change_sub_viewport,
+	dm_use_mall_pstate_change_phantom_pipe
+};
+
+enum dm_use_mall_for_static_screen_mode {
+	dm_use_mall_static_screen_disable,
+	dm_use_mall_static_screen_optimize,
+	dm_use_mall_static_screen_enable,
+};
+
+enum dm_output_link_dp_rate {
+	dm_dp_rate_na,
+	dm_dp_rate_hbr,
+	dm_dp_rate_hbr2,
+	dm_dp_rate_hbr3,
+	dm_dp_rate_uhbr10,
+	dm_dp_rate_uhbr13p5,
+	dm_dp_rate_uhbr20,
+};
+
+enum dm_fclock_change_support {
+	dm_fclock_change_vactive,
+	dm_fclock_change_vblank,
+	dm_fclock_change_unsupported,
+};
+
+enum dm_prefetch_modes {
+	dm_prefetch_support_uclk_fclk_and_stutter_if_possible,
+	dm_prefetch_support_uclk_fclk_and_stutter,
+	dm_prefetch_support_fclk_and_stutter,
+	dm_prefetch_support_stutter,
+	dm_prefetch_support_none,
+};
+enum dm_output_type {
+	dm_output_type_unknown,
+	dm_output_type_dp,
+	dm_output_type_edp,
+	dm_output_type_dp2p0,
+	dm_output_type_hdmi,
+	dm_output_type_hdmifrl,
+};
+
+enum dm_output_rate {
+	dm_output_rate_unknown,
+	dm_output_rate_dp_rate_hbr,
+	dm_output_rate_dp_rate_hbr2,
+	dm_output_rate_dp_rate_hbr3,
+	dm_output_rate_dp_rate_uhbr10,
+	dm_output_rate_dp_rate_uhbr13p5,
+	dm_output_rate_dp_rate_uhbr20,
+	dm_output_rate_hdmi_rate_3x3,
+	dm_output_rate_hdmi_rate_6x3,
+	dm_output_rate_hdmi_rate_6x4,
+	dm_output_rate_hdmi_rate_8x4,
+	dm_output_rate_hdmi_rate_10x4,
+	dm_output_rate_hdmi_rate_12x4,
+};
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
index 30db51fbd8cd..5d27ff0ebb5f 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.c
@@ -35,6 +35,8 @@
 #include "dcn30/display_rq_dlg_calc_30.h"
 #include "dcn31/display_mode_vba_31.h"
 #include "dcn31/display_rq_dlg_calc_31.h"
+#include "dcn32/display_mode_vba_32.h"
+#include "dcn32/display_rq_dlg_calc_32.h"
 #include "dml_logger.h"
 
 const struct dml_funcs dml20_funcs = {
@@ -72,6 +74,13 @@ const struct dml_funcs dml31_funcs = {
 	.rq_dlg_get_rq_reg = dml31_rq_dlg_get_rq_reg
 };
 
+const struct dml_funcs dml32_funcs = {
+	.validate = dml32_ModeSupportAndSystemConfigurationFull,
+	.recalculate = dml32_recalculate,
+	.rq_dlg_get_dlg_reg_v2 = dml32_rq_dlg_get_dlg_reg,
+	.rq_dlg_get_rq_reg_v2 = dml32_rq_dlg_get_rq_reg
+};
+
 void dml_init_instance(struct display_mode_lib *lib,
 		const struct _vcs_dpi_soc_bounding_box_st *soc_bb,
 		const struct _vcs_dpi_ip_params_st *ip_params,
@@ -98,6 +107,9 @@ void dml_init_instance(struct display_mode_lib *lib,
 	case DML_PROJECT_DCN31_FPGA:
 		lib->funcs = dml31_funcs;
 		break;
+	case DML_PROJECT_DCN32:
+		lib->funcs = dml32_funcs;
+		break;
 
 	default:
 		break;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
index d76251fd1566..2bdd6ed22611 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_lib.h
@@ -41,6 +41,7 @@ enum dml_project {
 	DML_PROJECT_DCN30,
 	DML_PROJECT_DCN31,
 	DML_PROJECT_DCN31_FPGA,
+	DML_PROJECT_DCN32,
 };
 
 struct display_mode_lib;
@@ -62,6 +63,20 @@ struct dml_funcs {
 		struct display_mode_lib *mode_lib,
 		display_rq_regs_st *rq_regs,
 		const display_pipe_params_st *pipe_param);
+	// DLG interfaces have different function parameters in DCN32.
+	// Create new function pointers to address the changes
+	void (*rq_dlg_get_dlg_reg_v2)(
+			struct display_mode_lib *mode_lib,
+			display_dlg_regs_st *dlg_regs,
+			display_ttu_regs_st *ttu_regs,
+			display_e2e_pipe_params_st *e2e_pipe_param,
+			const unsigned int num_pipes,
+			const unsigned int pipe_idx);
+	void (*rq_dlg_get_rq_reg_v2)(display_rq_regs_st *rq_regs,
+			struct display_mode_lib *mode_lib,
+			const display_e2e_pipe_params_st *e2e_pipe_param,
+			const unsigned int num_pipes,
+			const unsigned int pipe_idx);
 	void (*recalculate)(struct display_mode_lib *mode_lib);
 	void (*validate)(struct display_mode_lib *mode_lib);
 };
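
Since only the DCN32 function table fills in the _v2 pointers (the older
project tables leave them NULL), a caller that wants the new interface would
typically probe the pointer before using it. A minimal sketch, for
illustration only (the wrapper name is hypothetical):

/* Illustrative only: use the DCN32-style DLG interface when it is available. */
static void example_get_dlg_regs(struct display_mode_lib *mode_lib,
		display_dlg_regs_st *dlg_regs, display_ttu_regs_st *ttu_regs,
		display_e2e_pipe_params_st *pipes, unsigned int num_pipes,
		unsigned int pipe_idx)
{
	if (mode_lib->funcs.rq_dlg_get_dlg_reg_v2)
		mode_lib->funcs.rq_dlg_get_dlg_reg_v2(mode_lib, dlg_regs, ttu_regs,
				pipes, num_pipes, pipe_idx);
	/* else: fall back to the pre-DCN32 rq_dlg_get_dlg_reg interface. */
}
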
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
index 2df660cd8801..967d3e1ce886 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_structs.h
@@ -54,12 +54,102 @@ typedef struct _vcs_dpi_display_rq_regs_st display_rq_regs_st;
 typedef struct _vcs_dpi_display_dlg_sys_params_st display_dlg_sys_params_st;
 typedef struct _vcs_dpi_display_arb_params_st display_arb_params_st;
 
+typedef struct {
+	double UrgentWatermark;
+	double WritebackUrgentWatermark;
+	double DRAMClockChangeWatermark;
+	double FCLKChangeWatermark;
+	double WritebackDRAMClockChangeWatermark;
+	double WritebackFCLKChangeWatermark;
+	double StutterExitWatermark;
+	double StutterEnterPlusExitWatermark;
+	double Z8StutterExitWatermark;
+	double Z8StutterEnterPlusExitWatermark;
+	double USRRetrainingWatermark;
+} Watermarks;
+
+typedef struct {
+	double UrgentLatency;
+	double ExtraLatency;
+	double WritebackLatency;
+	double DRAMClockChangeLatency;
+	double FCLKChangeLatency;
+	double SRExitTime;
+	double SREnterPlusExitTime;
+	double SRExitZ8Time;
+	double SREnterPlusExitZ8Time;
+	double USRRetrainingLatencyPlusSMNLatency;
+} Latencies;
+
+typedef struct {
+	double Dppclk;
+	double Dispclk;
+	double PixelClock;
+	double DCFClkDeepSleep;
+	unsigned int DPPPerSurface;
+	bool ScalerEnabled;
+	enum dm_rotation_angle SourceRotation;
+	unsigned int ViewportHeight;
+	unsigned int ViewportHeightChroma;
+	unsigned int BlockWidth256BytesY;
+	unsigned int BlockHeight256BytesY;
+	unsigned int BlockWidth256BytesC;
+	unsigned int BlockHeight256BytesC;
+	unsigned int BlockWidthY;
+	unsigned int BlockHeightY;
+	unsigned int BlockWidthC;
+	unsigned int BlockHeightC;
+	unsigned int InterlaceEnable;
+	unsigned int NumberOfCursors;
+	unsigned int VBlank;
+	unsigned int HTotal;
+	unsigned int HActive;
+	bool DCCEnable;
+	enum odm_combine_mode ODMMode;
+	enum source_format_class SourcePixelFormat;
+	enum dm_swizzle_mode SurfaceTiling;
+	unsigned int BytePerPixelY;
+	unsigned int BytePerPixelC;
+	bool ProgressiveToInterlaceUnitInOPP;
+	double VRatio;
+	double VRatioChroma;
+	unsigned int VTaps;
+	unsigned int VTapsChroma;
+	unsigned int PitchY;
+	unsigned int DCCMetaPitchY;
+	unsigned int PitchC;
+	unsigned int DCCMetaPitchC;
+	bool ViewportStationary;
+	unsigned int ViewportXStart;
+	unsigned int ViewportYStart;
+	unsigned int ViewportXStartC;
+	unsigned int ViewportYStartC;
+	bool FORCE_ONE_ROW_FOR_FRAME;
+	unsigned int SwathHeightY;
+	unsigned int SwathHeightC;
+} DmlPipe;
+
+typedef struct {
+	double UrgentLatency;
+	double ExtraLatency;
+	double WritebackLatency;
+	double DRAMClockChangeLatency;
+	double FCLKChangeLatency;
+	double SRExitTime;
+	double SREnterPlusExitTime;
+	double SRExitZ8Time;
+	double SREnterPlusExitZ8Time;
+	double USRRetrainingLatency;
+	double SMNLatency;
+} SOCParametersList;
+
 struct _vcs_dpi_voltage_scaling_st {
 	int state;
 	double dscclk_mhz;
 	double dcfclk_mhz;
 	double socclk_mhz;
 	double phyclk_d18_mhz;
+	double phyclk_d32_mhz;
 	double dram_speed_mts;
 	double fabricclk_mhz;
 	double dispclk_mhz;
@@ -80,6 +170,15 @@ struct _vcs_dpi_soc_bounding_box_st {
 	double urgent_latency_pixel_data_only_us;
 	double urgent_latency_pixel_mixed_with_vm_data_us;
 	double urgent_latency_vm_data_only_us;
+	double usr_retraining_latency_us;
+	double smn_latency_us;
+	double fclk_change_latency_us;
+	double mall_allocated_for_dcn_mbytes;
+	double pct_ideal_fabric_bw_after_urgent;
+	double pct_ideal_dram_bw_after_urgent_strobe;
+	double max_avg_fabric_bw_use_normal_percent;
+	double max_avg_dram_bw_use_normal_strobe_percent;
+	enum dm_prefetch_modes allow_for_pstate_or_stutter_in_vblank_final;
 	double writeback_latency_us;
 	double ideal_dram_bw_after_urgent_percent;
 	double pct_ideal_dram_sdp_bw_after_urgent_pixel_only; // PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelDataOnly
@@ -148,6 +247,9 @@ struct _vcs_dpi_ip_params_st {
 	unsigned int dpp_output_buffer_pixels;
 	unsigned int opp_output_buffer_lines;
 	unsigned int pixel_chunk_size_kbytes;
+	unsigned int alpha_pixel_chunk_size_kbytes;
+	unsigned int min_pixel_chunk_size_bytes;
+	unsigned int dcc_meta_buffer_size_bytes;
 	unsigned char pte_enable;
 	unsigned int pte_chunk_size_kbytes;
 	unsigned int meta_chunk_size_kbytes;
@@ -168,6 +270,7 @@ struct _vcs_dpi_ip_params_st {
 	double writeback_min_hscl_ratio;
 	double writeback_min_vscl_ratio;
 	unsigned int maximum_dsc_bits_per_component;
+	unsigned int maximum_pixels_per_line_per_dsc_unit;
 	unsigned int writeback_max_hscl_taps;
 	unsigned int writeback_max_vscl_taps;
 	unsigned int writeback_line_buffer_luma_buffer_size;
@@ -224,6 +327,8 @@ struct _vcs_dpi_ip_params_st {
 	unsigned int can_vstartup_lines_exceed_vsync_plus_back_porch_lines_minus_one;
 	unsigned int bug_forcing_LC_req_same_size_fixed;
 	unsigned int number_of_cursors;
+	unsigned int max_num_dp2p0_outputs;
+	unsigned int max_num_dp2p0_streams;
 };
 
 struct _vcs_dpi_display_xfc_params_st {
@@ -250,6 +355,7 @@ struct _vcs_dpi_display_pipe_source_params_st {
 	bool hostvm_levels_force_en;
 	unsigned int hostvm_levels_force;
 	int source_scan;
+	int source_rotation; // new in dml32
 	int sw_mode;
 	int macro_tile_size;
 	unsigned int surface_width_y;
@@ -264,6 +370,15 @@ struct _vcs_dpi_display_pipe_source_params_st {
 	unsigned int viewport_height_c;
 	unsigned int viewport_width_max;
 	unsigned int viewport_height_max;
+	unsigned int viewport_x_y;
+	unsigned int viewport_x_c;
+	bool viewport_stationary;
+	unsigned int dcc_rate_luma;
+	unsigned int gpuvm_min_page_size_kbytes;
+	unsigned int use_mall_for_pstate_change;
+	unsigned int use_mall_for_static_screen;
+	bool force_one_row_for_frame;
+	bool pte_buffer_mode;
 	unsigned int data_pitch;
 	unsigned int data_pitch_c;
 	unsigned int meta_pitch;
@@ -296,10 +411,17 @@ struct writeback_st {
 	int wb_vtaps_luma;
 	int wb_htaps_chroma;
 	int wb_vtaps_chroma;
+	unsigned int wb_htaps;
+	unsigned int wb_vtaps;
 	double wb_hratio;
 	double wb_vratio;
 };
 
+struct display_audio_params_st {
+	unsigned int audio_sample_rate_khz;
+	int audio_sample_layout;
+};
+
 struct _vcs_dpi_display_output_params_st {
 	int dp_lanes;
 	double output_bpp;
@@ -313,6 +435,11 @@ struct _vcs_dpi_display_output_params_st {
 	int dsc_slices;
 	int max_audio_sample_rate;
 	struct writeback_st wb;
+	struct display_audio_params_st audio;
+	unsigned int output_bpc;
+	int dp_rate;
+	unsigned int dp_multistream_id;
+	bool dp_multistream_en;
 };
 
 struct _vcs_dpi_scaler_ratio_depth_st {
@@ -361,6 +488,8 @@ struct _vcs_dpi_display_pipe_dest_params_st {
 	unsigned char use_maximum_vstartup;
 	unsigned int vtotal_max;
 	unsigned int vtotal_min;
+	unsigned int refresh_rate;
+	bool synchronize_timings;
 };
 
 struct _vcs_dpi_display_pipe_params_st {
@@ -558,6 +687,9 @@ struct _vcs_dpi_display_arb_params_st {
 	int max_req_outstanding;
 	int min_req_outstanding;
 	int sat_level_us;
+	int hvm_min_req_outstand_commit_threshold;
+	int hvm_max_qos_commit_threshold;
+	int compbuf_reserved_space_kbytes;
 };
 
 #endif /*__DISPLAY_MODE_STRUCTS_H__*/
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
index c0740dbdcc2e..2676710a5f2b 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.c
@@ -110,6 +110,15 @@ dml_get_attr_func(return_bw, mode_lib->vba.ReturnBW);
 dml_get_attr_func(tcalc, mode_lib->vba.TCalc);
 dml_get_attr_func(fraction_of_urgent_bandwidth, mode_lib->vba.FractionOfUrgentBandwidth);
 dml_get_attr_func(fraction_of_urgent_bandwidth_imm_flip, mode_lib->vba.FractionOfUrgentBandwidthImmediateFlip);
+dml_get_attr_func(cstate_max_cap_mode, mode_lib->vba.DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE);
+dml_get_attr_func(comp_buffer_size_kbytes, mode_lib->vba.CompressedBufferSizeInkByte);
+dml_get_attr_func(pixel_chunk_size_in_kbyte, mode_lib->vba.PixelChunkSizeInKByte);
+dml_get_attr_func(alpha_pixel_chunk_size_in_kbyte, mode_lib->vba.AlphaPixelChunkSizeInKByte);
+dml_get_attr_func(meta_chunk_size_in_kbyte, mode_lib->vba.MetaChunkSize);
+dml_get_attr_func(min_pixel_chunk_size_in_byte, mode_lib->vba.MinPixelChunkSizeBytes);
+dml_get_attr_func(min_meta_chunk_size_in_byte, mode_lib->vba.MinMetaChunkSizeBytes);
+dml_get_attr_func(fclk_watermark, mode_lib->vba.Watermark.FCLKChangeWatermark);
+dml_get_attr_func(usr_retraining_watermark, mode_lib->vba.Watermark.USRRetrainingWatermark);
 
 #define dml_get_pipe_attr_func(attr, var)  double get_##attr(struct display_mode_lib *mode_lib, const display_e2e_pipe_params_st *pipes, unsigned int num_pipes, unsigned int which_pipe) \
 {\
@@ -165,6 +174,27 @@ dml_get_pipe_attr_func(vupdate_width, mode_lib->vba.VUpdateWidthPix);
 dml_get_pipe_attr_func(vready_offset, mode_lib->vba.VReadyOffsetPix);
 dml_get_pipe_attr_func(vready_at_or_after_vsync, mode_lib->vba.VREADY_AT_OR_AFTER_VSYNC);
 dml_get_pipe_attr_func(min_dst_y_next_start, mode_lib->vba.MIN_DST_Y_NEXT_START);
+dml_get_pipe_attr_func(dst_y_per_pte_row_nom_l, mode_lib->vba.DST_Y_PER_PTE_ROW_NOM_L);
+dml_get_pipe_attr_func(dst_y_per_pte_row_nom_c, mode_lib->vba.DST_Y_PER_PTE_ROW_NOM_C);
+dml_get_pipe_attr_func(dst_y_per_meta_row_nom_l, mode_lib->vba.DST_Y_PER_META_ROW_NOM_L);
+dml_get_pipe_attr_func(dst_y_per_meta_row_nom_c, mode_lib->vba.DST_Y_PER_META_ROW_NOM_C);
+dml_get_pipe_attr_func(refcyc_per_pte_group_nom_l_in_us, mode_lib->vba.time_per_pte_group_nom_luma);
+dml_get_pipe_attr_func(refcyc_per_pte_group_nom_c_in_us, mode_lib->vba.time_per_pte_group_nom_chroma);
+dml_get_pipe_attr_func(refcyc_per_pte_group_vblank_l_in_us, mode_lib->vba.time_per_pte_group_vblank_luma);
+dml_get_pipe_attr_func(refcyc_per_pte_group_vblank_c_in_us, mode_lib->vba.time_per_pte_group_vblank_chroma);
+dml_get_pipe_attr_func(refcyc_per_pte_group_flip_l_in_us, mode_lib->vba.time_per_pte_group_flip_luma);
+dml_get_pipe_attr_func(refcyc_per_pte_group_flip_c_in_us, mode_lib->vba.time_per_pte_group_flip_chroma);
+dml_get_pipe_attr_func(vstartup_calculated, mode_lib->vba.VStartup);
+dml_get_pipe_attr_func(dpte_row_height_linear_c, mode_lib->vba.dpte_row_height_linear_chroma);
+dml_get_pipe_attr_func(swath_height_l, mode_lib->vba.SwathHeightY);
+dml_get_pipe_attr_func(swath_height_c, mode_lib->vba.SwathHeightC);
+dml_get_pipe_attr_func(det_stored_buffer_size_l_bytes, mode_lib->vba.DETBufferSizeY);
+dml_get_pipe_attr_func(det_stored_buffer_size_c_bytes, mode_lib->vba.DETBufferSizeC);
+dml_get_pipe_attr_func(dpte_group_size_in_bytes, mode_lib->vba.dpte_group_bytes);
+dml_get_pipe_attr_func(vm_group_size_in_bytes, mode_lib->vba.vm_group_bytes);
+dml_get_pipe_attr_func(dpte_row_height_linear_l, mode_lib->vba.dpte_row_height_linear);
+dml_get_pipe_attr_func(pte_buffer_mode, mode_lib->vba.PTE_BUFFER_MODE);
+dml_get_pipe_attr_func(subviewport_lines_needed_in_mall, mode_lib->vba.SubViewportLinesNeededInMALL);
 
 double get_total_immediate_flip_bytes(
 		struct display_mode_lib *mode_lib,
@@ -202,6 +232,67 @@ double get_total_prefetch_bw(
 	return total_prefetch_bw;
 }
 
+unsigned int get_total_surface_size_in_mall_bytes(
+		struct display_mode_lib *mode_lib,
+		const display_e2e_pipe_params_st *pipes,
+		unsigned int num_pipes)
+{
+	unsigned int k;
+	unsigned int size = 0;
+	recalculate_params(mode_lib, pipes, num_pipes);
+	for (k = 0; k < mode_lib->vba.NumberOfActiveSurfaces; ++k)
+		size += mode_lib->vba.SurfaceSizeInMALL[k];
+	return size;
+}
+
+unsigned int get_pipe_idx(struct display_mode_lib *mode_lib, unsigned int plane_idx)
+{
+	int pipe_idx = -1;
+	int i;
+
+	ASSERT(plane_idx < DC__NUM_DPP__MAX);
+
+	for (i = 0; i < DC__NUM_DPP__MAX ; i++) {
+		if (plane_idx == mode_lib->vba.pipe_plane[i]) {
+			pipe_idx = i;
+			break;
+		}
+	}
+	ASSERT(pipe_idx >= 0);
+
+	return pipe_idx;
+}
+
+
+double get_det_buffer_size_kbytes(struct display_mode_lib *mode_lib, const display_e2e_pipe_params_st *pipes,
+		unsigned int num_pipes, unsigned int pipe_idx)
+{
+	unsigned int plane_idx;
+	double det_buf_size_kbytes;
+
+	recalculate_params(mode_lib, pipes, num_pipes);
+	plane_idx = mode_lib->vba.pipe_plane[pipe_idx];
+
+	dml_print("DML::%s: num_pipes=%d pipe_idx=%d plane_idx=%0d\n", __func__, num_pipes, pipe_idx, plane_idx);
+	det_buf_size_kbytes = mode_lib->vba.DETBufferSizeInKByte[plane_idx]; // per hubp DET buffer size
+
+	dml_print("DML::%s: det_buf_size_kbytes=%3.2f\n", __func__, det_buf_size_kbytes);
+
+	return det_buf_size_kbytes;
+}
+
+bool get_is_phantom_pipe(struct display_mode_lib *mode_lib, const display_e2e_pipe_params_st *pipes,
+		unsigned int num_pipes, unsigned int pipe_idx)
+{
+	unsigned int plane_idx;
+
+	recalculate_params(mode_lib, pipes, num_pipes);
+	plane_idx = mode_lib->vba.pipe_plane[pipe_idx];
+	dml_print("DML::%s: num_pipes=%d pipe_idx=%d UseMALLForPStateChange=%0d\n", __func__, num_pipes, pipe_idx,
+			mode_lib->vba.UsesMALLForPStateChange[plane_idx]);
+	return (mode_lib->vba.UsesMALLForPStateChange[plane_idx] == dm_use_mall_pstate_change_phantom_pipe);
+}
+
 static void fetch_socbb_params(struct display_mode_lib *mode_lib)
 {
 	soc_bounding_box_st *soc = &mode_lib->vba.soc;
@@ -241,6 +332,22 @@ static void fetch_socbb_params(struct display_mode_lib *mode_lib)
 			soc->max_avg_sdp_bw_use_normal_percent;
 	mode_lib->vba.SRExitZ8Time = soc->sr_exit_z8_time_us;
 	mode_lib->vba.SREnterPlusExitZ8Time = soc->sr_enter_plus_exit_z8_time_us;
+	mode_lib->vba.FCLKChangeLatency = soc->fclk_change_latency_us;
+	mode_lib->vba.USRRetrainingLatency = soc->usr_retraining_latency_us;
+	mode_lib->vba.SMNLatency = soc->smn_latency_us;
+	mode_lib->vba.MALLAllocatedForDCNFinal = soc->mall_allocated_for_dcn_mbytes;
+
+	mode_lib->vba.PercentOfIdealDRAMBWReceivedAfterUrgLatencySTROBE = soc->pct_ideal_dram_bw_after_urgent_strobe;
+	mode_lib->vba.MaxAveragePercentOfIdealFabricBWDisplayCanUseInNormalSystemOperation =
+			soc->max_avg_fabric_bw_use_normal_percent;
+	mode_lib->vba.MaxAveragePercentOfIdealDRAMBWDisplayCanUseInNormalSystemOperationSTROBE =
+			soc->max_avg_dram_bw_use_normal_strobe_percent;
+
+	mode_lib->vba.DRAMClockChangeRequirementFinal = 1;
+	mode_lib->vba.FCLKChangeRequirementFinal = 1;
+	mode_lib->vba.USRRetrainingRequiredFinal = 1;
+	mode_lib->vba.ConfigurableDETSizeEnFinal = 0;
+	mode_lib->vba.AllowForPStateChangeOrStutterInVBlankFinal = soc->allow_for_pstate_or_stutter_in_vblank_final;
 	mode_lib->vba.DRAMClockChangeLatency = soc->dram_clock_change_latency_us;
 	mode_lib->vba.DummyPStateCheck = soc->dram_clock_change_latency_us == soc->dummy_pstate_latency_us;
 	mode_lib->vba.DRAMClockChangeSupportsVActive = !soc->disable_dram_clock_change_vactive_support ||
@@ -283,6 +390,7 @@ static void fetch_socbb_params(struct display_mode_lib *mode_lib)
 		mode_lib->vba.SOCCLKPerState[i] = soc->clock_limits[i].socclk_mhz;
 		mode_lib->vba.PHYCLKPerState[i] = soc->clock_limits[i].phyclk_mhz;
 		mode_lib->vba.PHYCLKD18PerState[i] = soc->clock_limits[i].phyclk_d18_mhz;
+		mode_lib->vba.PHYCLKD32PerState[i] = soc->clock_limits[i].phyclk_d32_mhz;
 		mode_lib->vba.MaxDppclk[i] = soc->clock_limits[i].dppclk_mhz;
 		mode_lib->vba.MaxDSCCLK[i] = soc->clock_limits[i].dscclk_mhz;
 		mode_lib->vba.DRAMSpeedPerState[i] = soc->clock_limits[i].dram_speed_mts;
@@ -325,6 +433,18 @@ static void fetch_ip_params(struct display_mode_lib *mode_lib)
 	mode_lib->vba.COMPBUF_RESERVED_SPACE_ZS = ip->compbuf_reserved_space_zs;
 	mode_lib->vba.MaximumDSCBitsPerComponent = ip->maximum_dsc_bits_per_component;
 	mode_lib->vba.DSC422NativeSupport = ip->dsc422_native_support;
+	/* In DCN3.2, nomDETInKByte is initialized from ip->det_buffer_size_kbytes. */
+	mode_lib->vba.nomDETInKByte = ip->det_buffer_size_kbytes;
+	mode_lib->vba.CompbufReservedSpace64B = ip->compbuf_reserved_space_64b;
+	mode_lib->vba.CompbufReservedSpaceZs = ip->compbuf_reserved_space_zs;
+	mode_lib->vba.CompressedBufferSegmentSizeInkByteFinal = ip->compressed_buffer_segment_size_in_kbytes;
+	mode_lib->vba.LineBufferSizeFinal = ip->line_buffer_size_bits;
+	mode_lib->vba.AlphaPixelChunkSizeInKByte = ip->alpha_pixel_chunk_size_kbytes; // not used
+	mode_lib->vba.MinPixelChunkSizeBytes = ip->min_pixel_chunk_size_bytes; // not used
+	mode_lib->vba.MaximumPixelsPerLinePerDSCUnit = ip->maximum_pixels_per_line_per_dsc_unit;
+	mode_lib->vba.MaxNumDP2p0Outputs = ip->max_num_dp2p0_outputs;
+	mode_lib->vba.MaxNumDP2p0Streams = ip->max_num_dp2p0_streams;
+	mode_lib->vba.DCCMetaBufferSizeBytes = ip->dcc_meta_buffer_size_bytes;
 
 	mode_lib->vba.PixelChunkSizeInKByte = ip->pixel_chunk_size_kbytes;
 	mode_lib->vba.MetaChunkSize = ip->meta_chunk_size_kbytes;
@@ -399,6 +519,7 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 		visited[k] = false;
 
 	mode_lib->vba.NumberOfActivePlanes = 0;
+	mode_lib->vba.NumberOfActiveSurfaces = 0;
 	mode_lib->vba.ImmediateFlipSupport = false;
 	for (j = 0; j < mode_lib->vba.cache_num_pipes; ++j) {
 		display_pipe_source_params_st *src = &pipes[j].pipe.src;
@@ -429,6 +550,21 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 				src->viewport_y_y;
 		mode_lib->vba.ViewportYStartC[mode_lib->vba.NumberOfActivePlanes] =
 				src->viewport_y_c;
+		mode_lib->vba.SourceRotation[mode_lib->vba.NumberOfActiveSurfaces] = src->source_rotation;
+		mode_lib->vba.ViewportXStartY[mode_lib->vba.NumberOfActiveSurfaces] = src->viewport_x_y;
+		mode_lib->vba.ViewportXStartC[mode_lib->vba.NumberOfActiveSurfaces] = src->viewport_x_c;
+		// TODO: Assign correct value to viewport_stationary
+		mode_lib->vba.ViewportStationary[mode_lib->vba.NumberOfActivePlanes] =
+				src->viewport_stationary;
+		mode_lib->vba.UsesMALLForPStateChange[mode_lib->vba.NumberOfActivePlanes] = src->use_mall_for_pstate_change;
+		mode_lib->vba.UseMALLForStaticScreen[mode_lib->vba.NumberOfActivePlanes] = src->use_mall_for_static_screen;
+		mode_lib->vba.GPUVMMinPageSizeKBytes[mode_lib->vba.NumberOfActivePlanes] = src->gpuvm_min_page_size_kbytes;
+		mode_lib->vba.RefreshRate[mode_lib->vba.NumberOfActivePlanes] = dst->refresh_rate; //todo remove this
+		mode_lib->vba.OutputLinkDPRate[mode_lib->vba.NumberOfActivePlanes] = dout->dp_rate;
+		mode_lib->vba.ODMUse[mode_lib->vba.NumberOfActivePlanes] = dst->odm_combine;
+		//TODO: Need to assign correct values to dp_multistream vars
+		mode_lib->vba.OutputMultistreamEn[mode_lib->vba.NumberOfActiveSurfaces] = dout->dp_multistream_en;
+		mode_lib->vba.OutputMultistreamId[mode_lib->vba.NumberOfActiveSurfaces] = dout->dp_multistream_id;
 		mode_lib->vba.PitchY[mode_lib->vba.NumberOfActivePlanes] = src->data_pitch;
 		mode_lib->vba.SurfaceWidthY[mode_lib->vba.NumberOfActivePlanes] = src->surface_width_y;
 		mode_lib->vba.SurfaceHeightY[mode_lib->vba.NumberOfActivePlanes] = src->surface_height_y;
@@ -677,6 +813,7 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 		}
 
 		mode_lib->vba.NumberOfActivePlanes++;
+		mode_lib->vba.NumberOfActiveSurfaces++;
 	}
 
 	// handle overlays through BlendingAndTiming
@@ -702,6 +839,8 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 		}
 	}
 
+	mode_lib->vba.SynchronizeTimingsFinal = pipes[0].pipe.dest.synchronize_timings;
+	mode_lib->vba.DCCProgrammingAssumesScanDirectionUnknownFinal = false;
 	mode_lib->vba.UseUnboundedRequesting = dm_unbounded_requesting;
 	for (k = 0; k < mode_lib->vba.cache_num_pipes; ++k) {
 		if (pipes[k].pipe.src.unbounded_req_mode == 0)
@@ -745,6 +884,32 @@ static void fetch_pipe_params(struct display_mode_lib *mode_lib)
 
 	mode_lib->vba.GPUVMEnable = mode_lib->vba.GPUVMEnable && !!ip->gpuvm_enable;
 	mode_lib->vba.HostVMEnable = mode_lib->vba.HostVMEnable && !!ip->hostvm_enable;
+
+	for (k = 0; k < mode_lib->vba.cache_num_pipes; ++k) {
+		mode_lib->vba.ForceOneRowForFrame[k] = pipes[k].pipe.src.force_one_row_for_frame;
+		mode_lib->vba.PteBufferMode[k] = pipes[k].pipe.src.pte_buffer_mode;
+
+		if (mode_lib->vba.PteBufferMode[k] == 0 && mode_lib->vba.GPUVMEnable) {
+			if (mode_lib->vba.ForceOneRowForFrame[k] ||
+				(mode_lib->vba.GPUVMMinPageSizeKBytes[k] > 64*1024) ||
+				(mode_lib->vba.UsesMALLForPStateChange[k] != dm_use_mall_pstate_change_disable) ||
+				(mode_lib->vba.UseMALLForStaticScreen[k] != dm_use_mall_static_screen_disable)) {
+#ifdef __DML_VBA_DEBUG__
+				dml_print("DML::%s: ERROR: Invalid PteBufferMode=%d for plane %0d!\n",
+						__func__, mode_lib->vba.PteBufferMode[k], k);
+				dml_print("DML::%s:  -  ForceOneRowForFrame     = %d\n",
+						__func__, mode_lib->vba.ForceOneRowForFrame[k]);
+				dml_print("DML::%s:  -  GPUVMMinPageSizeKBytes  = %d\n",
+						__func__, mode_lib->vba.GPUVMMinPageSizeKBytes[k]);
+				dml_print("DML::%s:  -  UseMALLForPStateChange  = %d\n",
+						__func__, (int) mode_lib->vba.UsesMALLForPStateChange[k]);
+				dml_print("DML::%s:  -  UseMALLForStaticScreen  = %d\n",
+						__func__, (int) mode_lib->vba.UseMALLForStaticScreen[k]);
+#endif
+				ASSERT(0);
+			}
+		}
+	}
 }
 
 /**
@@ -896,6 +1061,7 @@ void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib)
 	soc_bounding_box_st *soc = &mode_lib->vba.soc;
 	unsigned int k;
 	unsigned int total_pipes = 0;
+	unsigned int pipe_idx = 0;
 
 	mode_lib->vba.VoltageLevel = mode_lib->vba.cache_pipes[0].clks_cfg.voltage;
 	mode_lib->vba.ReturnBW = mode_lib->vba.ReturnBWPerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb];
@@ -917,6 +1083,11 @@ void ModeSupportAndSystemConfiguration(struct display_mode_lib *mode_lib)
 	// Total Available Pipes Support Check
 	for (k = 0; k < mode_lib->vba.NumberOfActivePlanes; ++k) {
 		total_pipes += mode_lib->vba.DPPPerPlane[k];
+		pipe_idx = get_pipe_idx(mode_lib, k);
+		if (mode_lib->vba.cache_pipes[pipe_idx].clks_cfg.dppclk_mhz > 0.0)
+			mode_lib->vba.DPPCLK[k] = mode_lib->vba.cache_pipes[pipe_idx].clks_cfg.dppclk_mhz;
+		else
+			mode_lib->vba.DPPCLK[k] = soc->clock_limits[mode_lib->vba.VoltageLevel].dppclk_mhz;
 	}
 	ASSERT(total_pipes <= DC__NUM_DPP__MAX);
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
index 0603b32971a6..ddf8b19c490e 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/display_mode_vba.h
@@ -58,6 +58,15 @@ dml_get_attr_decl(return_bw);
 dml_get_attr_decl(tcalc);
 dml_get_attr_decl(fraction_of_urgent_bandwidth);
 dml_get_attr_decl(fraction_of_urgent_bandwidth_imm_flip);
+dml_get_attr_decl(cstate_max_cap_mode);
+dml_get_attr_decl(comp_buffer_size_kbytes);
+dml_get_attr_decl(pixel_chunk_size_in_kbyte);
+dml_get_attr_decl(alpha_pixel_chunk_size_in_kbyte);
+dml_get_attr_decl(meta_chunk_size_in_kbyte);
+dml_get_attr_decl(min_pixel_chunk_size_in_byte);
+dml_get_attr_decl(min_meta_chunk_size_in_byte);
+dml_get_attr_decl(fclk_watermark);
+dml_get_attr_decl(usr_retraining_watermark);
 
 #define dml_get_pipe_attr_decl(attr) double get_##attr(struct display_mode_lib *mode_lib, const display_e2e_pipe_params_st *pipes, unsigned int num_pipes, unsigned int which_pipe)
 
@@ -75,6 +84,26 @@ dml_get_pipe_attr_decl(dst_y_per_row_vblank);
 dml_get_pipe_attr_decl(dst_y_prefetch);
 dml_get_pipe_attr_decl(dst_y_per_vm_flip);
 dml_get_pipe_attr_decl(dst_y_per_row_flip);
+dml_get_pipe_attr_decl(dst_y_per_pte_row_nom_l);
+dml_get_pipe_attr_decl(dst_y_per_pte_row_nom_c);
+dml_get_pipe_attr_decl(dst_y_per_meta_row_nom_l);
+dml_get_pipe_attr_decl(dst_y_per_meta_row_nom_c);
+dml_get_pipe_attr_decl(dpte_row_height_linear_c);
+dml_get_pipe_attr_decl(swath_height_l);
+dml_get_pipe_attr_decl(swath_height_c);
+dml_get_pipe_attr_decl(det_stored_buffer_size_l_bytes);
+dml_get_pipe_attr_decl(det_stored_buffer_size_c_bytes);
+dml_get_pipe_attr_decl(dpte_group_size_in_bytes);
+dml_get_pipe_attr_decl(vm_group_size_in_bytes);
+dml_get_pipe_attr_decl(det_buffer_size_kbytes);
+dml_get_pipe_attr_decl(dpte_row_height_linear_l);
+dml_get_pipe_attr_decl(refcyc_per_pte_group_nom_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_pte_group_nom_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_pte_group_vblank_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_pte_group_vblank_c_in_us);
+dml_get_pipe_attr_decl(refcyc_per_pte_group_flip_l_in_us);
+dml_get_pipe_attr_decl(refcyc_per_pte_group_flip_c_in_us);
+dml_get_pipe_attr_decl(pte_buffer_mode);
 dml_get_pipe_attr_decl(refcyc_per_vm_group_vblank);
 dml_get_pipe_attr_decl(refcyc_per_vm_group_flip);
 dml_get_pipe_attr_decl(refcyc_per_vm_req_vblank);
@@ -108,6 +137,8 @@ dml_get_pipe_attr_decl(vupdate_width);
 dml_get_pipe_attr_decl(vready_offset);
 dml_get_pipe_attr_decl(vready_at_or_after_vsync);
 dml_get_pipe_attr_decl(min_dst_y_next_start);
+dml_get_pipe_attr_decl(vstartup_calculated);
+dml_get_pipe_attr_decl(subviewport_lines_needed_in_mall);
 
 double get_total_immediate_flip_bytes(
 		struct display_mode_lib *mode_lib,
@@ -126,6 +157,16 @@ unsigned int dml_get_voltage_level(
 		const display_e2e_pipe_params_st *pipes,
 		unsigned int num_pipes);
 
+unsigned int get_total_surface_size_in_mall_bytes(
+		struct display_mode_lib *mode_lib,
+		const display_e2e_pipe_params_st *pipes,
+		unsigned int num_pipes);
+unsigned int get_pipe_idx(struct display_mode_lib *mode_lib, unsigned int plane_idx);
+
+bool get_is_phantom_pipe(struct display_mode_lib *mode_lib,
+		const display_e2e_pipe_params_st *pipes,
+		unsigned int num_pipes,
+		unsigned int pipe_idx);
 void PixelClockAdjustmentForProgressiveToInterlaceUnit(struct display_mode_lib *mode_lib);
 
 bool Calculate256BBlockSizes(
@@ -138,6 +179,39 @@ bool Calculate256BBlockSizes(
 		unsigned int *BlockWidth256BytesY,
 		unsigned int *BlockWidth256BytesC);
 
+struct DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation {
+	unsigned int dummy_integer_array[2][DC__NUM_DPP__MAX];
+	double dummy_single_array[2][DC__NUM_DPP__MAX];
+	unsigned int dummy_long_array[2][DC__NUM_DPP__MAX];
+	double dummy_double_array[2][DC__NUM_DPP__MAX];
+	bool dummy_boolean_array[DC__NUM_DPP__MAX];
+	bool dummy_boolean;
+	bool dummy_boolean2;
+	enum output_encoder_class dummy_output_encoder_array[DC__NUM_DPP__MAX];
+	DmlPipe SurfaceParameters[DC__NUM_DPP__MAX];
+	bool dummy_boolean_array2[2][DC__NUM_DPP__MAX];
+	unsigned int ReorderBytes;
+	unsigned int VMDataOnlyReturnBW;
+	double HostVMInefficiencyFactor;
+};
+
+struct dml32_ModeSupportAndSystemConfigurationFull {
+	unsigned int dummy_integer_array[8][DC__NUM_DPP__MAX];
+	double dummy_double_array[2][DC__NUM_DPP__MAX];
+	DmlPipe SurfParameters[DC__NUM_DPP__MAX];
+	double dummy_single[5];
+	double dummy_single2[5];
+	SOCParametersList mSOCParameters;
+	unsigned int MaximumSwathWidthSupportLuma;
+	unsigned int MaximumSwathWidthSupportChroma;
+};
+
+struct dummy_vars {
+	struct DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation
+	DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerformanceCalculation;
+	struct dml32_ModeSupportAndSystemConfigurationFull dml32_ModeSupportAndSystemConfigurationFull;
+};
+
 struct vba_vars_st {
 	ip_params_st ip;
 	soc_bounding_box_st soc;
@@ -154,6 +228,7 @@ struct vba_vars_st {
 	double DISPCLKWithRampingRoundedToDFSGranularity;
 	double DISPCLKWithoutRampingRoundedToDFSGranularity;
 	double MaxDispclkRoundedToDFSGranularity;
+	double MaxDppclkRoundedToDFSGranularity;
 	bool DCCEnabledAnyPlane;
 	double ReturnBandwidthToDCN;
 	unsigned int TotalActiveDPP;
@@ -169,6 +244,8 @@ struct vba_vars_st {
 	double NextMaxVStartup;
 	double VBlankTime;
 	double SmallestVBlank;
+	enum dm_prefetch_modes AllowForPStateChangeOrStutterInVBlankFinal; // Mode Support only
+	double DCFCLKDeepSleepPerSurface[DC__NUM_DPP__MAX];
 	double DCFCLKDeepSleepPerPlane[DC__NUM_DPP__MAX];
 	double EffectiveDETPlusLBLinesLuma;
 	double EffectiveDETPlusLBLinesChroma;
@@ -212,6 +289,14 @@ struct vba_vars_st {
 	double UrgentLatencyPixelMixedWithVMData;
 	double UrgentLatencyVMDataOnly;
 	double UrgentLatency; // max of the above three
+	double USRRetrainingLatency;
+	double SMNLatency;
+	double FCLKChangeLatency;
+	unsigned int MALLAllocatedForDCNFinal;
+	double DefaultGPUVMMinPageSizeKBytes; // Default for the project
+	double MaxAveragePercentOfIdealFabricBWDisplayCanUseInNormalSystemOperation;
+	double MaxAveragePercentOfIdealDRAMBWDisplayCanUseInNormalSystemOperationSTROBE;
+	double PercentOfIdealDRAMBWReceivedAfterUrgLatencySTROBE;
 	double WritebackLatency;
 	double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelDataOnly; // Mode Support
 	double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData; // Mode Support
@@ -284,6 +369,14 @@ struct vba_vars_st {
 	double DPPCLKDelayCNVCCursor;
 	double DISPCLKDelaySubtotal;
 	bool ProgressiveToInterlaceUnitInOPP;
+	unsigned int CompressedBufferSegmentSizeInkByteFinal;
+	unsigned int CompbufReservedSpace64B;
+	unsigned int CompbufReservedSpaceZs;
+	unsigned int LineBufferSizeFinal;
+	unsigned int MaximumPixelsPerLinePerDSCUnit;
+	unsigned int AlphaPixelChunkSizeInKByte;
+	double MinPixelChunkSizeBytes;
+	unsigned int DCCMetaBufferSizeBytes;
 	// Pipe/Plane Parameters
 	int VoltageLevel;
 	double FabricClock;
@@ -291,6 +384,23 @@ struct vba_vars_st {
 	double DISPCLK;
 	double SOCCLK;
 	double DCFCLK;
+	unsigned int MaxTotalDETInKByte;
+	unsigned int MinCompressedBufferSizeInKByte;
+	unsigned int NumberOfActiveSurfaces;
+	bool ViewportStationary[DC__NUM_DPP__MAX];
+	unsigned int RefreshRate[DC__NUM_DPP__MAX];
+	double       OutputBPP[DC__NUM_DPP__MAX];
+	unsigned int GPUVMMinPageSizeKBytes[DC__NUM_DPP__MAX];
+	bool SynchronizeTimingsFinal;
+	bool SynchronizeDRRDisplaysForUCLKPStateChangeFinal;
+	bool ForceOneRowForFrame[DC__NUM_DPP__MAX];
+	unsigned int ViewportXStartY[DC__NUM_DPP__MAX];
+	unsigned int ViewportXStartC[DC__NUM_DPP__MAX];
+	enum dm_rotation_angle SourceRotation[DC__NUM_DPP__MAX];
+	bool DRRDisplay[DC__NUM_DPP__MAX];
+	bool PteBufferMode[DC__NUM_DPP__MAX];
+	enum dm_output_type OutputType[DC__NUM_DPP__MAX];
+	enum dm_output_rate OutputRate[DC__NUM_DPP__MAX];
 
 	unsigned int NumberOfActivePlanes;
 	unsigned int NumberOfDSCSlices[DC__NUM_DPP__MAX];
@@ -392,6 +502,12 @@ struct vba_vars_st {
 	double StutterEfficiencyNotIncludingVBlank;
 	double NonUrgentLatencyTolerance;
 	double MinActiveDRAMClockChangeLatencySupported;
+	double Z8StutterEfficiencyBestCase;
+	unsigned int Z8NumberOfStutterBurstsPerFrameBestCase;
+	double Z8StutterEfficiencyNotIncludingVBlankBestCase;
+	double StutterPeriodBestCase;
+	Watermarks      Watermark;
+	bool DCHUBBUB_ARB_CSTATE_MAX_CAP_MODE;
 
 	// These are the clocks calcuated by the library but they are not actually
 	// used explicitly. They are fetched by tests and then possibly used. The
@@ -399,6 +515,10 @@ struct vba_vars_st {
 	double DISPCLK_calculated;
 	double DPPCLK_calculated[DC__NUM_DPP__MAX];
 
+	bool ImmediateFlipSupportedSurface[DC__NUM_DPP__MAX];
+
+	bool Use_One_Row_For_Frame[DC__NUM_DPP__MAX];
+	bool Use_One_Row_For_Frame_Flip[DC__NUM_DPP__MAX];
 	unsigned int VUpdateOffsetPix[DC__NUM_DPP__MAX];
 	double VUpdateWidthPix[DC__NUM_DPP__MAX];
 	double VReadyOffsetPix[DC__NUM_DPP__MAX];
@@ -429,6 +549,7 @@ struct vba_vars_st {
 	double DRAMSpeedPerState[DC__VOLTAGE_STATES];
 	double MaxDispclk[DC__VOLTAGE_STATES];
 	int VoltageOverrideLevel;
+	double PHYCLKD32PerState[DC__VOLTAGE_STATES];
 
 	/*outputs*/
 	bool ScaleRatioAndTapsSupport;
@@ -452,6 +573,51 @@ struct vba_vars_st {
 	bool PitchSupport;
 	enum dm_validation_status ValidationStatus[DC__VOLTAGE_STATES];
 
+	/* Mode Support Reason */
+	bool P2IWith420;
+	bool DSCOnlyIfNecessaryWithBPP;
+	bool DSC422NativeNotSupported;
+	bool LinkRateDoesNotMatchDPVersion;
+	bool LinkRateForMultistreamNotIndicated;
+	bool BPPForMultistreamNotIndicated;
+	bool MultistreamWithHDMIOreDP;
+	bool MSOOrODMSplitWithNonDPLink;
+	bool NotEnoughLanesForMSO;
+	bool ViewportExceedsSurface;
+
+	bool ImmediateFlipRequiredButTheRequirementForEachSurfaceIsNotSpecified;
+	bool ImmediateFlipOrHostVMAndPStateWithMALLFullFrameOrPhantomPipe;
+	bool InvalidCombinationOfMALLUseForPStateAndStaticScreen;
+	bool InvalidCombinationOfMALLUseForPState;
+
+	enum dm_output_link_dp_rate OutputLinkDPRate[DC__NUM_DPP__MAX];
+	double PrefetchLinesYThisState[DC__NUM_DPP__MAX];
+	double PrefetchLinesCThisState[DC__NUM_DPP__MAX];
+	double meta_row_bandwidth_this_state[DC__NUM_DPP__MAX];
+	double dpte_row_bandwidth_this_state[DC__NUM_DPP__MAX];
+	double DPTEBytesPerRowThisState[DC__NUM_DPP__MAX];
+	double PDEAndMetaPTEBytesPerFrameThisState[DC__NUM_DPP__MAX];
+	double MetaRowBytesThisState[DC__NUM_DPP__MAX];
+	bool use_one_row_for_frame[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool use_one_row_for_frame_flip[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	bool use_one_row_for_frame_this_state[DC__NUM_DPP__MAX];
+	bool use_one_row_for_frame_flip_this_state[DC__NUM_DPP__MAX];
+
+	unsigned int OutputTypeAndRatePerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	double RequiredDISPCLKPerSurface[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int MicroTileHeightY[DC__NUM_DPP__MAX];
+	unsigned int MicroTileHeightC[DC__NUM_DPP__MAX];
+	unsigned int MicroTileWidthY[DC__NUM_DPP__MAX];
+	unsigned int MicroTileWidthC[DC__NUM_DPP__MAX];
+	bool ImmediateFlipRequiredFinal;
+	bool DCCProgrammingAssumesScanDirectionUnknownFinal;
+	bool EnoughWritebackUnits;
+	bool ODMCombine2To1SupportCheckOK[DC__VOLTAGE_STATES];
+	bool NumberOfDP2p0Support;
+	unsigned int MaxNumDP2p0Streams;
+	unsigned int MaxNumDP2p0Outputs;
+	enum dm_output_type OutputTypePerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	enum dm_output_rate OutputRatePerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
 	double WritebackLineBufferLumaBufferSize;
 	double WritebackLineBufferChromaBufferSize;
 	double WritebackMinHSCLRatio;
@@ -647,6 +813,7 @@ struct vba_vars_st {
 	double         dummy7[DC__NUM_DPP__MAX];
 	double         dummy8[DC__NUM_DPP__MAX];
 	double         dummy13[DC__NUM_DPP__MAX];
+	double         dummy_double_array[2][DC__NUM_DPP__MAX];
 	unsigned int        dummyinteger1ms[DC__NUM_DPP__MAX];
 	double        dummyinteger2ms[DC__NUM_DPP__MAX];
 	unsigned int        dummyinteger3[DC__NUM_DPP__MAX];
@@ -666,6 +833,9 @@ struct vba_vars_st {
 	unsigned int        dummyintegerarr2[DC__NUM_DPP__MAX];
 	unsigned int        dummyintegerarr3[DC__NUM_DPP__MAX];
 	unsigned int        dummyintegerarr4[DC__NUM_DPP__MAX];
+	unsigned int        dummy_integer_array[8][DC__NUM_DPP__MAX];
+	unsigned int        dummy_integer_array22[22][DC__NUM_DPP__MAX];
+
 	bool           dummysinglestring;
 	bool           SingleDPPViewportSizeSupportPerPlane[DC__NUM_DPP__MAX];
 	double         PlaneRequiredDISPCLKWithODMCombine2To1;
@@ -896,8 +1066,8 @@ struct vba_vars_st {
 	double meta_row_bandwidth[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	double DETBufferSizeYAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	double DETBufferSizeCAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
-	int swath_width_luma_ub_all_states[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
-	int swath_width_chroma_ub_all_states[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int swath_width_luma_ub_all_states[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int swath_width_chroma_ub_all_states[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	bool NotUrgentLatencyHiding[DC__VOLTAGE_STATES][2];
 	unsigned int SwathHeightYAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
 	unsigned int SwathHeightCAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
@@ -972,6 +1142,74 @@ struct vba_vars_st {
 	int Z8NumberOfStutterBurstsPerFrame;
 	unsigned int MaximumDSCBitsPerComponent;
 	unsigned int NotEnoughUrgentLatencyHidingA[DC__VOLTAGE_STATES][2];
+	double UrgentLatencyWithUSRRetraining;
+	double UrgLatencyWithUSRRetraining[DC__VOLTAGE_STATES];
+	double ReadBandwidthSurfaceLuma[DC__NUM_DPP__MAX];
+	double ReadBandwidthSurfaceChroma[DC__NUM_DPP__MAX];
+	double SurfaceRequiredDISPCLKWithoutODMCombine;
+	double SurfaceRequiredDISPCLK;
+	double SurfaceRequiredDISPCLKWithODMCombine2To1;
+	double SurfaceRequiredDISPCLKWithODMCombine4To1;
+	double MinActiveFCLKChangeLatencySupported;
+	double dummy14;
+	double dummy15;
+	int MinVoltageLevel;
+	int MaxVoltageLevel;
+	unsigned int TotalNumberOfSingleDPPSurfaces[DC__VOLTAGE_STATES][2];
+	unsigned int CompressedBufferSizeInkByteAllStates[DC__VOLTAGE_STATES][2];
+	unsigned int DETBufferSizeInKByteAllStates[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];
+	unsigned int DETBufferSizeInKByteThisState[DC__NUM_DPP__MAX];
+	unsigned int SurfaceSizeInMALL[DC__NUM_DPP__MAX];
+	bool ExceededMALLSize;
+	bool PTE_BUFFER_MODE[DC__NUM_DPP__MAX];
+	unsigned int BIGK_FRAGMENT_SIZE[DC__NUM_DPP__MAX];
+	unsigned int dummyinteger33;
+	unsigned int CompressedBufferSizeInkByteThisState;
+	enum dm_fclock_change_support FCLKChangeSupport[DC__VOLTAGE_STATES][2];
+	Latencies myLatency;
+	Latencies mLatency;
+	Watermarks DummyWatermark;
+	bool USRRetrainingSupport[DC__VOLTAGE_STATES][2];
+	bool dummyBooleanvector1[DC__NUM_DPP__MAX];
+	bool dummyBooleanvector2[DC__NUM_DPP__MAX];
+	enum dm_use_mall_for_pstate_change_mode UsesMALLForPStateChange[DC__NUM_DPP__MAX];
+	bool NotEnoughUrgentLatencyHiding_dml32[DC__VOLTAGE_STATES][2];
+	bool UnboundedRequestEnabledAllStates[DC__VOLTAGE_STATES][2];
+	bool SingleDPPViewportSizeSupportPerSurface[DC__NUM_DPP__MAX];
+	enum dm_use_mall_for_static_screen_mode UseMALLForStaticScreen[DC__NUM_DPP__MAX];
+	bool UnboundedRequestEnabledThisState;
+	bool DRAMClockChangeRequirementFinal;
+	bool FCLKChangeRequirementFinal;
+	bool USRRetrainingRequiredFinal;
+	bool MALLUseFinal;
+	bool ConfigurableDETSizeEnFinal;
+	bool dummyboolean;
+	unsigned int DETSizeOverride[DC__NUM_DPP__MAX];
+	unsigned int nomDETInKByte;
+	enum mpc_combine_affinity  MPCCombineUse[DC__NUM_DPP__MAX];
+	bool MPCCombineMethodIncompatible;
+	unsigned int RequiredSlots[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];
+	bool ExceededMultistreamSlots[DC__VOLTAGE_STATES];
+	enum odm_combine_policy ODMUse[DC__NUM_DPP__MAX];
+	unsigned int OutputMultistreamId[DC__NUM_DPP__MAX];
+	bool OutputMultistreamEn[DC__NUM_DPP__MAX];
+	bool UsesMALLForStaticScreen[DC__NUM_DPP__MAX];
+	double MaxActiveDRAMClockChangeLatencySupported[DC__NUM_DPP__MAX];
+	double WritebackAllowFCLKChangeEndPosition[DC__NUM_DPP__MAX];
+	bool PTEBufferSizeNotExceededPerState[DC__NUM_DPP__MAX]; // new in DML32
+	bool DCCMetaBufferSizeNotExceededPerState[DC__NUM_DPP__MAX]; // new in DML32
+	bool NotEnoughDSCSlices[DC__VOLTAGE_STATES];
+	bool PixelsPerLinePerDSCUnitSupport[DC__VOLTAGE_STATES];
+	bool DCCMetaBufferSizeNotExceeded[DC__VOLTAGE_STATES][2];
+	unsigned int dpte_row_height_linear[DC__NUM_DPP__MAX];
+	unsigned int dpte_row_height_linear_chroma[DC__NUM_DPP__MAX];
+	unsigned int BlockHeightY[DC__NUM_DPP__MAX];
+	unsigned int BlockHeightC[DC__NUM_DPP__MAX];
+	unsigned int BlockWidthY[DC__NUM_DPP__MAX];
+	unsigned int BlockWidthC[DC__NUM_DPP__MAX];
+	unsigned int SubViewportLinesNeededInMALL[DC__NUM_DPP__MAX];
+	bool VActiveBandwithSupport[DC__VOLTAGE_STATES][2];
+	struct dummy_vars dummy_vars;
 };
 
 bool CalculateMinAndMaxPrefetchMode(
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
index d2273674e872..b4b51e51fc25 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml_wrapper.c
@@ -23,7 +23,6 @@
  *
  */
 
-#include "dml_wrapper.h"
 #include "resource.h"
 #include "core_types.h"
 #include "dsc.h"
@@ -86,25 +85,6 @@ static void get_pixel_clock_parameters(
 
 }
 
-static enum dc_status build_pipe_hw_param(struct pipe_ctx *pipe_ctx)
-{
-	get_pixel_clock_parameters(pipe_ctx, &pipe_ctx->stream_res.pix_clk_params);
-
-	if (pipe_ctx->clock_source)
-		pipe_ctx->clock_source->funcs->get_pix_clk_dividers(
-			pipe_ctx->clock_source,
-			&pipe_ctx->stream_res.pix_clk_params,
-			&pipe_ctx->pll_settings);
-
-	pipe_ctx->stream->clamping.pixel_encoding = pipe_ctx->stream->timing.pixel_encoding;
-
-	resource_build_bit_depth_reduction_params(pipe_ctx->stream,
-					&pipe_ctx->stream->bit_depth_params);
-	build_clamping_params(pipe_ctx->stream);
-
-	return DC_OK;
-}
-
 static void resource_build_bit_depth_reduction_params(struct dc_stream_state *stream,
 		struct bit_depth_reduction_params *fmt_bit_depth)
 {
@@ -231,6 +211,30 @@ static void resource_build_bit_depth_reduction_params(struct dc_stream_state *st
 	fmt_bit_depth->pixel_encoding = pixel_encoding;
 }
 
+/* This is placed after resource_build_bit_depth_reduction_params, as VS
+ * otherwise complains about its declaration.
+ */
+
+static enum dc_status build_pipe_hw_param(struct pipe_ctx *pipe_ctx)
+{
+
+	get_pixel_clock_parameters(pipe_ctx, &pipe_ctx->stream_res.pix_clk_params);
+
+	if (pipe_ctx->clock_source)
+		pipe_ctx->clock_source->funcs->get_pix_clk_dividers(
+			pipe_ctx->clock_source,
+			&pipe_ctx->stream_res.pix_clk_params,
+			&pipe_ctx->pll_settings);
+
+	pipe_ctx->stream->clamping.pixel_encoding = pipe_ctx->stream->timing.pixel_encoding;
+
+	resource_build_bit_depth_reduction_params(pipe_ctx->stream,
+		&pipe_ctx->stream->bit_depth_params);
+	build_clamping_params(pipe_ctx->stream);
+
+	return DC_OK;
+}
+
 bool dml_validate_dsc(struct dc *dc, struct dc_state *new_ctx)
 {
 	int i;
@@ -1130,7 +1134,7 @@ static int dml_populate_dml_pipes_from_context(
 {
 	int i, pipe_cnt;
 	struct resource_context *res_ctx = &context->res_ctx;
-	struct pipe_ctx *pipe;
+	struct pipe_ctx *pipe = NULL;	// Fix potentially uninitialized error from VS
 
 	populate_dml_pipes_from_context_base(dc, context, pipes, fast_validate);
 
@@ -1296,6 +1300,7 @@ static void dml_update_soc_for_wm_a(struct dc *dc, struct dc_state *context)
 		context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us;
 		context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_enter_plus_exit_time_us;
 		context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.sr_exit_time_us;
+		context->bw_ctx.dml.soc.fclk_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.fclk_change_latency_us;
 	}
 }
 
@@ -1593,11 +1598,8 @@ static void dml_calculate_dlg_params(
 		context->bw_ctx.bw.dcn.clk.z9_support = DCN_Z9_SUPPORT_ALLOW;
 	*/
 	context->bw_ctx.bw.dcn.clk.dtbclk_en = is_dtbclk_required(dc, context);
-	/* TODO : Uncomment the below line and make changes
-	 * as per DML nomenclature once it is available.
-	 * context->bw_ctx.bw.dcn.clk.fclk_p_state_change_support = context->bw_ctx.dml.vba.fclk_pstate_support;
-	 */
-
+	context->bw_ctx.bw.dcn.clk.fclk_p_state_change_support =
+		context->bw_ctx.dml.vba.FCLKChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
 	if (context->bw_ctx.bw.dcn.clk.dispclk_khz < dc->debug.min_disp_clk_khz)
 		context->bw_ctx.bw.dcn.clk.dispclk_khz = dc->debug.min_disp_clk_khz;
 
@@ -1699,12 +1701,11 @@ static void dml_calculate_wm_and_dlg(
 	context->bw_ctx.bw.dcn.watermarks.b.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	context->bw_ctx.bw.dcn.watermarks.b.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	context->bw_ctx.bw.dcn.watermarks.b.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-	//context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.fclk_pstate_change_ns = get_wm_fclk_pstate(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	//context->bw_ctx.bw.dcn.watermarks.b.usr_retraining_ns = get_wm_usr_retraining(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 
 	/* Temporary, to have some fclk_pstate_change_ns and usr_retraining_ns wm values until DML is implemented */
-	context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.fclk_pstate_change_ns = context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.pstate_change_ns / 4;
-	context->bw_ctx.bw.dcn.watermarks.b.usr_retraining_ns = context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.pstate_change_ns / 8;
+	//context->bw_ctx.bw.dcn.watermarks.b.usr_retraining = context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.pstate_change_ns / 8;
 
 	/* Set D:
 	 * All clocks min.
@@ -1736,13 +1737,11 @@ static void dml_calculate_wm_and_dlg(
 	context->bw_ctx.bw.dcn.watermarks.d.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	context->bw_ctx.bw.dcn.watermarks.d.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	context->bw_ctx.bw.dcn.watermarks.d.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-	//context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.fclk_pstate_change_ns = get_wm_fclk_pstate(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	//context->bw_ctx.bw.dcn.watermarks.d.usr_retraining_ns = get_wm_usr_retraining(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 
 	/* Temporary, to have some fclk_pstate_change_ns and usr_retraining_ns wm values until DML is implemented */
-	context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.fclk_pstate_change_ns = context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.pstate_change_ns / 4;
-	context->bw_ctx.bw.dcn.watermarks.d.usr_retraining_ns = context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.pstate_change_ns / 8;
-
+	//context->bw_ctx.bw.dcn.watermarks.d.usr_retraining = context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.pstate_change_ns / 8;
 	/* Set C, for Dummy P-State:
 	 * All clocks min.
 	 * DCFCLK: Min, as reported by PM FW, when available
@@ -1773,13 +1772,11 @@ static void dml_calculate_wm_and_dlg(
 	context->bw_ctx.bw.dcn.watermarks.c.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	context->bw_ctx.bw.dcn.watermarks.c.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	context->bw_ctx.bw.dcn.watermarks.c.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
-	//context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.fclk_pstate_change_ns = get_wm_fclk_pstate(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 	//context->bw_ctx.bw.dcn.watermarks.c.usr_retraining_ns = get_wm_usr_retraining(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
 
 	/* Temporary, to have some fclk_pstate_change_ns and usr_retraining_ns wm values until DML is implemented */
-	context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.fclk_pstate_change_ns = context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.pstate_change_ns / 4;
-	context->bw_ctx.bw.dcn.watermarks.c.usr_retraining_ns = context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.pstate_change_ns / 8;
-
+	//context->bw_ctx.bw.dcn.watermarks.c.usr_retraining = context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.pstate_change_ns / 8;
 	if ((!pstate_en) && (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].valid)) {
 		/* The only difference between A and C is p-state latency, if p-state is not supported
 		 * with full p-state latency we want to calculate DLG based on dummy p-state latency,
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 09/43] drm/amd/display: add CLKMGR changes for DCN32/321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (6 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 08/43] drm/amd/display: DML " Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 10/43] drm/amd/display: add DCN32/321 specific files for Display Core Alex Deucher
                   ` (33 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add support for managing DCN3.2.x clocks.

v2: squash in smu interface updates (Alex)
v3: Drop unused SMU header (Alex)

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/dc/clk_mgr/Makefile   |   35 +
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |   17 +-
 .../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c  |   15 +-
 .../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.h  |   60 +
 .../drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h |   65 +
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c  |  628 +++++
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h  |   39 +
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c  |  117 +
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h  |   49 +
 .../dc/clk_mgr/dcn32/smu13_driver_if.h        |  108 +
 .../gpu/drm/amd/display/dc/core/dc_link_dp.c  |    3 +
 drivers/gpu/drm/amd/display/dc/dc.h           |    3 +
 .../amd/display/dc/dcn321/dcn321_resource.c   | 2325 +++++++++++++++++
 .../gpu/drm/amd/display/dc/inc/hw/clk_mgr.h   |    2 +
 .../amd/display/dc/inc/hw/clk_mgr_internal.h  |   45 +-
 15 files changed, 3506 insertions(+), 5 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/smu13_driver_if.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
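
For orientation, a minimal sketch of the bring-up flow this patch adds, assuming
the usual DC init ordering: dcn32_clk_mgr_construct() wires up registers and
allocates the watermark table, dcn32_init_clocks() queries PM FW for the DPM
levels of each clock and builds the watermark range table, and
dcn32_notify_wm_ranges() hands that table to PM FW. The wrapper below is
illustrative only; in the driver these steps are spread across clock manager
creation and DC hardware init.

/* Illustrative only: condenses the bring-up steps added by this patch */
static void example_dcn32_clk_mgr_bringup(struct dc_context *ctx,
					  struct clk_mgr_internal *clk_mgr,
					  struct pp_smu_funcs *pp_smu,
					  struct dccg *dccg)
{
	dcn32_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);	/* regs, WM table alloc */
	clk_mgr->base.funcs->init_clocks(&clk_mgr->base);	/* DPM levels + WM ranges */
	clk_mgr->base.funcs->notify_wm_ranges(&clk_mgr->base);	/* push table to PM FW */
}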

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
index 817871917632..c935c10b5f4f 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
@@ -172,4 +172,39 @@ AMD_DAL_CLK_MGR_DCN316 = $(addprefix $(AMDDALPATH)/dc/clk_mgr/dcn316/,$(CLK_MGR_
 
 AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN316)
 
+###############################################################################
+# DCN32
+###############################################################################
+CLK_MGR_DCN32 = dcn32_clk_mgr.o dcn32_clk_mgr_smu_msg.o
+
+AMD_DAL_CLK_MGR_DCN32 = $(addprefix $(AMDDALPATH)/dc/clk_mgr/dcn32/,$(CLK_MGR_DCN32))
+
+ifdef CONFIG_X86
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn32/dcn32_clk_mgr.o := -msse
+endif
+
+ifdef CONFIG_PPC64
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn32/dcn32_clk_mgr.o := -mhard-float -maltivec
+endif
+
+ifdef CONFIG_CC_IS_GCC
+ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+IS_OLD_GCC = 1
+endif
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn32/dcn32_clk_mgr.o := -mhard-float
+endif
+
+ifdef CONFIG_X86
+ifdef IS_OLD_GCC
+# Stack alignment mismatch, proceed with caution.
+# GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
+# (8B stack alignment).
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn32/dcn32_clk_mgr.o := -mpreferred-stack-boundary=4
+else
+CFLAGS_$(AMDDALPATH)/dc/clk_mgr/dcn32/dcn32_clk_mgr.o := -msse2
+endif
+endif
+
+AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN32)
+
 endif
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
index 772bffda68cc..e803e59abd56 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
@@ -45,6 +45,7 @@
 #include "dcn31/dcn31_clk_mgr.h"
 #include "dcn315/dcn315_clk_mgr.h"
 #include "dcn316/dcn316_clk_mgr.h"
+#include "dcn32/dcn32_clk_mgr.h"
 
 
 int clk_mgr_helper_get_active_display_cnt(
@@ -316,8 +317,19 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
 		return &clk_mgr->base.base;
 	}
 		break;
-#endif
+	case AMDGPU_FAMILY_GC_11_0_0: {
+	    struct clk_mgr_internal *clk_mgr = kzalloc(sizeof(*clk_mgr), GFP_KERNEL);
+
+	    if (clk_mgr == NULL) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	    }
 
+	    dcn32_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+	    return &clk_mgr->base;
+	    break;
+#endif
+	}
 	default:
 		ASSERT(0); /* Unknown Asic */
 		break;
@@ -360,6 +372,9 @@ void dc_destroy_clk_mgr(struct clk_mgr *clk_mgr_base)
 		dcn316_clk_mgr_destroy(clk_mgr);
 		break;
 
+	case AMDGPU_FAMILY_GC_11_0_0:
+		dcn32_clk_mgr_destroy(clk_mgr);
+		break;
 	default:
 		break;
 	}
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
index 5ed6a93d1708..914708cefc79 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
@@ -129,7 +129,7 @@ static noinline void dcn3_build_wm_range_table(struct clk_mgr_internal *clk_mgr)
 
 	/* Set C - Dummy P-State - P-State latency set to "dummy p-state" value */
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].valid = true;
-	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.pstate_latency_us = clk_mgr->base.ctx->dc->dml.soc.dummy_pstate_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.pstate_latency_us = 0;
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.sr_exit_time_us = sr_exit_time_us;
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.sr_enter_plus_exit_time_us = sr_enter_plus_exit_time_us;
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.wm_type = WATERMARKS_DUMMY_PSTATE;
@@ -137,6 +137,14 @@ static noinline void dcn3_build_wm_range_table(struct clk_mgr_internal *clk_mgr)
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.max_dcfclk = 0xFFFF;
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.min_uclk = min_uclk_mhz;
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.max_uclk = 0xFFFF;
+	clk_mgr->base.bw_params->dummy_pstate_table[0].dram_speed_mts = 1600;
+	clk_mgr->base.bw_params->dummy_pstate_table[0].dummy_pstate_latency_us = 38;
+	clk_mgr->base.bw_params->dummy_pstate_table[1].dram_speed_mts = 8000;
+	clk_mgr->base.bw_params->dummy_pstate_table[1].dummy_pstate_latency_us = 9;
+	clk_mgr->base.bw_params->dummy_pstate_table[2].dram_speed_mts = 10000;
+	clk_mgr->base.bw_params->dummy_pstate_table[2].dummy_pstate_latency_us = 8;
+	clk_mgr->base.bw_params->dummy_pstate_table[3].dram_speed_mts = 16000;
+	clk_mgr->base.bw_params->dummy_pstate_table[3].dummy_pstate_latency_us = 5;
 
 	/* Set D - MALL - SR enter and exit times adjusted for MALL */
 	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].valid = true;
@@ -517,6 +525,8 @@ static void dcn30_notify_link_rate_change(struct clk_mgr *clk_mgr_base, struct d
 	if (!clk_mgr->smu_present)
 		return;
 
+	/* TODO - DP2.0 HW: calculate link 128b/132 link rate in clock manager with new formula */
+
 	clk_mgr->cur_phyclk_req_table[link->link_index] = link->cur_link_settings.link_rate * LINK_RATE_REF_FREQ_IN_KHZ;
 
 	for (i = 0; i < MAX_PIPES * 2; i++) {
@@ -620,7 +630,8 @@ void dcn3_clk_mgr_construct(
 
 void dcn3_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr)
 {
-	kfree(clk_mgr->base.bw_params);
+	if (clk_mgr->base.bw_params)
+		kfree(clk_mgr->base.bw_params);
 
 	if (clk_mgr->wm_range_table)
 		dm_helpers_free_gpu_mem(clk_mgr->base.ctx, DC_MEM_ALLOC_TYPE_GART,
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.h
index dd4a0bd72458..2cd95ec38266 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.h
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.h
@@ -26,6 +26,66 @@
 #ifndef __DCN30_CLK_MGR_H__
 #define __DCN30_CLK_MGR_H__
 
+//CLK1_CLK_PLL_REQ
+#ifndef CLK11_CLK1_CLK_PLL_REQ__FbMult_int__SHIFT
+#define CLK11_CLK1_CLK_PLL_REQ__FbMult_int__SHIFT                                                                   0x0
+#define CLK11_CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT                                                                  0xc
+#define CLK11_CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT                                                                  0x10
+#define CLK11_CLK1_CLK_PLL_REQ__FbMult_int_MASK                                                                     0x000001FFL
+#define CLK11_CLK1_CLK_PLL_REQ__PllSpineDiv_MASK                                                                    0x0000F000L
+#define CLK11_CLK1_CLK_PLL_REQ__FbMult_frac_MASK                                                                    0xFFFF0000L
+//CLK1_CLK0_DFS_CNTL
+#define CLK11_CLK1_CLK0_DFS_CNTL__CLK0_DIVIDER__SHIFT                                                               0x0
+#define CLK11_CLK1_CLK0_DFS_CNTL__CLK0_DIVIDER_MASK                                                                 0x0000007FL
+/*DPREF clock related*/
+#define CLK0_CLK3_DFS_CNTL__CLK3_DIVIDER__SHIFT                                                               0x0
+#define CLK0_CLK3_DFS_CNTL__CLK3_DIVIDER_MASK                                                                 0x0000007FL
+#define CLK1_CLK3_DFS_CNTL__CLK3_DIVIDER__SHIFT                                                               0x0
+#define CLK1_CLK3_DFS_CNTL__CLK3_DIVIDER_MASK                                                                 0x0000007FL
+#define CLK2_CLK3_DFS_CNTL__CLK3_DIVIDER__SHIFT                                                               0x0
+#define CLK2_CLK3_DFS_CNTL__CLK3_DIVIDER_MASK                                                                 0x0000007FL
+#define CLK3_CLK3_DFS_CNTL__CLK3_DIVIDER__SHIFT                                                               0x0
+#define CLK3_CLK3_DFS_CNTL__CLK3_DIVIDER_MASK                                                                 0x0000007FL
+
+//CLK3_0_CLK3_CLK_PLL_REQ
+#define CLK3_0_CLK3_CLK_PLL_REQ__FbMult_int__SHIFT                                                            0x0
+#define CLK3_0_CLK3_CLK_PLL_REQ__PllSpineDiv__SHIFT                                                           0xc
+#define CLK3_0_CLK3_CLK_PLL_REQ__FbMult_frac__SHIFT                                                           0x10
+#define CLK3_0_CLK3_CLK_PLL_REQ__FbMult_int_MASK                                                              0x000001FFL
+#define CLK3_0_CLK3_CLK_PLL_REQ__PllSpineDiv_MASK                                                             0x0000F000L
+#define CLK3_0_CLK3_CLK_PLL_REQ__FbMult_frac_MASK                                                             0xFFFF0000L
+
+#define mmCLK0_CLK2_DFS_CNTL                            0x16C55
+#define mmCLK00_CLK0_CLK2_DFS_CNTL                      0x16C55
+#define mmCLK01_CLK0_CLK2_DFS_CNTL                      0x16E55
+#define mmCLK02_CLK0_CLK2_DFS_CNTL                      0x17055
+
+#define mmCLK0_CLK3_DFS_CNTL                            0x16C60
+#define mmCLK00_CLK0_CLK3_DFS_CNTL                      0x16C60
+#define mmCLK01_CLK0_CLK3_DFS_CNTL                      0x16E60
+#define mmCLK02_CLK0_CLK3_DFS_CNTL                      0x17060
+#define mmCLK03_CLK0_CLK3_DFS_CNTL                      0x17260
+
+#define mmCLK0_CLK_PLL_REQ                              0x16C10
+#define mmCLK00_CLK0_CLK_PLL_REQ                        0x16C10
+#define mmCLK01_CLK0_CLK_PLL_REQ                        0x16E10
+#define mmCLK02_CLK0_CLK_PLL_REQ                        0x17010
+#define mmCLK03_CLK0_CLK_PLL_REQ                        0x17210
+
+#define mmCLK1_CLK_PLL_REQ                              0x1B00D
+#define mmCLK10_CLK1_CLK_PLL_REQ                        0x1B00D
+#define mmCLK11_CLK1_CLK_PLL_REQ                        0x1B20D
+#define mmCLK12_CLK1_CLK_PLL_REQ                        0x1B40D
+#define mmCLK13_CLK1_CLK_PLL_REQ                        0x1B60D
+
+#define mmCLK2_CLK_PLL_REQ                              0x17E0D
+
+/*AMCLK*/
+
+#define mmCLK11_CLK1_CLK0_DFS_CNTL                      0x1B23F
+#define mmCLK11_CLK1_CLK_PLL_REQ                        0x1B20D
+
+#endif
 void dcn3_init_clocks(struct clk_mgr *clk_mgr_base);
 
 void dcn3_clk_mgr_construct(struct dc_context *ctx,
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h
new file mode 100644
index 000000000000..c427be6add8a
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dalsmc.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#ifndef DALSMC_H
+#define DALSMC_H
+
+#define DALSMC_VERSION 0x1
+
+// SMU Response Codes:
+#define DALSMC_Result_OK                   0x1
+#define DALSMC_Result_Failed               0xFF
+#define DALSMC_Result_UnknownCmd           0xFE
+#define DALSMC_Result_CmdRejectedPrereq    0xFD
+#define DALSMC_Result_CmdRejectedBusy      0xFC
+
+// Message Definitions:
+#define DALSMC_MSG_TestMessage                    0x1
+#define DALSMC_MSG_GetSmuVersion                  0x2
+#define DALSMC_MSG_GetDriverIfVersion             0x3
+#define DALSMC_MSG_GetMsgHeaderVersion            0x4
+#define DALSMC_MSG_SetDalDramAddrHigh             0x5
+#define DALSMC_MSG_SetDalDramAddrLow              0x6
+#define DALSMC_MSG_TransferTableSmu2Dram          0x7
+#define DALSMC_MSG_TransferTableDram2Smu          0x8
+#define DALSMC_MSG_SetHardMinByFreq               0x9
+#define DALSMC_MSG_SetHardMaxByFreq               0xA
+#define DALSMC_MSG_GetDpmFreqByIndex              0xB
+#define DALSMC_MSG_GetDcModeMaxDpmFreq            0xC
+#define DALSMC_MSG_SetMinDeepSleepDcfclk          0xD
+#define DALSMC_MSG_NumOfDisplays                  0xE
+#define DALSMC_MSG_SetExternalClientDfCstateAllow 0xF
+#define DALSMC_MSG_BacoAudioD3PME                 0x10
+#define DALSMC_MSG_SetFclkSwitchAllow             0x11
+#define DALSMC_MSG_SetCabForUclkPstate            0x12
+#define DALSMC_MSG_SetWorstCaseUclkLatency        0x13
+#define DALSMC_Message_Count                      0x14
+
+typedef enum {
+	FCLK_SWITCH_DISALLOW,
+	FCLK_SWITCH_ALLOW,
+} FclkSwitchAllow_e;
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
new file mode 100644
index 000000000000..419cc83b3d21
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -0,0 +1,628 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dccg.h"
+#include "clk_mgr_internal.h"
+
+#include "dcn32/dcn32_clk_mgr_smu_msg.h"
+#include "dcn20/dcn20_clk_mgr.h"
+#include "dce100/dce_clk_mgr.h"
+#include "reg_helper.h"
+#include "core_types.h"
+#include "dm_helpers.h"
+
+#include "atomfirmware.h"
+#include "smu13_driver_if.h"
+
+#include "dcn/dcn_3_2_0_offset.h"
+#include "dcn/dcn_3_2_0_sh_mask.h"
+
+#include "dcn32/dcn32_clk_mgr.h"
+
+#define DCN_BASE__INST0_SEG1                       0x000000C0
+
+#define mmCLK1_CLK_PLL_REQ                              0x16E37
+#define mmCLK1_CLK0_DFS_CNTL                            0x16E69
+#define mmCLK1_CLK1_DFS_CNTL                            0x16E6C
+#define mmCLK1_CLK2_DFS_CNTL                            0x16E6F
+#define mmCLK1_CLK3_DFS_CNTL                            0x16E72
+#define mmCLK1_CLK4_DFS_CNTL                            0x16E75
+
+#define CLK1_CLK_PLL_REQ__FbMult_int_MASK               0x000001ffUL
+#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK              0x0000f000UL
+#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK              0xffff0000UL
+#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT             0x00000000
+#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT            0x0000000c
+#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT            0x00000010
+
+#define mmCLK01_CLK0_CLK_PLL_REQ                        0x16E37
+#define mmCLK01_CLK0_CLK0_DFS_CNTL                      0x16E64
+#define mmCLK01_CLK0_CLK1_DFS_CNTL                      0x16E67
+#define mmCLK01_CLK0_CLK2_DFS_CNTL                      0x16E6A
+#define mmCLK01_CLK0_CLK3_DFS_CNTL                      0x16E6D
+#define mmCLK01_CLK0_CLK4_DFS_CNTL                      0x16E70
+
+#define CLK0_CLK_PLL_REQ__FbMult_int_MASK               0x000001ffL
+#define CLK0_CLK_PLL_REQ__PllSpineDiv_MASK              0x0000f000L
+#define CLK0_CLK_PLL_REQ__FbMult_frac_MASK              0xffff0000L
+#define CLK0_CLK_PLL_REQ__FbMult_int__SHIFT             0x00000000
+#define CLK0_CLK_PLL_REQ__PllSpineDiv__SHIFT            0x0000000c
+#define CLK0_CLK_PLL_REQ__FbMult_frac__SHIFT            0x00000010
+
+#undef FN
+#define FN(reg_name, field_name) \
+	clk_mgr->clk_mgr_shift->field_name, clk_mgr->clk_mgr_mask->field_name
+
+#define REG(reg) \
+	(clk_mgr->regs->reg)
+
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+#define BASE(seg) BASE_INNER(seg)
+
+#define SR(reg_name)\
+		.reg_name = BASE(reg ## reg_name ## _BASE_IDX) +  \
+					reg ## reg_name
+
+#define CLK_SR_DCN32(reg_name)\
+	.reg_name = mm ## reg_name
+
+static const struct clk_mgr_registers clk_mgr_regs_dcn32 = {
+	CLK_REG_LIST_DCN32()
+};
+
+static const struct clk_mgr_shift clk_mgr_shift_dcn32 = {
+	CLK_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct clk_mgr_mask clk_mgr_mask_dcn32 = {
+	CLK_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+
+#define CLK_SR_DCN321(reg_name, block, inst)\
+	.reg_name = mm ## block ## _ ## reg_name
+
+static const struct clk_mgr_registers clk_mgr_regs_dcn321 = {
+	CLK_REG_LIST_DCN321()
+};
+
+static const struct clk_mgr_shift clk_mgr_shift_dcn321 = {
+	CLK_COMMON_MASK_SH_LIST_DCN321(__SHIFT)
+};
+
+static const struct clk_mgr_mask clk_mgr_mask_dcn321 = {
+	CLK_COMMON_MASK_SH_LIST_DCN321(_MASK)
+};
+
+
+/* Query SMU for all clock states for a particular clock */
+static void dcn32_init_single_clock(struct clk_mgr_internal *clk_mgr, PPCLK_e clk, unsigned int *entry_0,
+		unsigned int *num_levels)
+{
+	unsigned int i;
+	char *entry_i = (char *)entry_0;
+
+	uint32_t ret = dcn30_smu_get_dpm_freq_by_index(clk_mgr, clk, 0xFF);
+
+	if (ret & (1 << 31))
+		/* fine-grained, only min and max */
+		*num_levels = 2;
+	else
+		/* discrete, a number of fixed states */
+		/* will set num_levels to 0 on failure */
+		*num_levels = ret & 0xFF;
+
+	/* if the initial message failed, num_levels will be 0 */
+	for (i = 0; i < *num_levels; i++) {
+		*((unsigned int *)entry_i) = (dcn30_smu_get_dpm_freq_by_index(clk_mgr, clk, i) & 0xFFFF);
+		entry_i += sizeof(clk_mgr->base.bw_params->clk_table.entries[0]);
+	}
+}
+
+static void dcn32_build_wm_range_table(struct clk_mgr_internal *clk_mgr)
+{
+	/* defaults */
+	double pstate_latency_us = clk_mgr->base.ctx->dc->dml.soc.dram_clock_change_latency_us;
+	double fclk_change_latency_us = clk_mgr->base.ctx->dc->dml.soc.fclk_change_latency_us;
+	double sr_exit_time_us = clk_mgr->base.ctx->dc->dml.soc.sr_exit_time_us;
+	double sr_enter_plus_exit_time_us = clk_mgr->base.ctx->dc->dml.soc.sr_enter_plus_exit_time_us;
+	/* For min clocks, use the values reported by PM FW and report those as min */
+	uint16_t min_uclk_mhz			= clk_mgr->base.bw_params->clk_table.entries[0].memclk_mhz;
+	uint16_t min_dcfclk_mhz			= clk_mgr->base.bw_params->clk_table.entries[0].dcfclk_mhz;
+	uint16_t setb_min_uclk_mhz		= min_uclk_mhz;
+	uint16_t setb_min_dcfclk_mhz	= min_dcfclk_mhz;
+	/* For Set B ranges use min clocks state 2 when available, and report those to PM FW */
+	if (clk_mgr->base.ctx->dc->dml.soc.clock_limits[2].dcfclk_mhz)
+		setb_min_dcfclk_mhz = clk_mgr->base.ctx->dc->dml.soc.clock_limits[2].dcfclk_mhz;
+	if (clk_mgr->base.bw_params->clk_table.entries[2].memclk_mhz)
+		setb_min_uclk_mhz = clk_mgr->base.bw_params->clk_table.entries[2].memclk_mhz;
+
+	/* Set A - Normal - default values */
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].valid = true;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us = pstate_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].dml_input.fclk_change_latency_us = fclk_change_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].dml_input.sr_exit_time_us = sr_exit_time_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].dml_input.sr_enter_plus_exit_time_us = sr_enter_plus_exit_time_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.wm_type = WATERMARKS_CLOCK_RANGE;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.min_dcfclk = min_dcfclk_mhz;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.max_dcfclk = 0xFFFF;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.min_uclk = min_uclk_mhz;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.max_uclk = 0xFFFF;
+
+	/* Set B - Performance - higher clocks, using DPM[2] DCFCLK and UCLK */
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].valid = true;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].dml_input.pstate_latency_us = pstate_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].dml_input.fclk_change_latency_us = fclk_change_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].dml_input.sr_exit_time_us = sr_exit_time_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].dml_input.sr_enter_plus_exit_time_us = sr_enter_plus_exit_time_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.wm_type = WATERMARKS_CLOCK_RANGE;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.min_dcfclk = setb_min_dcfclk_mhz;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.max_dcfclk = 0xFFFF;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.min_uclk = setb_min_uclk_mhz;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.max_uclk = 0xFFFF;
+
+	/* Set C - Dummy P-State - P-State latency set to "dummy p-state" value */
+	/* 'DalDummyClockChangeLatencyNs' registry key option set to 0x7FFFFFFF can be used to disable Set C for dummy p-state */
+	if (clk_mgr->base.ctx->dc->bb_overrides.dummy_clock_change_latency_ns != 0x7FFFFFFF) {
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].valid = true;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.pstate_latency_us = 38;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.fclk_change_latency_us = fclk_change_latency_us;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.sr_exit_time_us = sr_exit_time_us;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].dml_input.sr_enter_plus_exit_time_us = sr_enter_plus_exit_time_us;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.wm_type = WATERMARKS_DUMMY_PSTATE;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.min_dcfclk = min_dcfclk_mhz;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.max_dcfclk = 0xFFFF;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.min_uclk = min_uclk_mhz;
+		clk_mgr->base.bw_params->wm_table.nv_entries[WM_C].pmfw_breakdown.max_uclk = 0xFFFF;
+		clk_mgr->base.bw_params->dummy_pstate_table[0].dram_speed_mts = clk_mgr->base.bw_params->clk_table.entries[0].memclk_mhz * 16;
+		clk_mgr->base.bw_params->dummy_pstate_table[0].dummy_pstate_latency_us = 38;
+		clk_mgr->base.bw_params->dummy_pstate_table[1].dram_speed_mts = clk_mgr->base.bw_params->clk_table.entries[1].memclk_mhz * 16;
+		clk_mgr->base.bw_params->dummy_pstate_table[1].dummy_pstate_latency_us = 9;
+		clk_mgr->base.bw_params->dummy_pstate_table[2].dram_speed_mts = clk_mgr->base.bw_params->clk_table.entries[2].memclk_mhz * 16;
+		clk_mgr->base.bw_params->dummy_pstate_table[2].dummy_pstate_latency_us = 8;
+		clk_mgr->base.bw_params->dummy_pstate_table[3].dram_speed_mts = clk_mgr->base.bw_params->clk_table.entries[3].memclk_mhz * 16;
+		clk_mgr->base.bw_params->dummy_pstate_table[3].dummy_pstate_latency_us = 5;
+	}
+	/* Set D - MALL - SR enter and exit times specific to MALL; TBD after bringup or in a later phase, for now use DRAM values / 2 */
+	/* For MALL, DRAM clock change latency is N/A; for watermark calculations use the lowest dummy P-state latency */
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].valid = true;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].dml_input.pstate_latency_us = clk_mgr->base.bw_params->dummy_pstate_table[3].dummy_pstate_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].dml_input.fclk_change_latency_us = fclk_change_latency_us;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].dml_input.sr_exit_time_us = sr_exit_time_us / 2; // TBD
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].dml_input.sr_enter_plus_exit_time_us = sr_enter_plus_exit_time_us / 2; // TBD
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].pmfw_breakdown.wm_type = WATERMARKS_MALL;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].pmfw_breakdown.min_dcfclk = min_dcfclk_mhz;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].pmfw_breakdown.max_dcfclk = 0xFFFF;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].pmfw_breakdown.min_uclk = min_uclk_mhz;
+	clk_mgr->base.bw_params->wm_table.nv_entries[WM_D].pmfw_breakdown.max_uclk = 0xFFFF;
+}
+
+void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+	unsigned int num_levels;
+
+	memset(&(clk_mgr_base->clks), 0, sizeof(struct dc_clocks));
+	clk_mgr_base->clks.p_state_change_support = true;
+	clk_mgr_base->clks.prev_p_state_change_support = true;
+	clk_mgr_base->clks.fclk_prev_p_state_change_support = true;
+	clk_mgr->smu_present = false;
+
+	if (!clk_mgr_base->bw_params)
+		return;
+
+	if (!clk_mgr_base->force_smu_not_present && dcn30_smu_get_smu_version(clk_mgr, &clk_mgr->smu_ver))
+		clk_mgr->smu_present = true;
+
+	if (!clk_mgr->smu_present)
+		return;
+
+	dcn30_smu_check_driver_if_version(clk_mgr);
+	dcn30_smu_check_msg_header_version(clk_mgr);
+
+	/* DCFCLK */
+	dcn32_init_single_clock(clk_mgr, PPCLK_DCFCLK,
+			&clk_mgr_base->bw_params->clk_table.entries[0].dcfclk_mhz,
+			&num_levels);
+
+	/* SOCCLK */
+	dcn32_init_single_clock(clk_mgr, PPCLK_SOCCLK,
+					&clk_mgr_base->bw_params->clk_table.entries[0].socclk_mhz,
+					&num_levels);
+	/* DTBCLK */
+	dcn32_init_single_clock(clk_mgr, PPCLK_DTBCLK,
+			&clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz,
+			&num_levels);
+
+	/* DISPCLK */
+	dcn32_init_single_clock(clk_mgr, PPCLK_DISPCLK,
+			&clk_mgr_base->bw_params->clk_table.entries[0].dispclk_mhz,
+			&num_levels);
+
+
+	/* Get UCLK, update bounding box */
+	clk_mgr_base->funcs->get_memclk_states_from_smu(clk_mgr_base);
+
+	/* WM range table */
+	dcn32_build_wm_range_table(clk_mgr);
+}
+
+static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
+			struct dc_state *context,
+			bool safe_to_lower)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+	struct dc_clocks *new_clocks = &context->bw_ctx.bw.dcn.clk;
+	struct dc *dc = clk_mgr_base->ctx->dc;
+	int display_count;
+	bool update_dppclk = false;
+	bool update_dispclk = false;
+	bool enter_display_off = false;
+	bool dpp_clock_lowered = false;
+	struct dmcu *dmcu = clk_mgr_base->ctx->dc->res_pool->dmcu;
+	bool force_reset = false;
+	bool update_uclk = false;
+	bool p_state_change_support;
+	bool fclk_p_state_change_support;
+	int total_plane_count;
+
+	if (dc->work_arounds.skip_clock_update)
+		return;
+
+	if (clk_mgr_base->clks.dispclk_khz == 0 ||
+			(dc->debug.force_clock_mode & 0x1)) {
+		/* This is from resume or boot up; if the forced_clock cfg option is
+		 * used, we bypass programming dispclk and DPPCLK, but still need to
+		 * set them for S3.
+		 */
+		force_reset = true;
+
+		dcn2_read_clocks_from_hw_dentist(clk_mgr_base);
+
+		/* Force_clock_mode 0x1: force reset the clock even if it is the same
+		 * clock, as long as it is in the Passive level.
+		 */
+	}
+	display_count = clk_mgr_helper_get_active_display_cnt(dc, context);
+
+	if (display_count == 0)
+		enter_display_off = true;
+
+	if (enter_display_off == safe_to_lower)
+		dcn30_smu_set_num_of_displays(clk_mgr, display_count);
+
+	if (dc->debug.force_min_dcfclk_mhz > 0)
+		new_clocks->dcfclk_khz = (new_clocks->dcfclk_khz > (dc->debug.force_min_dcfclk_mhz * 1000)) ?
+				new_clocks->dcfclk_khz : (dc->debug.force_min_dcfclk_mhz * 1000);
+
+	if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
+		clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
+		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
+	}
+
+	if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz)) {
+		clk_mgr_base->clks.dcfclk_deep_sleep_khz = new_clocks->dcfclk_deep_sleep_khz;
+		dcn30_smu_set_min_deep_sleep_dcef_clk(clk_mgr, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_deep_sleep_khz));
+	}
+
+	if (should_set_clock(safe_to_lower, new_clocks->socclk_khz, clk_mgr_base->clks.socclk_khz))
+		/* We don't actually care about socclk, don't notify SMU of hard min */
+		clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
+
+	clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
+	clk_mgr_base->clks.fclk_prev_p_state_change_support = clk_mgr_base->clks.fclk_p_state_change_support;
+
+	total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
+	p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
+	fclk_p_state_change_support = new_clocks->fclk_p_state_change_support || (total_plane_count == 0);
+	if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
+		clk_mgr_base->clks.p_state_change_support = p_state_change_support;
+
+		/* to disable P-State switching, set UCLK min = max */
+		if (!clk_mgr_base->clks.p_state_change_support)
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+	}
+
+	if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support)) {
+		clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
+
+		/* To disable FCLK P-state switching, send FCLK_PSTATE_NOTSUPPORTED message to PMFW */
+		if (!clk_mgr_base->clks.fclk_p_state_change_support) {
+			/* Handle code for sending a message to PMFW that FCLK P-state change is not supported */
+			dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_NOTSUPPORTED);
+		}
+	}
+
+	/* Always update saved value, even if new value not set due to P-State switching unsupported */
+	if (should_set_clock(safe_to_lower, new_clocks->dramclk_khz, clk_mgr_base->clks.dramclk_khz)) {
+		clk_mgr_base->clks.dramclk_khz = new_clocks->dramclk_khz;
+		update_uclk = true;
+	}
+
+	/* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
+	if (clk_mgr_base->clks.p_state_change_support &&
+			(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
+		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+
+	if (clk_mgr_base->clks.fclk_p_state_change_support &&
+			(update_uclk || !clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
+		/* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
+		dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
+	}
+
+	if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr_base->clks.dppclk_khz)) {
+		if (clk_mgr_base->clks.dppclk_khz > new_clocks->dppclk_khz)
+			dpp_clock_lowered = true;
+
+		clk_mgr_base->clks.dppclk_khz = new_clocks->dppclk_khz;
+		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+		update_dppclk = true;
+	}
+
+	if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
+		clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
+		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
+		update_dispclk = true;
+	}
+
+	if (dc->config.forced_clocks == false || (force_reset && safe_to_lower)) {
+		if (dpp_clock_lowered) {
+			/* if clock is being lowered, increase DTO before lowering refclk */
+			dcn20_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
+			dcn20_update_clocks_update_dentist(clk_mgr, context);
+		} else {
+			/* if clock is being raised, increase refclk before lowering DTO */
+			if (update_dppclk || update_dispclk)
+				dcn20_update_clocks_update_dentist(clk_mgr, context);
+			/* There is a check inside dcn20_update_clocks_update_dpp_dto which ensures
+			 * that we do not lower dto when it is not safe to lower. We do not need to
+			 * compare the current and new dppclk before calling this function.
+			 */
+			dcn20_update_clocks_update_dpp_dto(clk_mgr, context, safe_to_lower);
+		}
+	}
+
+	if (update_dispclk && dmcu && dmcu->funcs->is_dmcu_initialized(dmcu))
+		/*update dmcu for wait_loop count*/
+		dmcu->funcs->set_psr_wait_loop(dmcu,
+				clk_mgr_base->clks.dispclk_khz / 1000 / 7);
+}
+
+void dcn32_clock_read_ss_info(struct clk_mgr_internal *clk_mgr)
+{
+	struct dc_bios *bp = clk_mgr->base.ctx->dc_bios;
+	int ss_info_num = bp->funcs->get_ss_entry_number(
+			bp, AS_SIGNAL_TYPE_GPU_PLL);
+
+	if (ss_info_num) {
+		struct spread_spectrum_info info = { { 0 } };
+		enum bp_result result = bp->funcs->get_spread_spectrum_info(
+				bp, AS_SIGNAL_TYPE_GPU_PLL, 0, &info);
+
+		/* SSInfo.spreadSpectrumPercentage != 0 is a sign
+		 * that SS is enabled
+		 */
+		if (result == BP_RESULT_OK &&
+				info.spread_spectrum_percentage != 0) {
+			clk_mgr->ss_on_dprefclk = true;
+			clk_mgr->dprefclk_ss_divider = info.spread_percentage_divider;
+
+			if (info.type.CENTER_MODE == 0) {
+				/* Currently for DP Reference clock we
+				 * need only SS percentage for
+				 * downspread
+				 */
+				clk_mgr->dprefclk_ss_percentage =
+						info.spread_spectrum_percentage;
+			}
+		}
+	}
+}
+static void dcn32_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+	WatermarksExternal_t *table = (WatermarksExternal_t *) clk_mgr->wm_range_table;
+
+	if (!clk_mgr->smu_present)
+		return;
+
+	if (!table)
+		return;
+
+	memset(table, 0, sizeof(*table));
+
+	dcn30_smu_set_dram_addr_high(clk_mgr, clk_mgr->wm_range_table_addr >> 32);
+	dcn30_smu_set_dram_addr_low(clk_mgr, clk_mgr->wm_range_table_addr & 0xFFFFFFFF);
+	dcn32_smu_transfer_wm_table_dram_2_smu(clk_mgr);
+}
+
+/* Set min memclk to minimum, either constrained by the current mode or DPM0 */
+static void dcn32_set_hard_min_memclk(struct clk_mgr *clk_mgr_base, bool current_mode)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+
+	if (!clk_mgr->smu_present)
+		return;
+
+	if (current_mode) {
+		if (clk_mgr_base->clks.p_state_change_support)
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+					khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+		else
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+	} else {
+		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+				clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
+	}
+}
+
+/* Set max memclk to highest DPM value */
+static void dcn32_set_hard_max_memclk(struct clk_mgr *clk_mgr_base)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+
+	if (!clk_mgr->smu_present)
+		return;
+
+	dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK,
+			clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+}
+
+/* Get current memclk states, update bounding box */
+static void dcn32_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+	unsigned int num_levels;
+
+	if (!clk_mgr->smu_present)
+		return;
+
+	/* Refresh memclk states */
+	dcn32_init_single_clock(clk_mgr, PPCLK_UCLK,
+			&clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz,
+			&num_levels);
+	clk_mgr_base->bw_params->clk_table.num_entries = num_levels ? num_levels : 1;
+
+	/* Refresh bounding box */
+	clk_mgr_base->ctx->dc->res_pool->funcs->update_bw_bounding_box(
+			clk_mgr->base.ctx->dc, clk_mgr_base->bw_params);
+}
+
+static bool dcn32_are_clock_states_equal(struct dc_clocks *a,
+					struct dc_clocks *b)
+{
+	if (a->dispclk_khz != b->dispclk_khz)
+		return false;
+	else if (a->dppclk_khz != b->dppclk_khz)
+		return false;
+	else if (a->dcfclk_khz != b->dcfclk_khz)
+		return false;
+	else if (a->dcfclk_deep_sleep_khz != b->dcfclk_deep_sleep_khz)
+		return false;
+	else if (a->dramclk_khz != b->dramclk_khz)
+		return false;
+	else if (a->p_state_change_support != b->p_state_change_support)
+		return false;
+	else if (a->fclk_p_state_change_support != b->fclk_p_state_change_support)
+		return false;
+
+	return true;
+}
+
+static void dcn32_enable_pme_wa(struct clk_mgr *clk_mgr_base)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+
+	if (!clk_mgr->smu_present)
+		return;
+
+	dcn30_smu_set_pme_workaround(clk_mgr);
+}
+
+static bool dcn32_is_smu_present(struct clk_mgr *clk_mgr_base)
+{
+	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+	return clk_mgr->smu_present;
+}
+
+
+static struct clk_mgr_funcs dcn32_funcs = {
+		.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+		.update_clocks = dcn32_update_clocks,
+		.init_clocks = dcn32_init_clocks,
+		.notify_wm_ranges = dcn32_notify_wm_ranges,
+		.set_hard_min_memclk = dcn32_set_hard_min_memclk,
+		.set_hard_max_memclk = dcn32_set_hard_max_memclk,
+		.get_memclk_states_from_smu = dcn32_get_memclk_states_from_smu,
+		.are_clock_states_equal = dcn32_are_clock_states_equal,
+		.enable_pme_wa = dcn32_enable_pme_wa,
+		.is_smu_present = dcn32_is_smu_present,
+};
+
+void dcn32_clk_mgr_construct(
+		struct dc_context *ctx,
+		struct clk_mgr_internal *clk_mgr,
+		struct pp_smu_funcs *pp_smu,
+		struct dccg *dccg)
+{
+	clk_mgr->base.ctx = ctx;
+	clk_mgr->base.funcs = &dcn32_funcs;
+	if (ASICREV_IS_GC_11_0_2(clk_mgr->base.ctx->asic_id.hw_internal_rev)) {
+		clk_mgr->regs = &clk_mgr_regs_dcn321;
+		clk_mgr->clk_mgr_shift = &clk_mgr_shift_dcn321;
+		clk_mgr->clk_mgr_mask = &clk_mgr_mask_dcn321;
+	} else {
+		clk_mgr->regs = &clk_mgr_regs_dcn32;
+		clk_mgr->clk_mgr_shift = &clk_mgr_shift_dcn32;
+		clk_mgr->clk_mgr_mask = &clk_mgr_mask_dcn32;
+	}
+
+	clk_mgr->dccg = dccg;
+	clk_mgr->dfs_bypass_disp_clk = 0;
+
+	clk_mgr->dprefclk_ss_percentage = 0;
+	clk_mgr->dprefclk_ss_divider = 1000;
+	clk_mgr->ss_on_dprefclk = false;
+	clk_mgr->dfs_ref_freq_khz = 100000;
+
+	clk_mgr->base.dprefclk_khz = 717000; /* Changed as per DCN3.2_clock_frequency doc */
+	clk_mgr->dccg->ref_dtbclk_khz = 477800;
+
+	/* integer part is now VCO frequency in kHz */
+	clk_mgr->base.dentist_vco_freq_khz = 4300000;//dcn32_get_vco_frequency_from_reg(clk_mgr);
+	/* in case we don't get a value from the register, use default */
+	if (clk_mgr->base.dentist_vco_freq_khz == 0)
+		clk_mgr->base.dentist_vco_freq_khz = 4300000; /* Updated as per HW docs */
+
+	if (clk_mgr->base.boot_snapshot.dprefclk != 0) {
+		//ASSERT(clk_mgr->base.dprefclk_khz == clk_mgr->base.boot_snapshot.dprefclk);
+		//clk_mgr->base.dprefclk_khz = clk_mgr->base.boot_snapshot.dprefclk;
+	}
+	dcn32_clock_read_ss_info(clk_mgr);
+
+	clk_mgr->dfs_bypass_enabled = false;
+
+	clk_mgr->smu_present = false;
+
+	clk_mgr->base.bw_params = kzalloc(sizeof(*clk_mgr->base.bw_params), GFP_KERNEL);
+
+	/* need physical address of table to give to PMFW */
+	clk_mgr->wm_range_table = dm_helpers_allocate_gpu_mem(clk_mgr->base.ctx,
+			DC_MEM_ALLOC_TYPE_GART, sizeof(WatermarksExternal_t),
+			&clk_mgr->wm_range_table_addr);
+}
+
+void dcn32_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr)
+{
+	if (clk_mgr->base.bw_params)
+		kfree(clk_mgr->base.bw_params);
+
+	if (clk_mgr->wm_range_table)
+		dm_helpers_free_gpu_mem(clk_mgr->base.ctx, DC_MEM_ALLOC_TYPE_GART,
+				clk_mgr->wm_range_table);
+}
+
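
The DPM query in dcn32_init_single_clock() above depends on how
dcn30_smu_get_dpm_freq_by_index() encodes its response when called with index
0xFF: bit 31 set means the clock is fine-grained and only min and max levels
are reported, otherwise the low byte holds the number of discrete levels (0 on
failure). A minimal sketch of that decode, pulled out for clarity (the helper
name is illustrative):

/* Illustrative decode of the 0xFF "query level count" response used above */
static unsigned int example_decode_dpm_level_count(uint32_t ret)
{
	if (ret & (1U << 31))
		return 2;	/* fine-grained: only min and max levels */

	return ret & 0xFF;	/* discrete: level count, 0 on failure */
}
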
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h
new file mode 100644
index 000000000000..57e09c7c95f5
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#ifndef __DCN32_CLK_MGR_H_
+#define __DCN32_CLK_MGR_H_
+
+void dcn32_init_clocks(struct clk_mgr *clk_mgr_base);
+
+void dcn32_clk_mgr_construct(struct dc_context *ctx,
+		struct clk_mgr_internal *clk_mgr,
+		struct pp_smu_funcs *pp_smu,
+		struct dccg *dccg);
+
+void dcn32_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr);
+
+
+
+#endif /* __DCN32_CLK_MGR_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
new file mode 100644
index 000000000000..35e8afe6db93
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
@@ -0,0 +1,117 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dcn32_clk_mgr_smu_msg.h"
+
+#include "clk_mgr_internal.h"
+#include "reg_helper.h"
+
+#define mmDAL_MSG_REG  0x1628A
+#define mmDAL_ARG_REG  0x16273
+#define mmDAL_RESP_REG 0x16274
+
+#define DALSMC_MSG_TransferTableDram2Smu          0x8
+
+#define REG(reg_name) \
+	mm ## reg_name
+
+#include "logger_types.h"
+#include "dalsmc.h"
+#include "smu13_driver_if.h"
+
+#define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
+
+
+/*
+ * Function to be used instead of REG_WAIT macro because the wait ends when
+ * the register is NOT EQUAL to zero, and because the translation in msg_if.h
+ * won't work with REG_WAIT.
+ */
+static uint32_t dcn32_smu_wait_for_response(struct clk_mgr_internal *clk_mgr, unsigned int delay_us, unsigned int max_retries)
+{
+	uint32_t reg = 0;
+
+	do {
+		reg = REG_READ(DAL_RESP_REG);
+		if (reg)
+			break;
+
+		if (delay_us >= 1000)
+			msleep(delay_us/1000);
+		else if (delay_us > 0)
+			udelay(delay_us);
+	} while (max_retries--);
+
+	return reg;
+}
+
+static bool dcn32_smu_send_msg_with_param(struct clk_mgr_internal *clk_mgr, uint32_t msg_id, uint32_t param_in, uint32_t *param_out)
+{
+	/* Wait for response register to be ready */
+	dcn32_smu_wait_for_response(clk_mgr, 10, 200000);
+
+	/* Clear response register */
+	REG_WRITE(DAL_RESP_REG, 0);
+
+	/* Set the parameter register for the SMU message */
+	REG_WRITE(DAL_ARG_REG, param_in);
+
+	/* Trigger the message transaction by writing the message ID */
+	REG_WRITE(DAL_MSG_REG, msg_id);
+
+	/* Wait for response */
+	if (dcn32_smu_wait_for_response(clk_mgr, 10, 200000) == DALSMC_Result_OK) {
+		if (param_out)
+			*param_out = REG_READ(DAL_ARG_REG);
+
+		return true;
+	}
+
+	return false;
+}
+
+void dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool enable)
+{
+	smu_print("FCLK P-state support value is : %d\n", enable);
+
+	dcn32_smu_send_msg_with_param(clk_mgr,
+			DALSMC_MSG_SetFclkSwitchAllow, enable ? FCLK_PSTATE_SUPPORTED : FCLK_PSTATE_NOTSUPPORTED, NULL);
+}
+
+void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr)
+{
+	smu_print("SMU Transfer WM table DRAM 2 SMU\n");
+
+	dcn32_smu_send_msg_with_param(clk_mgr,
+			DALSMC_MSG_TransferTableDram2Smu, TABLE_WATERMARKS, NULL);
+}
+
+void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways)
+{
+	smu_print("Numways for SubVP : %d\n", num_ways);
+
+	dcn32_smu_send_msg_with_param(clk_mgr, DALSMC_MSG_SetCabForUclkPstate, num_ways, NULL);
+}
+
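
Both wrappers above follow the same pattern: pick a DALSMC message ID, pass one
u32 argument, and optionally read a u32 result back from the argument register.
As an illustration only (dcn32_smu_send_msg_with_param() is static to this
file, and this wrapper is not part of the patch), a query built on the same
helper could look like:

/* Hypothetical example: query the SMU version over the same mailbox */
static bool example_get_smu_version(struct clk_mgr_internal *clk_mgr,
				    uint32_t *version)
{
	/* DALSMC_MSG_GetSmuVersion comes from dalsmc.h added in this patch */
	return dcn32_smu_send_msg_with_param(clk_mgr,
			DALSMC_MSG_GetSmuVersion, 0, version);
}
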
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
new file mode 100644
index 000000000000..11b25de1527f
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
@@ -0,0 +1,49 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DCN32_CLK_MGR_SMU_MSG_H_
+#define __DCN32_CLK_MGR_SMU_MSG_H_
+
+#include "core_types.h"
+#include "dcn30/dcn30_clk_mgr_smu_msg.h"
+
+#define FCLK_PSTATE_NOTSUPPORTED       0x00
+#define FCLK_PSTATE_SUPPORTED          0x01
+
+/* TODO Remove this MSG ID define after it becomes available in dalsmc */
+#define DALSMC_MSG_SetFclkSwitchAllow	0x11
+#define DALSMC_MSG_SetCabForUclkPstate	0x12
+#define DALSMC_Result_OK				0x1
+
+void
+dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr,
+				   bool enable);
+
+void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
+
+void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways);
+
+
+#endif /* __DCN32_CLK_MGR_SMU_MSG_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/smu13_driver_if.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/smu13_driver_if.h
new file mode 100644
index 000000000000..deeb85047e7b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/smu13_driver_if.h
@@ -0,0 +1,108 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#ifndef SMU13_DRIVER_IF_DCN32_H
+#define SMU13_DRIVER_IF_DCN32_H
+
+// *** IMPORTANT ***
+// PMFW TEAM: Always increment the interface version on any change to this file
+#define SMU13_DRIVER_IF_VERSION  0x18
+
+//Only Clks that have DPM descriptors are listed here
+typedef enum {
+  PPCLK_GFXCLK = 0,
+  PPCLK_SOCCLK,
+  PPCLK_UCLK,
+  PPCLK_FCLK,
+  PPCLK_DCLK_0,
+  PPCLK_VCLK_0,
+  PPCLK_DCLK_1,
+  PPCLK_VCLK_1,
+  PPCLK_DISPCLK,
+  PPCLK_DPPCLK,
+  PPCLK_DPREFCLK,
+  PPCLK_DCFCLK,
+  PPCLK_DTBCLK,
+  PPCLK_COUNT,
+} PPCLK_e;
+
+typedef enum {
+  UCLK_DIV_BY_1 = 0,
+  UCLK_DIV_BY_2,
+  UCLK_DIV_BY_4,
+  UCLK_DIV_BY_8,
+} UCLK_DIV_e;
+
+typedef struct {
+  uint8_t  WmSetting;
+  uint8_t  Flags;
+  uint8_t  Padding[2];
+
+} WatermarkRowGeneric_t;
+
+#define NUM_WM_RANGES 4
+
+typedef enum {
+  WATERMARKS_CLOCK_RANGE = 0,
+  WATERMARKS_DUMMY_PSTATE,
+  WATERMARKS_MALL,
+  WATERMARKS_COUNT,
+} WATERMARKS_FLAGS_e;
+
+typedef struct {
+  // Watermarks
+  WatermarkRowGeneric_t WatermarkRow[NUM_WM_RANGES];
+} Watermarks_t;
+
+typedef struct {
+  Watermarks_t Watermarks;
+  uint32_t  Spare[16];
+
+  uint32_t     MmHubPadding[8]; // SMU internal use
+} WatermarksExternal_t;
+
+// These defines are used with the following messages:
+// SMC_MSG_TransferTableDram2Smu
+// SMC_MSG_TransferTableSmu2Dram
+
+// Table transfer status
+#define TABLE_TRANSFER_OK         0x0
+#define TABLE_TRANSFER_FAILED     0xFF
+#define TABLE_TRANSFER_PENDING    0xAB
+
+// Table types
+#define TABLE_PMFW_PPTABLE            0
+#define TABLE_COMBO_PPTABLE           1
+#define TABLE_WATERMARKS              2
+#define TABLE_AVFS_PSM_DEBUG          3
+#define TABLE_PMSTATUSLOG             4
+#define TABLE_SMU_METRICS             5
+#define TABLE_DRIVER_SMU_CONFIG       6
+#define TABLE_ACTIVITY_MONITOR_COEFF  7
+#define TABLE_OVERDRIVE               8
+#define TABLE_I2C_COMMANDS            9
+#define TABLE_DRIVER_INFO             10
+#define TABLE_COUNT                   11
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index 3c9523218c19..99b17629a203 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -3975,6 +3975,9 @@ static bool decide_mst_link_settings(const struct dc_link *link, struct dc_link_
 	return true;
 }
 
+bool FORCE_RATE = false;
+uint32_t FORCE_LANE_COUNT = 0;
+
 void decide_link_settings(struct dc_stream_state *stream,
 	struct dc_link_settings *link_setting)
 {
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 3960c74482be..2eabcdef8903 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -416,12 +416,15 @@ struct dc_clocks {
 	bool p_state_change_support;
 	enum dcn_zstate_support_state zstate_support;
 	bool dtbclk_en;
+	int dtbclk_khz;
+	bool fclk_p_state_change_support;
 	enum dcn_pwr_state pwr_state;
 	/*
 	 * Elements below are not compared for the purposes of
 	 * optimization required
 	 */
 	bool prev_p_state_change_support;
+	bool fclk_prev_p_state_change_support;
 	enum dtm_pstate dtm_level;
 	int max_supported_dppclk_khz;
 	int max_supported_dispclk_khz;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
new file mode 100644
index 000000000000..a8fb4ab8ced1
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -0,0 +1,2325 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "dm_services.h"
+#include "dc.h"
+
+#include "dcn32/dcn32_init.h"
+
+#include "resource.h"
+#include "include/irq_service_interface.h"
+#include "dcn32/dcn32_resource.h"
+#include "dcn321_resource.h"
+
+#include "dcn20/dcn20_resource.h"
+#include "dcn30/dcn30_resource.h"
+
+#include "dcn10/dcn10_ipp.h"
+#include "dcn30/dcn30_hubbub.h"
+#include "dcn31/dcn31_hubbub.h"
+#include "dcn32/dcn32_hubbub.h"
+#include "dcn32/dcn32_mpc.h"
+#include "dcn32/dcn32_hubp.h"
+#include "irq/dcn32/irq_service_dcn32.h"
+#include "dcn32/dcn32_dpp.h"
+#include "dcn32/dcn32_optc.h"
+#include "dcn20/dcn20_hwseq.h"
+#include "dcn30/dcn30_hwseq.h"
+#include "dce110/dce110_hw_sequencer.h"
+#include "dcn30/dcn30_opp.h"
+#include "dcn20/dcn20_dsc.h"
+#include "dcn30/dcn30_vpg.h"
+#include "dcn30/dcn30_afmt.h"
+#include "dcn30/dcn30_dio_stream_encoder.h"
+#include "dcn32/dcn32_dio_stream_encoder.h"
+#include "dcn31/dcn31_hpo_dp_stream_encoder.h"
+#include "dcn31/dcn31_hpo_dp_link_encoder.h"
+#include "dcn32/dcn32_hpo_dp_link_encoder.h"
+#include "dc_link_dp.h"
+#include "dcn31/dcn31_apg.h"
+#include "dcn31/dcn31_dio_link_encoder.h"
+#include "dcn32/dcn32_dio_link_encoder.h"
+#include "dce/dce_clock_source.h"
+#include "dce/dce_audio.h"
+#include "dce/dce_hwseq.h"
+#include "clk_mgr.h"
+#include "virtual/virtual_stream_encoder.h"
+#include "dml/display_mode_vba.h"
+#include "dcn32/dcn32_dccg.h"
+#include "dcn10/dcn10_resource.h"
+#include "dc_link_ddc.h"
+#include "dcn31/dcn31_panel_cntl.h"
+
+#include "dcn30/dcn30_dwb.h"
+#include "dcn32/dcn32_mmhubbub.h"
+
+#include "dcn/dcn_3_2_1_offset.h"
+#include "dcn/dcn_3_2_1_sh_mask.h"
+#include "nbio/nbio_4_3_0_offset.h"
+
+#include "reg_helper.h"
+#include "dce/dmub_abm.h"
+#include "dce/dmub_psr.h"
+#include "dce/dce_aux.h"
+#include "dce/dce_i2c.h"
+
+#include "dml/dcn30/display_mode_vba_30.h"
+#include "vm_helper.h"
+#include "dcn20/dcn20_vmid.h"
+
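+/* Hardcoded register aperture segment bases for DCN and NBIO instance 0;
+ * these feed the BASE() and NBIO_BASE() macros used by the register lists
+ * below.
+ */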
+#define DCN_BASE__INST0_SEG1                       0x000000C0
+#define DCN_BASE__INST0_SEG2                       0x000034C0
+#define DCN_BASE__INST0_SEG3                       0x00009000
+#define NBIO_BASE__INST0_SEG1                      0x00000014
+
+#define MAX_INSTANCE                                        8
+#define MAX_SEGMENT                                         6
+
+
+struct IP_BASE_INSTANCE
+{
+    unsigned int segment[MAX_SEGMENT];
+};
+
+struct IP_BASE
+{
+    struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
+};
+
+static const struct IP_BASE DCN_BASE = { { { { 0x00000012, 0x000000C0, 0x000034C0, 0x00009000, 0x02403C00, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } },
+                                        { { 0, 0, 0, 0, 0, 0 } } } };
+
+#define DC_LOGGER_INIT(logger)
+#define fixed16_to_double(x) (((double) x) / ((double) (1 << 16)))
+#define fixed16_to_double_to_cpu(x) fixed16_to_double(le32_to_cpu(x))
+
+#define DCN3_2_DEFAULT_DET_SIZE 256
+
+struct _vcs_dpi_ip_params_st dcn3_21_ip = {
+	.gpuvm_enable = 1,
+	.gpuvm_max_page_table_levels = 1,
+	.hostvm_enable = 0,
+	.rob_buffer_size_kbytes = 128,
+	.det_buffer_size_kbytes = DCN3_2_DEFAULT_DET_SIZE,
+	.config_return_buffer_size_in_kbytes = 1280,
+	.compressed_buffer_segment_size_in_kbytes = 64,
+	.meta_fifo_size_in_kentries = 22,
+	.zero_size_buffer_entries = 512,
+	.compbuf_reserved_space_64b = 256,
+	.compbuf_reserved_space_zs = 64,
+	.dpp_output_buffer_pixels = 2560,
+	.opp_output_buffer_lines = 1,
+	.pixel_chunk_size_kbytes = 8,
+	.alpha_pixel_chunk_size_kbytes = 4, // not appearing in spreadsheet, match c code from hw team
+	.min_pixel_chunk_size_bytes = 1024,
+	.dcc_meta_buffer_size_bytes = 6272,
+	.meta_chunk_size_kbytes = 2,
+	.min_meta_chunk_size_bytes = 256,
+	.writeback_chunk_size_kbytes = 8,
+	.ptoi_supported = false,
+	.num_dsc = 4,
+	.maximum_dsc_bits_per_component = 12,
+	.maximum_pixels_per_line_per_dsc_unit = 6016,
+	.dsc422_native_support = true,
+	.is_line_buffer_bpp_fixed = true,
+	.line_buffer_fixed_bpp = 57,
+	.line_buffer_size_bits = 1171920, //DPP doc, DCN3_2_DisplayMode_73.xlsm still shows as 986880 bits with 48 bpp
+	.max_line_buffer_lines = 32,
+	.writeback_interface_buffer_size_kbytes = 90,
+	.max_num_dpp = 4,
+	.max_num_otg = 4,
+	.max_num_hdmi_frl_outputs = 1,
+	.max_num_wb = 1,
+	.max_dchub_pscl_bw_pix_per_clk = 4,
+	.max_pscl_lb_bw_pix_per_clk = 2,
+	.max_lb_vscl_bw_pix_per_clk = 4,
+	.max_vscl_hscl_bw_pix_per_clk = 4,
+	.max_hscl_ratio = 6,
+	.max_vscl_ratio = 6,
+	.max_hscl_taps = 8,
+	.max_vscl_taps = 8,
+	.dpte_buffer_size_in_pte_reqs_luma = 64,
+	.dpte_buffer_size_in_pte_reqs_chroma = 34,
+	.dispclk_ramp_margin_percent = 1,
+	.max_inter_dcn_tile_repeaters = 8,
+	.cursor_buffer_size = 16,
+	.cursor_chunk_size = 2,
+	.writeback_line_buffer_buffer_size = 0,
+	.writeback_min_hscl_ratio = 1,
+	.writeback_min_vscl_ratio = 1,
+	.writeback_max_hscl_ratio = 1,
+	.writeback_max_vscl_ratio = 1,
+	.writeback_max_hscl_taps = 1,
+	.writeback_max_vscl_taps = 1,
+	.dppclk_delay_subtotal = 47,
+	.dppclk_delay_scl = 50,
+	.dppclk_delay_scl_lb_only = 16,
+	.dppclk_delay_cnvc_formatter = 28,
+	.dppclk_delay_cnvc_cursor = 6,
+	.dispclk_delay_subtotal = 125,
+	.dynamic_metadata_vm_enabled = false,
+	.odm_combine_4to1_supported = false,
+	.dcc_supported = true,
+	.max_num_dp2p0_outputs = 2,
+	.max_num_dp2p0_streams = 4,
+};
+
+struct _vcs_dpi_soc_bounding_box_st dcn3_21_soc = {
+	.clock_limits = {
+		{
+			.state = 0,
+			.dcfclk_mhz = 1564.0,
+			.fabricclk_mhz = 400.0,
+			.dispclk_mhz = 2150.0,
+			.dppclk_mhz = 2150.0,
+			.phyclk_mhz = 810.0,
+			.phyclk_d18_mhz = 667.0,
+			.phyclk_d32_mhz = 625.0,
+			.socclk_mhz = 1200.0,
+			.dscclk_mhz = 716.667,
+			.dram_speed_mts = 1600.0,
+			.dtbclk_mhz = 1564.0,
+		},
+	},
+	.num_states = 1,
+	.sr_exit_time_us = 5.20,
+	.sr_enter_plus_exit_time_us = 9.60,
+	.sr_exit_z8_time_us = 285.0,
+	.sr_enter_plus_exit_z8_time_us = 320,
+	.writeback_latency_us = 12.0,
+	.round_trip_ping_latency_dcfclk_cycles = 263,
+	.urgent_latency_pixel_data_only_us = 4.0,
+	.urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
+	.urgent_latency_vm_data_only_us = 4.0,
+	.fclk_change_latency_us = 20,
+	.usr_retraining_latency_us = 2,
+	.smn_latency_us = 2,
+	.mall_allocated_for_dcn_mbytes = 64,
+	.urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096,
+	.urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096,
+	.urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
+	.pct_ideal_sdp_bw_after_urgent = 100.0,
+	.pct_ideal_fabric_bw_after_urgent = 67.0,
+	.pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0,
+	.pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented
+	.pct_ideal_dram_sdp_bw_after_urgent_vm_only = 30.0, // N/A, for now keep as is until DML implemented
+	.pct_ideal_dram_bw_after_urgent_strobe = 67.0,
+	.max_avg_sdp_bw_use_normal_percent = 80.0,
+	.max_avg_fabric_bw_use_normal_percent = 60.0,
+	.max_avg_dram_bw_use_normal_strobe_percent = 50.0,
+	.max_avg_dram_bw_use_normal_percent = 15.0,
+	.num_chans = 8,
+	.dram_channel_width_bytes = 2,
+	.fabric_datapath_to_dcn_data_return_bytes = 64,
+	.return_bus_width_bytes = 64,
+	.downspread_percent = 0.38,
+	.dcn_downspread_percent = 0.5,
+	.dram_clock_change_latency_us = 400,
+	.dispclk_dppclk_vco_speed_mhz = 4300.0,
+	.do_urgent_latency_adjustment = true,
+	.urgent_latency_adjustment_fabric_clock_component_us = 1.0,
+	.urgent_latency_adjustment_fabric_clock_reference_mhz = 1000,
+};
+
+enum dcn321_clk_src_array_id {
+	DCN321_CLK_SRC_PLL0,
+	DCN321_CLK_SRC_PLL1,
+	DCN321_CLK_SRC_PLL2,
+	DCN321_CLK_SRC_PLL3,
+	DCN321_CLK_SRC_PLL4,
+	DCN321_CLK_SRC_TOTAL
+};
+
+/* begin *********************
+ * macros to expand register list macros defined in HW object header files
+ */
+
+/* DCN */
+/* TODO awful hack. fixup dcn20_dwb.h */
+#undef BASE_INNER
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+#define BASE(seg) BASE_INNER(seg)
+
+#define SR(reg_name)\
+		.reg_name = BASE(reg ## reg_name ## _BASE_IDX) +  \
+					reg ## reg_name
+
+#define SRI(reg_name, block, id)\
+	.reg_name = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRI2(reg_name, block, id)\
+	.reg_name = BASE(reg ## reg_name ## _BASE_IDX) + \
+					reg ## reg_name
+
+#define SRIR(var_name, reg_name, block, id)\
+	.var_name = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRII(reg_name, block, id)\
+	.reg_name[id] = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRII_MPC_RMU(reg_name, block, id)\
+	.RMU##_##reg_name[id] = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRII_DWB(reg_name, temp_name, block, id)\
+	.reg_name[id] = BASE(reg ## block ## id ## _ ## temp_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## temp_name
+
+#define DCCG_SRII(reg_name, block, id)\
+	.block ## _ ## reg_name[id] = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define VUPDATE_SRII(reg_name, block, id)\
+	.reg_name[id] = BASE(reg ## reg_name ## _ ## block ## id ## _BASE_IDX) + \
+					reg ## reg_name ## _ ## block ## id
+
+/* NBIO */
+#define NBIO_BASE_INNER(seg) \
+	NBIO_BASE__INST0_SEG ## seg
+
+#define NBIO_BASE(seg) \
+	NBIO_BASE_INNER(seg)
+
+#define NBIO_SR(reg_name)\
+		.reg_name = NBIO_BASE(regBIF_BX0_ ## reg_name ## _BASE_IDX) + \
+					regBIF_BX0_ ## reg_name
+
+#define CTX ctx
+#define REG(reg_name) \
+	(DCN_BASE.instance[0].segment[reg ## reg_name ## _BASE_IDX] + reg ## reg_name)
+
+static const struct bios_registers bios_regs = {
+		NBIO_SR(BIOS_SCRATCH_3),
+		NBIO_SR(BIOS_SCRATCH_6)
+};
+
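+/* Each *_regs(id) helper below expands a common register list macro into a
+ * designated array initializer, so per-instance register addresses are
+ * resolved at compile time from the DCN 3.2.1 offset headers.
+ */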
+#define clk_src_regs(index, pllid)\
+[index] = {\
+	CS_COMMON_REG_LIST_DCN3_0(index, pllid),\
+}
+
+static const struct dce110_clk_src_regs clk_src_regs[] = {
+	clk_src_regs(0, A),
+	clk_src_regs(1, B),
+	clk_src_regs(2, C),
+	clk_src_regs(3, D)
+};
+
+static const struct dce110_clk_src_shift cs_shift = {
+		CS_COMMON_MASK_SH_LIST_DCN3_2(__SHIFT)
+};
+
+static const struct dce110_clk_src_mask cs_mask = {
+		CS_COMMON_MASK_SH_LIST_DCN3_2(_MASK)
+};
+
+#define abm_regs(id)\
+[id] = {\
+		ABM_DCN32_REG_LIST(id)\
+}
+
+static const struct dce_abm_registers abm_regs[] = {
+		abm_regs(0),
+		abm_regs(1),
+		abm_regs(2),
+		abm_regs(3),
+};
+
+static const struct dce_abm_shift abm_shift = {
+		ABM_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dce_abm_mask abm_mask = {
+		ABM_MASK_SH_LIST_DCN32(_MASK)
+};
+
+#define audio_regs(id)\
+[id] = {\
+		AUD_COMMON_REG_LIST(id)\
+}
+
+static const struct dce_audio_registers audio_regs[] = {
+	audio_regs(0),
+	audio_regs(1),
+	audio_regs(2),
+	audio_regs(3),
+	audio_regs(4)
+};
+
+#define DCE120_AUD_COMMON_MASK_SH_LIST(mask_sh)\
+		SF(AZF0ENDPOINT0_AZALIA_F0_CODEC_ENDPOINT_INDEX, AZALIA_ENDPOINT_REG_INDEX, mask_sh),\
+		SF(AZF0ENDPOINT0_AZALIA_F0_CODEC_ENDPOINT_DATA, AZALIA_ENDPOINT_REG_DATA, mask_sh),\
+		AUD_COMMON_MASK_SH_LIST_BASE(mask_sh)
+
+static const struct dce_audio_shift audio_shift = {
+		DCE120_AUD_COMMON_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_audio_mask audio_mask = {
+		DCE120_AUD_COMMON_MASK_SH_LIST(_MASK)
+};
+
+#define vpg_regs(id)\
+[id] = {\
+	VPG_DCN3_REG_LIST(id)\
+}
+
+static const struct dcn30_vpg_registers vpg_regs[] = {
+	vpg_regs(0),
+	vpg_regs(1),
+	vpg_regs(2),
+	vpg_regs(3),
+	vpg_regs(4),
+	vpg_regs(5),
+	vpg_regs(6),
+	vpg_regs(7),
+	vpg_regs(8),
+	vpg_regs(9),
+};
+
+static const struct dcn30_vpg_shift vpg_shift = {
+	DCN3_VPG_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn30_vpg_mask vpg_mask = {
+	DCN3_VPG_MASK_SH_LIST(_MASK)
+};
+
+#define afmt_regs(id)\
+[id] = {\
+	AFMT_DCN3_REG_LIST(id)\
+}
+
+static const struct dcn30_afmt_registers afmt_regs[] = {
+	afmt_regs(0),
+	afmt_regs(1),
+	afmt_regs(2),
+	afmt_regs(3),
+	afmt_regs(4),
+	afmt_regs(5)
+};
+
+static const struct dcn30_afmt_shift afmt_shift = {
+	DCN3_AFMT_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn30_afmt_mask afmt_mask = {
+	DCN3_AFMT_MASK_SH_LIST(_MASK)
+};
+
+#define apg_regs(id)\
+[id] = {\
+	APG_DCN31_REG_LIST(id)\
+}
+
+static const struct dcn31_apg_registers apg_regs[] = {
+	apg_regs(0),
+	apg_regs(1),
+	apg_regs(2),
+	apg_regs(3)
+};
+
+static const struct dcn31_apg_shift apg_shift = {
+	DCN31_APG_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn31_apg_mask apg_mask = {
+		DCN31_APG_MASK_SH_LIST(_MASK)
+};
+
+#define stream_enc_regs(id)\
+[id] = {\
+	SE_DCN32_REG_LIST(id)\
+}
+
+static const struct dcn10_stream_enc_registers stream_enc_regs[] = {
+	stream_enc_regs(0),
+	stream_enc_regs(1),
+	stream_enc_regs(2),
+	stream_enc_regs(3),
+	stream_enc_regs(4)
+};
+
+static const struct dcn10_stream_encoder_shift se_shift = {
+		SE_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn10_stream_encoder_mask se_mask = {
+		SE_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+
+#define aux_regs(id)\
+[id] = {\
+	DCN2_AUX_REG_LIST(id)\
+}
+
+static const struct dcn10_link_enc_aux_registers link_enc_aux_regs[] = {
+		aux_regs(0),
+		aux_regs(1),
+		aux_regs(2),
+		aux_regs(3),
+		aux_regs(4)
+};
+
+#define hpd_regs(id)\
+[id] = {\
+	HPD_REG_LIST(id)\
+}
+
+static const struct dcn10_link_enc_hpd_registers link_enc_hpd_regs[] = {
+		hpd_regs(0),
+		hpd_regs(1),
+		hpd_regs(2),
+		hpd_regs(3),
+		hpd_regs(4)
+};
+
+#define link_regs(id, phyid)\
+[id] = {\
+	LE_DCN31_REG_LIST(id), \
+	UNIPHY_DCN2_REG_LIST(phyid), \
+	/*DPCS_DCN31_REG_LIST(id),*/ \
+}
+
+static const struct dcn10_link_enc_registers link_enc_regs[] = {
+	link_regs(0, A),
+	link_regs(1, B),
+	link_regs(2, C),
+	link_regs(3, D),
+	link_regs(4, E)
+};
+
+static const struct dcn10_link_enc_shift le_shift = {
+	LINK_ENCODER_MASK_SH_LIST_DCN31(__SHIFT), \
+//	DPCS_DCN31_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn10_link_enc_mask le_mask = {
+	LINK_ENCODER_MASK_SH_LIST_DCN31(_MASK), \
+//	DPCS_DCN31_MASK_SH_LIST(_MASK)
+};
+
+#define hpo_dp_stream_encoder_reg_list(id)\
+[id] = {\
+	DCN3_1_HPO_DP_STREAM_ENC_REG_LIST(id)\
+}
+
+static const struct dcn31_hpo_dp_stream_encoder_registers hpo_dp_stream_enc_regs[] = {
+	hpo_dp_stream_encoder_reg_list(0),
+	hpo_dp_stream_encoder_reg_list(1),
+	hpo_dp_stream_encoder_reg_list(2),
+	hpo_dp_stream_encoder_reg_list(3),
+};
+
+static const struct dcn31_hpo_dp_stream_encoder_shift hpo_dp_se_shift = {
+	DCN3_1_HPO_DP_STREAM_ENC_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn31_hpo_dp_stream_encoder_mask hpo_dp_se_mask = {
+	DCN3_1_HPO_DP_STREAM_ENC_MASK_SH_LIST(_MASK)
+};
+
+
+#define hpo_dp_link_encoder_reg_list(id)\
+[id] = {\
+	DCN3_1_HPO_DP_LINK_ENC_REG_LIST(id),\
+	/*DCN3_1_RDPCSTX_REG_LIST(0),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(1),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(2),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(3),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(4)*/\
+}
+
+static const struct dcn31_hpo_dp_link_encoder_registers hpo_dp_link_enc_regs[] = {
+	hpo_dp_link_encoder_reg_list(0),
+	hpo_dp_link_encoder_reg_list(1),
+};
+
+static const struct dcn31_hpo_dp_link_encoder_shift hpo_dp_le_shift = {
+	DCN3_2_HPO_DP_LINK_ENC_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn31_hpo_dp_link_encoder_mask hpo_dp_le_mask = {
+	DCN3_2_HPO_DP_LINK_ENC_MASK_SH_LIST(_MASK)
+};
+
+#define dpp_regs(id)\
+[id] = {\
+	DPP_REG_LIST_DCN30_COMMON(id),\
+}
+
+static const struct dcn3_dpp_registers dpp_regs[] = {
+	dpp_regs(0),
+	dpp_regs(1),
+	dpp_regs(2),
+	dpp_regs(3)
+};
+
+static const struct dcn3_dpp_shift tf_shift = {
+		DPP_REG_LIST_SH_MASK_DCN30_COMMON(__SHIFT)
+};
+
+static const struct dcn3_dpp_mask tf_mask = {
+		DPP_REG_LIST_SH_MASK_DCN30_COMMON(_MASK)
+};
+
+
+#define opp_regs(id)\
+[id] = {\
+	OPP_REG_LIST_DCN30(id),\
+}
+
+static const struct dcn20_opp_registers opp_regs[] = {
+	opp_regs(0),
+	opp_regs(1),
+	opp_regs(2),
+	opp_regs(3)
+};
+
+static const struct dcn20_opp_shift opp_shift = {
+	OPP_MASK_SH_LIST_DCN20(__SHIFT)
+};
+
+static const struct dcn20_opp_mask opp_mask = {
+	OPP_MASK_SH_LIST_DCN20(_MASK)
+};
+
+#define aux_engine_regs(id)\
+[id] = {\
+	AUX_COMMON_REG_LIST0(id), \
+	.AUXN_IMPCAL = 0, \
+	.AUXP_IMPCAL = 0, \
+	.AUX_RESET_MASK = DP_AUX0_AUX_CONTROL__AUX_RESET_MASK, \
+}
+
+static const struct dce110_aux_registers aux_engine_regs[] = {
+		aux_engine_regs(0),
+		aux_engine_regs(1),
+		aux_engine_regs(2),
+		aux_engine_regs(3),
+		aux_engine_regs(4)
+};
+
+static const struct dce110_aux_registers_shift aux_shift = {
+	DCN_AUX_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce110_aux_registers_mask aux_mask = {
+	DCN_AUX_MASK_SH_LIST(_MASK)
+};
+
+
+#define dwbc_regs_dcn3(id)\
+[id] = {\
+	DWBC_COMMON_REG_LIST_DCN30(id),\
+}
+
+static const struct dcn30_dwbc_registers dwbc30_regs[] = {
+	dwbc_regs_dcn3(0),
+};
+
+static const struct dcn30_dwbc_shift dwbc30_shift = {
+	DWBC_COMMON_MASK_SH_LIST_DCN30(__SHIFT)
+};
+
+static const struct dcn30_dwbc_mask dwbc30_mask = {
+	DWBC_COMMON_MASK_SH_LIST_DCN30(_MASK)
+};
+
+#define mcif_wb_regs_dcn3(id)\
+[id] = {\
+	MCIF_WB_COMMON_REG_LIST_DCN32(id),\
+}
+
+static const struct dcn30_mmhubbub_registers mcif_wb30_regs[] = {
+	mcif_wb_regs_dcn3(0)
+};
+
+static const struct dcn30_mmhubbub_shift mcif_wb30_shift = {
+	MCIF_WB_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn30_mmhubbub_mask mcif_wb30_mask = {
+	MCIF_WB_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+#define dsc_regsDCN20(id)\
+[id] = {\
+	DSC_REG_LIST_DCN20(id)\
+}
+
+static const struct dcn20_dsc_registers dsc_regs[] = {
+	dsc_regsDCN20(0),
+	dsc_regsDCN20(1),
+	dsc_regsDCN20(2),
+	dsc_regsDCN20(3)
+};
+
+static const struct dcn20_dsc_shift dsc_shift = {
+	DSC_REG_LIST_SH_MASK_DCN20(__SHIFT)
+};
+
+static const struct dcn20_dsc_mask dsc_mask = {
+	DSC_REG_LIST_SH_MASK_DCN20(_MASK)
+};
+
+static const struct dcn30_mpc_registers mpc_regs = {
+		MPC_REG_LIST_DCN3_0(0),
+		MPC_REG_LIST_DCN3_0(1),
+		MPC_REG_LIST_DCN3_0(2),
+		MPC_REG_LIST_DCN3_0(3),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(0),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(1),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(2),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(3),
+		MPC_MCM_REG_LIST_DCN32(0),
+		MPC_MCM_REG_LIST_DCN32(1),
+		MPC_MCM_REG_LIST_DCN32(2),
+		MPC_MCM_REG_LIST_DCN32(3),
+		MPC_DWB_MUX_REG_LIST_DCN3_0(0),
+};
+
+static const struct dcn30_mpc_shift mpc_shift = {
+	MPC_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn30_mpc_mask mpc_mask = {
+	MPC_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+#define optc_regs(id)\
+[id] = {OPTC_COMMON_REG_LIST_DCN3_2(id)}
+
+static const struct dcn_optc_registers optc_regs[] = {
+	optc_regs(0),
+	optc_regs(1),
+	optc_regs(2),
+	optc_regs(3)
+};
+
+static const struct dcn_optc_shift optc_shift = {
+	OPTC_COMMON_MASK_SH_LIST_DCN3_2(__SHIFT)
+};
+
+static const struct dcn_optc_mask optc_mask = {
+	OPTC_COMMON_MASK_SH_LIST_DCN3_2(_MASK)
+};
+
+#define hubp_regs(id)\
+[id] = {\
+	HUBP_REG_LIST_DCN32(id)\
+}
+
+static const struct dcn_hubp2_registers hubp_regs[] = {
+		hubp_regs(0),
+		hubp_regs(1),
+		hubp_regs(2),
+		hubp_regs(3)
+};
+
+
+static const struct dcn_hubp2_shift hubp_shift = {
+		HUBP_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn_hubp2_mask hubp_mask = {
+		HUBP_MASK_SH_LIST_DCN32(_MASK)
+};
+static const struct dcn_hubbub_registers hubbub_reg = {
+		HUBBUB_REG_LIST_DCN32(0)
+};
+
+static const struct dcn_hubbub_shift hubbub_shift = {
+		HUBBUB_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn_hubbub_mask hubbub_mask = {
+		HUBBUB_MASK_SH_LIST_DCN32(_MASK)
+};
+
+static const struct dccg_registers dccg_regs = {
+		DCCG_REG_LIST_DCN32()
+};
+
+static const struct dccg_shift dccg_shift = {
+		DCCG_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dccg_mask dccg_mask = {
+		DCCG_MASK_SH_LIST_DCN32(_MASK)
+};
+
+
+#define SRII2(reg_name_pre, reg_name_post, id)\
+	.reg_name_pre ## _ ##  reg_name_post[id] = BASE(reg ## reg_name_pre \
+			## id ## _ ## reg_name_post ## _BASE_IDX) + \
+			reg ## reg_name_pre ## id ## _ ## reg_name_post
+
+
+#define HWSEQ_DCN32_REG_LIST()\
+	SR(DCHUBBUB_GLOBAL_TIMER_CNTL), \
+	SR(DIO_MEM_PWR_CTRL), \
+	SR(ODM_MEM_PWR_CTRL3), \
+	SR(MMHUBBUB_MEM_PWR_CNTL), \
+	SR(DCCG_GATE_DISABLE_CNTL), \
+	SR(DCCG_GATE_DISABLE_CNTL2), \
+	SR(DCFCLK_CNTL),\
+	SR(DC_MEM_GLOBAL_PWR_REQ_CNTL), \
+	SRII(PIXEL_RATE_CNTL, OTG, 0), \
+	SRII(PIXEL_RATE_CNTL, OTG, 1),\
+	SRII(PIXEL_RATE_CNTL, OTG, 2),\
+	SRII(PIXEL_RATE_CNTL, OTG, 3),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 0),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 1),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 2),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 3),\
+	SR(MICROSECOND_TIME_BASE_DIV), \
+	SR(MILLISECOND_TIME_BASE_DIV), \
+	SR(DISPCLK_FREQ_CHANGE_CNTL), \
+	SR(RBBMIF_TIMEOUT_DIS), \
+	SR(RBBMIF_TIMEOUT_DIS_2), \
+	SR(DCHUBBUB_CRC_CTRL), \
+	SR(DPP_TOP0_DPP_CRC_CTRL), \
+	SR(DPP_TOP0_DPP_CRC_VAL_B_A), \
+	SR(DPP_TOP0_DPP_CRC_VAL_R_G), \
+	SR(MPC_CRC_CTRL), \
+	SR(MPC_CRC_RESULT_GB), \
+	SR(MPC_CRC_RESULT_C), \
+	SR(MPC_CRC_RESULT_AR), \
+	SR(DOMAIN0_PG_CONFIG), \
+	SR(DOMAIN1_PG_CONFIG), \
+	SR(DOMAIN2_PG_CONFIG), \
+	SR(DOMAIN3_PG_CONFIG), \
+	SR(DOMAIN16_PG_CONFIG), \
+	SR(DOMAIN17_PG_CONFIG), \
+	SR(DOMAIN18_PG_CONFIG), \
+	SR(DOMAIN19_PG_CONFIG), \
+	SR(DOMAIN0_PG_STATUS), \
+	SR(DOMAIN1_PG_STATUS), \
+	SR(DOMAIN2_PG_STATUS), \
+	SR(DOMAIN3_PG_STATUS), \
+	SR(DOMAIN16_PG_STATUS), \
+	SR(DOMAIN17_PG_STATUS), \
+	SR(DOMAIN18_PG_STATUS), \
+	SR(DOMAIN19_PG_STATUS), \
+	SR(D1VGA_CONTROL), \
+	SR(D2VGA_CONTROL), \
+	SR(D3VGA_CONTROL), \
+	SR(D4VGA_CONTROL), \
+	SR(D5VGA_CONTROL), \
+	SR(D6VGA_CONTROL), \
+	SR(DC_IP_REQUEST_CNTL), \
+	SR(AZALIA_AUDIO_DTO), \
+	SR(AZALIA_CONTROLLER_CLOCK_GATING)
+
+static const struct dce_hwseq_registers hwseq_reg = {
+		HWSEQ_DCN32_REG_LIST()
+};
+
+#define HWSEQ_DCN32_MASK_SH_LIST(mask_sh)\
+	HWSEQ_DCN_MASK_SH_LIST(mask_sh), \
+	HWS_SF(, DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_REFDIV, mask_sh), \
+	HWS_SF(, DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN0_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN1_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN2_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN3_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN16_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN17_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN18_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN19_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN16_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN17_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN18_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN19_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DC_IP_REQUEST_CNTL, IP_REQUEST_EN, mask_sh), \
+	HWS_SF(, AZALIA_AUDIO_DTO, AZALIA_AUDIO_DTO_MODULE, mask_sh), \
+	HWS_SF(, HPO_TOP_CLOCK_CONTROL, HPO_HDMISTREAMCLK_G_GATE_DIS, mask_sh), \
+	HWS_SF(, ODM_MEM_PWR_CTRL3, ODM_MEM_UNASSIGNED_PWR_MODE, mask_sh), \
+	HWS_SF(, ODM_MEM_PWR_CTRL3, ODM_MEM_VBLANK_PWR_MODE, mask_sh), \
+	HWS_SF(, MMHUBBUB_MEM_PWR_CNTL, VGA_MEM_PWR_FORCE, mask_sh)
+
+static const struct dce_hwseq_shift hwseq_shift = {
+		HWSEQ_DCN32_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_hwseq_mask hwseq_mask = {
+		HWSEQ_DCN32_MASK_SH_LIST(_MASK)
+};
+#define vmid_regs(id)\
+[id] = {\
+		DCN20_VMID_REG_LIST(id)\
+}
+
+static const struct dcn_vmid_registers vmid_regs[] = {
+	vmid_regs(0),
+	vmid_regs(1),
+	vmid_regs(2),
+	vmid_regs(3),
+	vmid_regs(4),
+	vmid_regs(5),
+	vmid_regs(6),
+	vmid_regs(7),
+	vmid_regs(8),
+	vmid_regs(9),
+	vmid_regs(10),
+	vmid_regs(11),
+	vmid_regs(12),
+	vmid_regs(13),
+	vmid_regs(14),
+	vmid_regs(15)
+};
+
+static const struct dcn20_vmid_shift vmid_shifts = {
+		DCN20_VMID_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn20_vmid_mask vmid_masks = {
+		DCN20_VMID_MASK_SH_LIST(_MASK)
+};
+
+static const struct resource_caps res_cap_dcn321 = {
+	.num_timing_generator = 4,
+	.num_opp = 4,
+	.num_video_plane = 4,
+	.num_audio = 5,
+	.num_stream_encoder = 5,
+	.num_hpo_dp_stream_encoder = 4,
+	.num_hpo_dp_link_encoder = 2,
+	.num_pll = 5,
+	.num_dwb = 1,
+	.num_ddc = 5,
+	.num_vmid = 16,
+	.num_mpc_3dlut = 4,
+	.num_dsc = 4,
+};
+
+static const struct dc_plane_cap plane_cap = {
+	.type = DC_PLANE_TYPE_DCN_UNIVERSAL,
+	.blends_with_above = true,
+	.blends_with_below = true,
+	.per_pixel_alpha = true,
+
+	.pixel_format_support = {
+			.argb8888 = true,
+			.nv12 = true,
+			.fp16 = true,
+			.p010 = true,
+			.ayuv = false,
+	},
+
+	.max_upscale_factor = {
+			.argb8888 = 16000,
+			.nv12 = 16000,
+			.fp16 = 16000
+	},
+
+	// 6:1 downscaling ratio: 1000/6 = 166.666
+	.max_downscale_factor = {
+			.argb8888 = 167,
+			.nv12 = 167,
+			.fp16 = 167
+	},
+	64,
+	64
+};
+
+static const struct dc_debug_options debug_defaults_drv = {
+	.disable_dmcu = true,
+	.force_abm_enable = false,
+	.timing_trace = false,
+	.clock_trace = true,
+	.disable_pplib_clock_request = false,
+	.pipe_split_policy = MPC_SPLIT_DYNAMIC,
+	.force_single_disp_pipe_split = false,
+	.disable_dcc = DCC_ENABLE,
+	.vsr_support = true,
+	.performance_trace = false,
+	.max_downscale_src_width = 7680, /* up to 8K */
+	.disable_pplib_wm_range = false,
+	.scl_reset_length10 = true,
+	.sanity_checks = false,
+	.underflow_assert_delay_us = 0xFFFFFFFF,
+	.dwb_fi_phase = -1, // -1 = disable
+	.dmub_command_table = true,
+	.enable_mem_low_power = {
+		.bits = {
+			.vga = false,
+			.i2c = false,
+			.dmcu = false, // This is previously known to cause hang on S3 cycles if enabled
+			.dscl = false,
+			.cm = false,
+			.mpc = false,
+			.optc = false,
+		}
+	},
+	.use_max_lb = true,
+	.force_disable_subvp = true
+};
+
+static const struct dc_debug_options debug_defaults_diags = {
+	.disable_dmcu = true,
+	.force_abm_enable = false,
+	.timing_trace = true,
+	.clock_trace = true,
+	.disable_dpp_power_gate = true,
+	.disable_hubp_power_gate = true,
+	.disable_dsc_power_gate = true,
+	.disable_clock_gate = true,
+	.disable_pplib_clock_request = true,
+	.disable_pplib_wm_range = true,
+	.disable_stutter = false,
+	.scl_reset_length10 = true,
+	.dwb_fi_phase = -1, // -1 = disable
+	.dmub_command_table = true,
+	.enable_tri_buf = true,
+	.use_max_lb = true,
+	.force_disable_subvp = true
+};
+
+
+static struct dce_aux *dcn321_aux_engine_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct aux_engine_dce110 *aux_engine =
+		kzalloc(sizeof(struct aux_engine_dce110), GFP_KERNEL);
+
+	if (!aux_engine)
+		return NULL;
+
+	dce110_aux_engine_construct(aux_engine, ctx, inst,
+				    SW_AUX_TIMEOUT_PERIOD_MULTIPLIER * AUX_TIMEOUT_PERIOD,
+				    &aux_engine_regs[inst],
+					&aux_mask,
+					&aux_shift,
+					ctx->dc->caps.extended_aux_timeout_support);
+
+	return &aux_engine->base;
+}
+#define i2c_inst_regs(id) { I2C_HW_ENGINE_COMMON_REG_LIST_DCN30(id) }
+
+static const struct dce_i2c_registers i2c_hw_regs[] = {
+		i2c_inst_regs(1),
+		i2c_inst_regs(2),
+		i2c_inst_regs(3),
+		i2c_inst_regs(4),
+		i2c_inst_regs(5),
+};
+
+static const struct dce_i2c_shift i2c_shifts = {
+		I2C_COMMON_MASK_SH_LIST_DCN30(__SHIFT)
+};
+
+static const struct dce_i2c_mask i2c_masks = {
+		I2C_COMMON_MASK_SH_LIST_DCN30(_MASK)
+};
+
+static struct dce_i2c_hw *dcn321_i2c_hw_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dce_i2c_hw *dce_i2c_hw =
+		kzalloc(sizeof(struct dce_i2c_hw), GFP_KERNEL);
+
+	if (!dce_i2c_hw)
+		return NULL;
+
+	dcn2_i2c_hw_construct(dce_i2c_hw, ctx, inst,
+				    &i2c_hw_regs[inst], &i2c_shifts, &i2c_masks);
+
+	return dce_i2c_hw;
+}
+
+static struct clock_source *dcn321_clock_source_create(
+		struct dc_context *ctx,
+		struct dc_bios *bios,
+		enum clock_source_id id,
+		const struct dce110_clk_src_regs *regs,
+		bool dp_clk_src)
+{
+	struct dce110_clk_src *clk_src =
+		kzalloc(sizeof(struct dce110_clk_src), GFP_KERNEL);
+
+	if (!clk_src)
+		return NULL;
+
+	if (dcn3_clk_src_construct(clk_src, ctx, bios, id,
+			regs, &cs_shift, &cs_mask)) {
+		clk_src->base.dp_clk_src = dp_clk_src;
+		return &clk_src->base;
+	}
+
+	BREAK_TO_DEBUGGER();
+	return NULL;
+}
+
+static struct hubbub *dcn321_hubbub_create(struct dc_context *ctx)
+{
+	int i;
+
+	struct dcn20_hubbub *hubbub2 = kzalloc(sizeof(struct dcn20_hubbub),
+					  GFP_KERNEL);
+
+	if (!hubbub2)
+		return NULL;
+
+	hubbub32_construct(hubbub2, ctx,
+			&hubbub_reg,
+			&hubbub_shift,
+			&hubbub_mask,
+			ctx->dc->dml.ip.det_buffer_size_kbytes,
+			ctx->dc->dml.ip.pixel_chunk_size_kbytes,
+			ctx->dc->dml.ip.config_return_buffer_size_in_kbytes);
+
+
+	for (i = 0; i < res_cap_dcn321.num_vmid; i++) {
+		struct dcn20_vmid *vmid = &hubbub2->vmid[i];
+
+		vmid->ctx = ctx;
+
+		vmid->regs = &vmid_regs[i];
+		vmid->shifts = &vmid_shifts;
+		vmid->masks = &vmid_masks;
+	}
+
+	return &hubbub2->base;
+}
+
+static struct hubp *dcn321_hubp_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn20_hubp *hubp2 =
+		kzalloc(sizeof(struct dcn20_hubp), GFP_KERNEL);
+
+	if (!hubp2)
+		return NULL;
+
+	if (hubp32_construct(hubp2, ctx, inst,
+			&hubp_regs[inst], &hubp_shift, &hubp_mask))
+		return &hubp2->base;
+
+	BREAK_TO_DEBUGGER();
+	kfree(hubp2);
+	return NULL;
+}
+
+static void dcn321_dpp_destroy(struct dpp **dpp)
+{
+	kfree(TO_DCN30_DPP(*dpp));
+	*dpp = NULL;
+}
+
+static struct dpp *dcn321_dpp_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn3_dpp *dpp3 =
+		kzalloc(sizeof(struct dcn3_dpp), GFP_KERNEL);
+
+	if (!dpp3)
+		return NULL;
+
+	if (dpp32_construct(dpp3, ctx, inst,
+			&dpp_regs[inst], &tf_shift, &tf_mask))
+		return &dpp3->base;
+
+	BREAK_TO_DEBUGGER();
+	kfree(dpp3);
+	return NULL;
+}
+
+static struct mpc *dcn321_mpc_create(
+		struct dc_context *ctx,
+		int num_mpcc,
+		int num_rmu)
+{
+	struct dcn30_mpc *mpc30 = kzalloc(sizeof(struct dcn30_mpc),
+					  GFP_KERNEL);
+
+	if (!mpc30)
+		return NULL;
+
+	dcn32_mpc_construct(mpc30, ctx,
+			&mpc_regs,
+			&mpc_shift,
+			&mpc_mask,
+			num_mpcc,
+			num_rmu);
+
+	return &mpc30->base;
+}
+
+static struct output_pixel_processor *dcn321_opp_create(
+	struct dc_context *ctx, uint32_t inst)
+{
+	struct dcn20_opp *opp2 =
+		kzalloc(sizeof(struct dcn20_opp), GFP_KERNEL);
+
+	if (!opp2) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	}
+
+	dcn20_opp_construct(opp2, ctx, inst,
+			&opp_regs[inst], &opp_shift, &opp_mask);
+	return &opp2->base;
+}
+
+
+static struct timing_generator *dcn321_timing_generator_create(
+		struct dc_context *ctx,
+		uint32_t instance)
+{
+	struct optc *tgn10 =
+		kzalloc(sizeof(struct optc), GFP_KERNEL);
+
+	if (!tgn10)
+		return NULL;
+
+	tgn10->base.inst = instance;
+	tgn10->base.ctx = ctx;
+
+	tgn10->tg_regs = &optc_regs[instance];
+	tgn10->tg_shift = &optc_shift;
+	tgn10->tg_mask = &optc_mask;
+
+	dcn32_timing_generator_init(tgn10);
+
+	return &tgn10->base;
+}
+
+static const struct encoder_feature_support link_enc_feature = {
+		.max_hdmi_deep_color = COLOR_DEPTH_121212,
+		.max_hdmi_pixel_clock = 600000,
+		.hdmi_ycbcr420_supported = true,
+		.dp_ycbcr420_supported = true,
+		.fec_supported = true,
+		.flags.bits.IS_HBR2_CAPABLE = true,
+		.flags.bits.IS_HBR3_CAPABLE = true,
+		.flags.bits.IS_TPS3_CAPABLE = true,
+		.flags.bits.IS_TPS4_CAPABLE = true
+};
+
+static struct link_encoder *dcn321_link_encoder_create(
+	const struct encoder_init_data *enc_init_data)
+{
+	struct dcn20_link_encoder *enc20 =
+		kzalloc(sizeof(struct dcn20_link_encoder), GFP_KERNEL);
+
+	if (!enc20)
+		return NULL;
+
+	dcn32_link_encoder_construct(enc20,
+			enc_init_data,
+			&link_enc_feature,
+			&link_enc_regs[enc_init_data->transmitter],
+			&link_enc_aux_regs[enc_init_data->channel - 1],
+			&link_enc_hpd_regs[enc_init_data->hpd_source],
+			&le_shift,
+			&le_mask);
+
+	return &enc20->enc10.base;
+}
+
+static void read_dce_straps(
+	struct dc_context *ctx,
+	struct resource_straps *straps)
+{
+	generic_reg_get(ctx, regDC_PINSTRAPS + BASE(regDC_PINSTRAPS_BASE_IDX),
+		FN(DC_PINSTRAPS, DC_PINSTRAPS_AUDIO), &straps->dc_pinstraps_audio);
+
+}
+
+static struct audio *dcn321_create_audio(
+		struct dc_context *ctx, unsigned int inst)
+{
+	return dce_audio_create(ctx, inst,
+			&audio_regs[inst], &audio_shift, &audio_mask);
+}
+
+static struct vpg *dcn321_vpg_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn30_vpg *vpg3 = kzalloc(sizeof(struct dcn30_vpg), GFP_KERNEL);
+
+	if (!vpg3)
+		return NULL;
+
+	vpg3_construct(vpg3, ctx, inst,
+			&vpg_regs[inst],
+			&vpg_shift,
+			&vpg_mask);
+
+	return &vpg3->base;
+}
+
+static struct afmt *dcn321_afmt_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn30_afmt *afmt3 = kzalloc(sizeof(struct dcn30_afmt), GFP_KERNEL);
+
+	if (!afmt3)
+		return NULL;
+
+	afmt3_construct(afmt3, ctx, inst,
+			&afmt_regs[inst],
+			&afmt_shift,
+			&afmt_mask);
+
+	return &afmt3->base;
+}
+
+static struct apg *dcn321_apg_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn31_apg *apg31 = kzalloc(sizeof(struct dcn31_apg), GFP_KERNEL);
+
+	if (!apg31)
+		return NULL;
+
+	apg31_construct(apg31, ctx, inst,
+			&apg_regs[inst],
+			&apg_shift,
+			&apg_mask);
+
+	return &apg31->base;
+}
+
+static struct stream_encoder *dcn321_stream_encoder_create(
+	enum engine_id eng_id,
+	struct dc_context *ctx)
+{
+	struct dcn10_stream_encoder *enc1;
+	struct vpg *vpg;
+	struct afmt *afmt;
+	int vpg_inst;
+	int afmt_inst;
+
+	/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
+	if (eng_id <= ENGINE_ID_DIGF) {
+		vpg_inst = eng_id;
+		afmt_inst = eng_id;
+	} else
+		return NULL;
+
+	enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
+	vpg = dcn321_vpg_create(ctx, vpg_inst);
+	afmt = dcn321_afmt_create(ctx, afmt_inst);
+
+	if (!enc1 || !vpg || !afmt) {
+		kfree(enc1);
+		kfree(vpg);
+		kfree(afmt);
+		return NULL;
+	}
+
+	dcn32_dio_stream_encoder_construct(enc1, ctx, ctx->dc_bios,
+					eng_id, vpg, afmt,
+					&stream_enc_regs[eng_id],
+					&se_shift, &se_mask);
+
+	return &enc1->base;
+}
+
+static struct hpo_dp_stream_encoder *dcn321_hpo_dp_stream_encoder_create(
+	enum engine_id eng_id,
+	struct dc_context *ctx)
+{
+	struct dcn31_hpo_dp_stream_encoder *hpo_dp_enc31;
+	struct vpg *vpg;
+	struct apg *apg;
+	uint32_t hpo_dp_inst;
+	uint32_t vpg_inst;
+	uint32_t apg_inst;
+
+	ASSERT((eng_id >= ENGINE_ID_HPO_DP_0) && (eng_id <= ENGINE_ID_HPO_DP_3));
+	hpo_dp_inst = eng_id - ENGINE_ID_HPO_DP_0;
+
+	/* Mapping of VPG register blocks to HPO DP block instance:
+	 * VPG[6] -> HPO_DP[0]
+	 * VPG[7] -> HPO_DP[1]
+	 * VPG[8] -> HPO_DP[2]
+	 * VPG[9] -> HPO_DP[3]
+	 */
+	vpg_inst = hpo_dp_inst + 6;
+
+	/* Mapping of APG register blocks to HPO DP block instance:
+	 * APG[0] -> HPO_DP[0]
+	 * APG[1] -> HPO_DP[1]
+	 * APG[2] -> HPO_DP[2]
+	 * APG[3] -> HPO_DP[3]
+	 */
+	apg_inst = hpo_dp_inst;
+
+	/* allocate HPO stream encoder and create VPG sub-block */
+	hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_stream_encoder), GFP_KERNEL);
+	vpg = dcn321_vpg_create(ctx, vpg_inst);
+	apg = dcn321_apg_create(ctx, apg_inst);
+
+	if (!hpo_dp_enc31 || !vpg || !apg) {
+		kfree(hpo_dp_enc31);
+		kfree(vpg);
+		kfree(apg);
+		return NULL;
+	}
+
+	dcn31_hpo_dp_stream_encoder_construct(hpo_dp_enc31, ctx, ctx->dc_bios,
+					hpo_dp_inst, eng_id, vpg, apg,
+					&hpo_dp_stream_enc_regs[hpo_dp_inst],
+					&hpo_dp_se_shift, &hpo_dp_se_mask);
+
+	return &hpo_dp_enc31->base;
+}
+
+static struct hpo_dp_link_encoder *dcn321_hpo_dp_link_encoder_create(
+	uint8_t inst,
+	struct dc_context *ctx)
+{
+	struct dcn31_hpo_dp_link_encoder *hpo_dp_enc31;
+
+	/* allocate HPO link encoder */
+	hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+
+	hpo_dp_link_encoder32_construct(hpo_dp_enc31, ctx, inst,
+					&hpo_dp_link_enc_regs[inst],
+					&hpo_dp_le_shift, &hpo_dp_le_mask);
+
+	return &hpo_dp_enc31->base;
+}
+
+static struct dce_hwseq *dcn321_hwseq_create(
+	struct dc_context *ctx)
+{
+	struct dce_hwseq *hws = kzalloc(sizeof(struct dce_hwseq), GFP_KERNEL);
+
+	if (hws) {
+		hws->ctx = ctx;
+		hws->regs = &hwseq_reg;
+		hws->shifts = &hwseq_shift;
+		hws->masks = &hwseq_mask;
+	}
+	return hws;
+}
+static const struct resource_create_funcs res_create_funcs = {
+	.read_dce_straps = read_dce_straps,
+	.create_audio = dcn321_create_audio,
+	.create_stream_encoder = dcn321_stream_encoder_create,
+	.create_hpo_dp_stream_encoder = dcn321_hpo_dp_stream_encoder_create,
+	.create_hpo_dp_link_encoder = dcn321_hpo_dp_link_encoder_create,
+	.create_hwseq = dcn321_hwseq_create,
+};
+
+static const struct resource_create_funcs res_create_maximus_funcs = {
+	.read_dce_straps = NULL,
+	.create_audio = NULL,
+	.create_stream_encoder = NULL,
+	.create_hpo_dp_stream_encoder = dcn321_hpo_dp_stream_encoder_create,
+	.create_hpo_dp_link_encoder = dcn321_hpo_dp_link_encoder_create,
+	.create_hwseq = dcn321_hwseq_create,
+};
+
+static void dcn321_resource_destruct(struct dcn321_resource_pool *pool)
+{
+	unsigned int i;
+
+	for (i = 0; i < pool->base.stream_enc_count; i++) {
+		if (pool->base.stream_enc[i] != NULL) {
+			if (pool->base.stream_enc[i]->vpg != NULL) {
+				kfree(DCN30_VPG_FROM_VPG(pool->base.stream_enc[i]->vpg));
+				pool->base.stream_enc[i]->vpg = NULL;
+			}
+			if (pool->base.stream_enc[i]->afmt != NULL) {
+				kfree(DCN30_AFMT_FROM_AFMT(pool->base.stream_enc[i]->afmt));
+				pool->base.stream_enc[i]->afmt = NULL;
+			}
+			kfree(DCN10STRENC_FROM_STRENC(pool->base.stream_enc[i]));
+			pool->base.stream_enc[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.hpo_dp_stream_enc_count; i++) {
+		if (pool->base.hpo_dp_stream_enc[i] != NULL) {
+			if (pool->base.hpo_dp_stream_enc[i]->vpg != NULL) {
+				kfree(DCN30_VPG_FROM_VPG(pool->base.hpo_dp_stream_enc[i]->vpg));
+				pool->base.hpo_dp_stream_enc[i]->vpg = NULL;
+			}
+			if (pool->base.hpo_dp_stream_enc[i]->apg != NULL) {
+				kfree(DCN31_APG_FROM_APG(pool->base.hpo_dp_stream_enc[i]->apg));
+				pool->base.hpo_dp_stream_enc[i]->apg = NULL;
+			}
+			kfree(DCN3_1_HPO_DP_STREAM_ENC_FROM_HPO_STREAM_ENC(pool->base.hpo_dp_stream_enc[i]));
+			pool->base.hpo_dp_stream_enc[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.hpo_dp_link_enc_count; i++) {
+		if (pool->base.hpo_dp_link_enc[i] != NULL) {
+			kfree(DCN3_1_HPO_DP_LINK_ENC_FROM_HPO_LINK_ENC(pool->base.hpo_dp_link_enc[i]));
+			pool->base.hpo_dp_link_enc[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_dsc; i++) {
+		if (pool->base.dscs[i] != NULL)
+			dcn20_dsc_destroy(&pool->base.dscs[i]);
+	}
+
+	if (pool->base.mpc != NULL) {
+		kfree(TO_DCN20_MPC(pool->base.mpc));
+		pool->base.mpc = NULL;
+	}
+	if (pool->base.hubbub != NULL) {
+		kfree(TO_DCN20_HUBBUB(pool->base.hubbub));
+		pool->base.hubbub = NULL;
+	}
+	for (i = 0; i < pool->base.pipe_count; i++) {
+		if (pool->base.dpps[i] != NULL)
+			dcn321_dpp_destroy(&pool->base.dpps[i]);
+
+		if (pool->base.ipps[i] != NULL)
+			pool->base.ipps[i]->funcs->ipp_destroy(&pool->base.ipps[i]);
+
+		if (pool->base.hubps[i] != NULL) {
+			kfree(TO_DCN20_HUBP(pool->base.hubps[i]));
+			pool->base.hubps[i] = NULL;
+		}
+
+		if (pool->base.irqs != NULL) {
+			dal_irq_service_destroy(&pool->base.irqs);
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
+		if (pool->base.engines[i] != NULL)
+			dce110_engine_destroy(&pool->base.engines[i]);
+		if (pool->base.hw_i2cs[i] != NULL) {
+			kfree(pool->base.hw_i2cs[i]);
+			pool->base.hw_i2cs[i] = NULL;
+		}
+		if (pool->base.sw_i2cs[i] != NULL) {
+			kfree(pool->base.sw_i2cs[i]);
+			pool->base.sw_i2cs[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_opp; i++) {
+		if (pool->base.opps[i] != NULL)
+			pool->base.opps[i]->funcs->opp_destroy(&pool->base.opps[i]);
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) {
+		if (pool->base.timing_generators[i] != NULL)	{
+			kfree(DCN10TG_FROM_TG(pool->base.timing_generators[i]));
+			pool->base.timing_generators[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_dwb; i++) {
+		if (pool->base.dwbc[i] != NULL) {
+			kfree(TO_DCN30_DWBC(pool->base.dwbc[i]));
+			pool->base.dwbc[i] = NULL;
+		}
+		if (pool->base.mcif_wb[i] != NULL) {
+			kfree(TO_DCN30_MMHUBBUB(pool->base.mcif_wb[i]));
+			pool->base.mcif_wb[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.audio_count; i++) {
+		if (pool->base.audios[i])
+			dce_aud_destroy(&pool->base.audios[i]);
+	}
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] != NULL) {
+			dcn20_clock_source_destroy(&pool->base.clock_sources[i]);
+			pool->base.clock_sources[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_mpc_3dlut; i++) {
+		if (pool->base.mpc_lut[i] != NULL) {
+			dc_3dlut_func_release(pool->base.mpc_lut[i]);
+			pool->base.mpc_lut[i] = NULL;
+		}
+		if (pool->base.mpc_shaper[i] != NULL) {
+			dc_transfer_func_release(pool->base.mpc_shaper[i]);
+			pool->base.mpc_shaper[i] = NULL;
+		}
+	}
+
+	if (pool->base.dp_clock_source != NULL) {
+		dcn20_clock_source_destroy(&pool->base.dp_clock_source);
+		pool->base.dp_clock_source = NULL;
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) {
+		if (pool->base.multiple_abms[i] != NULL)
+			dce_abm_destroy(&pool->base.multiple_abms[i]);
+	}
+
+	if (pool->base.psr != NULL)
+		dmub_psr_destroy(&pool->base.psr);
+
+	if (pool->base.dccg != NULL)
+		dcn_dccg_destroy(&pool->base.dccg);
+
+	if (pool->base.oem_device != NULL)
+		dal_ddc_service_destroy(&pool->base.oem_device);
+}
+
+
+static bool dcn321_dwbc_create(struct dc_context *ctx, struct resource_pool *pool)
+{
+	int i;
+	uint32_t dwb_count = pool->res_cap->num_dwb;
+
+	for (i = 0; i < dwb_count; i++) {
+		struct dcn30_dwbc *dwbc30 = kzalloc(sizeof(struct dcn30_dwbc),
+						    GFP_KERNEL);
+
+		if (!dwbc30) {
+			dm_error("DC: failed to create dwbc30!\n");
+			return false;
+		}
+
+		dcn30_dwbc_construct(dwbc30, ctx,
+				&dwbc30_regs[i],
+				&dwbc30_shift,
+				&dwbc30_mask,
+				i);
+
+		pool->dwbc[i] = &dwbc30->base;
+	}
+	return true;
+}
+
+static bool dcn321_mmhubbub_create(struct dc_context *ctx, struct resource_pool *pool)
+{
+	int i;
+	uint32_t dwb_count = pool->res_cap->num_dwb;
+
+	for (i = 0; i < dwb_count; i++) {
+		struct dcn30_mmhubbub *mcif_wb30 = kzalloc(sizeof(struct dcn30_mmhubbub),
+						    GFP_KERNEL);
+
+		if (!mcif_wb30) {
+			dm_error("DC: failed to create mcif_wb30!\n");
+			return false;
+		}
+
+		dcn32_mmhubbub_construct(mcif_wb30, ctx,
+				&mcif_wb30_regs[i],
+				&mcif_wb30_shift,
+				&mcif_wb30_mask,
+				i);
+
+		pool->mcif_wb[i] = &mcif_wb30->base;
+	}
+	return true;
+}
+
+static struct display_stream_compressor *dcn321_dsc_create(
+	struct dc_context *ctx, uint32_t inst)
+{
+	struct dcn20_dsc *dsc =
+		kzalloc(sizeof(struct dcn20_dsc), GFP_KERNEL);
+
+	if (!dsc) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	}
+
+	dsc2_construct(dsc, ctx, inst, &dsc_regs[inst], &dsc_shift, &dsc_mask);
+	return &dsc->base;
+}
+
+static void dcn321_destroy_resource_pool(struct resource_pool **pool)
+{
+	struct dcn321_resource_pool *dcn321_pool = TO_DCN321_RES_POOL(*pool);
+
+	dcn321_resource_destruct(dcn321_pool);
+	kfree(dcn321_pool);
+	*pool = NULL;
+}
+
+static struct dc_cap_funcs cap_funcs = {
+	.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
+};
+
+
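+/* For a given UCLK (in MT/s), estimate the achievable DRAM bandwidth as the
+ * lower of the DRAM-limited and SDP-limited figures, then derive the FCLK and
+ * DCFCLK needed to return that bandwidth through the fabric datapath and the
+ * DCN return bus, respectively.
+ */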
+static void dcn321_get_optimal_dcfclk_fclk_for_uclk(unsigned int uclk_mts,
+		unsigned int *optimal_dcfclk,
+		unsigned int *optimal_fclk)
+{
+	double bw_from_dram, bw_from_dram1, bw_from_dram2;
+
+	bw_from_dram1 = uclk_mts * dcn3_21_soc.num_chans *
+		dcn3_21_soc.dram_channel_width_bytes * (dcn3_21_soc.max_avg_dram_bw_use_normal_percent / 100);
+	bw_from_dram2 = uclk_mts * dcn3_21_soc.num_chans *
+		dcn3_21_soc.dram_channel_width_bytes * (dcn3_21_soc.max_avg_sdp_bw_use_normal_percent / 100);
+
+	bw_from_dram = (bw_from_dram1 < bw_from_dram2) ? bw_from_dram1 : bw_from_dram2;
+
+	if (optimal_fclk)
+		*optimal_fclk = bw_from_dram /
+		(dcn3_21_soc.fabric_datapath_to_dcn_data_return_bytes * (dcn3_21_soc.max_avg_sdp_bw_use_normal_percent / 100));
+
+	if (optimal_dcfclk)
+		*optimal_dcfclk =  bw_from_dram /
+		(dcn3_21_soc.return_bus_width_bytes * (dcn3_21_soc.max_avg_sdp_bw_use_normal_percent / 100));
+}
+
+/* dcn321_update_bw_bounding_box
+ * Overrides the initial dcn3_2 ip/soc parameters (hardcoded from the spreadsheet)
+ * with actual values for the specific dGPU SKU:
+ * - options passed in through dc->config
+ * - dentist_vco_frequency from the Clk Mgr (currently hardcoded, but may need to come from PM FW)
+ * - latency values (in ns) passed in dc->bb_overrides for debugging purposes
+ * - latencies from VBIOS (in 100 ns units), if available for the dGPU SKU
+ * - number of DRAM channels from VBIOS (which differs between dGPU SKUs of the same ASIC)
+ * - clock levels from the Clk Mgr clk_table entries as reported by PM FW
+ *   (which may differ between dGPU SKUs of the same ASIC)
+ */
+static void dcn321_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+{
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
+		/* Overrides from dc->config options */
+		dcn3_21_ip.clamp_min_dcfclk = dc->config.clamp_min_dcfclk;
+
+		/* Override from passed dc->bb_overrides if available*/
+		if ((int)(dcn3_21_soc.sr_exit_time_us * 1000) != dc->bb_overrides.sr_exit_time_ns
+				&& dc->bb_overrides.sr_exit_time_ns) {
+			dcn3_21_soc.sr_exit_time_us = dc->bb_overrides.sr_exit_time_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_21_soc.sr_enter_plus_exit_time_us * 1000)
+				!= dc->bb_overrides.sr_enter_plus_exit_time_ns
+				&& dc->bb_overrides.sr_enter_plus_exit_time_ns) {
+			dcn3_21_soc.sr_enter_plus_exit_time_us =
+				dc->bb_overrides.sr_enter_plus_exit_time_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_21_soc.urgent_latency_us * 1000) != dc->bb_overrides.urgent_latency_ns
+			&& dc->bb_overrides.urgent_latency_ns) {
+			dcn3_21_soc.urgent_latency_us = dc->bb_overrides.urgent_latency_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_21_soc.dram_clock_change_latency_us * 1000)
+				!= dc->bb_overrides.dram_clock_change_latency_ns
+				&& dc->bb_overrides.dram_clock_change_latency_ns) {
+			dcn3_21_soc.dram_clock_change_latency_us =
+				dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_21_soc.dummy_pstate_latency_us * 1000)
+				!= dc->bb_overrides.dummy_clock_change_latency_ns
+				&& dc->bb_overrides.dummy_clock_change_latency_ns) {
+			dcn3_21_soc.dummy_pstate_latency_us =
+				dc->bb_overrides.dummy_clock_change_latency_ns / 1000.0;
+		}
+
+		/* Override from VBIOS if VBIOS bb_info available */
+		if (dc->ctx->dc_bios->funcs->get_soc_bb_info) {
+			struct bp_soc_bb_info bb_info = {0};
+
+			if (dc->ctx->dc_bios->funcs->get_soc_bb_info(dc->ctx->dc_bios, &bb_info) == BP_RESULT_OK) {
+				if (bb_info.dram_clock_change_latency_100ns > 0)
+					dcn3_21_soc.dram_clock_change_latency_us = bb_info.dram_clock_change_latency_100ns * 10;
+
+				if (bb_info.dram_sr_enter_exit_latency_100ns > 0)
+					dcn3_21_soc.sr_enter_plus_exit_time_us = bb_info.dram_sr_enter_exit_latency_100ns * 10;
+
+				if (bb_info.dram_sr_exit_latency_100ns > 0)
+					dcn3_21_soc.sr_exit_time_us = bb_info.dram_sr_exit_latency_100ns * 10;
+			}
+		}
+
+		/* Override from VBIOS for num_chan */
+		if (dc->ctx->dc_bios->vram_info.num_chans)
+			dcn3_21_soc.num_chans = dc->ctx->dc_bios->vram_info.num_chans;
+
+		if (dc->ctx->dc_bios->vram_info.dram_channel_width_bytes)
+			dcn3_21_soc.dram_channel_width_bytes = dc->ctx->dc_bios->vram_info.dram_channel_width_bytes;
+
+	}
+
+	/* Override dispclk_dppclk_vco_speed_mhz from Clk Mgr */
+	dcn3_21_soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+	dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+
+	/* Override clock levels from Clk Mgr table entries as reported by PM FW */
+	if ((!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) && (bw_params->clk_table.entries[0].memclk_mhz)) {
+		unsigned int i = 0, j = 0, num_states = 0;
+
+		unsigned int dcfclk_mhz[DC__VOLTAGE_STATES] = {0};
+		unsigned int dram_speed_mts[DC__VOLTAGE_STATES] = {0};
+		unsigned int optimal_uclk_for_dcfclk_sta_targets[DC__VOLTAGE_STATES] = {0};
+		unsigned int optimal_dcfclk_for_uclk[DC__VOLTAGE_STATES] = {0};
+
+		unsigned int dcfclk_sta_targets[DC__VOLTAGE_STATES] = {615, 906, 1324, 1564};
+		unsigned int num_dcfclk_sta_targets = 4, num_uclk_states = 0;
+		unsigned int max_dcfclk_mhz = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0, max_phyclk_mhz = 0;
+
+		for (i = 0; i < MAX_NUM_DPM_LVL; i++) {
+			if (bw_params->clk_table.entries[i].dcfclk_mhz > max_dcfclk_mhz)
+				max_dcfclk_mhz = bw_params->clk_table.entries[i].dcfclk_mhz;
+			if (bw_params->clk_table.entries[i].dispclk_mhz > max_dispclk_mhz)
+				max_dispclk_mhz = bw_params->clk_table.entries[i].dispclk_mhz;
+			if (bw_params->clk_table.entries[i].dppclk_mhz > max_dppclk_mhz)
+				max_dppclk_mhz = bw_params->clk_table.entries[i].dppclk_mhz;
+			if (bw_params->clk_table.entries[i].phyclk_mhz > max_phyclk_mhz)
+				max_phyclk_mhz = bw_params->clk_table.entries[i].phyclk_mhz;
+		}
+		if (!max_dcfclk_mhz)
+			max_dcfclk_mhz = dcn3_21_soc.clock_limits[0].dcfclk_mhz;
+		if (!max_dispclk_mhz)
+			max_dispclk_mhz = dcn3_21_soc.clock_limits[0].dispclk_mhz;
+		if (!max_dppclk_mhz)
+			max_dppclk_mhz = dcn3_21_soc.clock_limits[0].dppclk_mhz;
+		if (!max_phyclk_mhz)
+			max_phyclk_mhz = dcn3_21_soc.clock_limits[0].phyclk_mhz;
+
+		if (max_dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+			// If max DCFCLK is greater than the max DCFCLK STA target, insert into the DCFCLK STA target array
+			dcfclk_sta_targets[num_dcfclk_sta_targets] = max_dcfclk_mhz;
+			num_dcfclk_sta_targets++;
+		} else if (max_dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+			// If max DCFCLK is less than the max DCFCLK STA target, cap values and remove duplicates
+			for (i = 0; i < num_dcfclk_sta_targets; i++) {
+				if (dcfclk_sta_targets[i] > max_dcfclk_mhz) {
+					dcfclk_sta_targets[i] = max_dcfclk_mhz;
+					break;
+				}
+			}
+			// Update size of array since we "removed" duplicates
+			num_dcfclk_sta_targets = i + 1;
+		}
+
+		num_uclk_states = bw_params->clk_table.num_entries;
+
+		// Calculate optimal dcfclk for each uclk
+		for (i = 0; i < num_uclk_states; i++) {
+			dcn321_get_optimal_dcfclk_fclk_for_uclk(bw_params->clk_table.entries[i].memclk_mhz * 16,
+					&optimal_dcfclk_for_uclk[i], NULL);
+			if (optimal_dcfclk_for_uclk[i] < bw_params->clk_table.entries[0].dcfclk_mhz) {
+				optimal_dcfclk_for_uclk[i] = bw_params->clk_table.entries[0].dcfclk_mhz;
+			}
+		}
+
+		// Calculate optimal uclk for each dcfclk sta target
+		for (i = 0; i < num_dcfclk_sta_targets; i++) {
+			for (j = 0; j < num_uclk_states; j++) {
+				if (dcfclk_sta_targets[i] < optimal_dcfclk_for_uclk[j]) {
+					optimal_uclk_for_dcfclk_sta_targets[i] =
+							bw_params->clk_table.entries[j].memclk_mhz * 16;
+					break;
+				}
+			}
+		}
+
+		i = 0;
+		j = 0;
+		// create the final dcfclk and uclk table
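+		// Merge the ascending DCFCLK STA targets with the UCLK-derived optimal
+		// DCFCLKs: always take the smaller DCFCLK next so the table stays
+		// monotonically increasing, capped at DC__VOLTAGE_STATES entries.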
+		while (i < num_dcfclk_sta_targets && j < num_uclk_states && num_states < DC__VOLTAGE_STATES) {
+			if (dcfclk_sta_targets[i] < optimal_dcfclk_for_uclk[j] && i < num_dcfclk_sta_targets) {
+				dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+				dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+			} else {
+				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+					dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+					dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+				} else {
+					j = num_uclk_states;
+				}
+			}
+		}
+
+		while (i < num_dcfclk_sta_targets && num_states < DC__VOLTAGE_STATES) {
+			dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+			dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+		}
+
+		while (j < num_uclk_states && num_states < DC__VOLTAGE_STATES &&
+				optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+			dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+		}
+
+		dcn3_21_soc.num_states = num_states;
+		for (i = 0; i < dcn3_21_soc.num_states; i++) {
+			dcn3_21_soc.clock_limits[i].state = i;
+			dcn3_21_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
+			dcn3_21_soc.clock_limits[i].fabricclk_mhz = dcfclk_mhz[i];
+			dcn3_21_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+
+			/* Fill all states with max values of all these clocks */
+			dcn3_21_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
+			dcn3_21_soc.clock_limits[i].dppclk_mhz  = max_dppclk_mhz;
+			dcn3_21_soc.clock_limits[i].phyclk_mhz  = max_phyclk_mhz;
+			dcn3_21_soc.clock_limits[i].dscclk_mhz  = max_dispclk_mhz / 3;
+
+			/* Populate from bw_params for DTBCLK, SOCCLK */
+			if (!bw_params->clk_table.entries[i].dtbclk_mhz && i > 0)
+				dcn3_21_soc.clock_limits[i].dtbclk_mhz  = dcn3_21_soc.clock_limits[i-1].dtbclk_mhz;
+			else
+				dcn3_21_soc.clock_limits[i].dtbclk_mhz  = bw_params->clk_table.entries[i].dtbclk_mhz;
+
+			if (!bw_params->clk_table.entries[i].socclk_mhz && i > 0)
+				dcn3_21_soc.clock_limits[i].socclk_mhz = dcn3_21_soc.clock_limits[i-1].socclk_mhz;
+			else
+				dcn3_21_soc.clock_limits[i].socclk_mhz = bw_params->clk_table.entries[i].socclk_mhz;
+
+			/* These clocks cannot come from bw_params, always fill from dcn3_21_soc[0] */
+			/* PHYCLK_D18, PHYCLK_D32 */
+			dcn3_21_soc.clock_limits[i].phyclk_d18_mhz = dcn3_21_soc.clock_limits[0].phyclk_d18_mhz;
+			dcn3_21_soc.clock_limits[i].phyclk_d32_mhz = dcn3_21_soc.clock_limits[0].phyclk_d32_mhz;
+		}
+
+		/* Re-init DML with updated bb */
+		dml_init_instance(&dc->dml, &dcn3_21_soc, &dcn3_21_ip, DML_PROJECT_DCN32);
+		if (dc->current_state)
+			dml_init_instance(&dc->current_state->bw_ctx.dml, &dcn3_21_soc, &dcn3_21_ip, DML_PROJECT_DCN32);
+	}
+}
+
+static struct resource_funcs dcn321_res_pool_funcs = {
+	.destroy = dcn321_destroy_resource_pool,
+	.link_enc_create = dcn321_link_encoder_create,
+	.link_enc_create_minimal = NULL,
+	.panel_cntl_create = dcn32_panel_cntl_create,
+	.validate_bandwidth = dcn32_validate_bandwidth,
+	.calculate_wm_and_dlg = dcn32_calculate_wm_and_dlg,
+	.populate_dml_pipes = dcn32_populate_dml_pipes_from_context,
+	.acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer,
+	.add_stream_to_ctx = dcn30_add_stream_to_ctx,
+	.add_dsc_to_stream_resource = dcn20_add_dsc_to_stream_resource,
+	.remove_stream_from_ctx = dcn20_remove_stream_from_ctx,
+	.populate_dml_writeback_from_context = dcn30_populate_dml_writeback_from_context,
+	.set_mcif_arb_params = dcn30_set_mcif_arb_params,
+	.find_first_free_match_stream_enc_for_link = dcn10_find_first_free_match_stream_enc_for_link,
+	.acquire_post_bldn_3dlut = dcn32_acquire_post_bldn_3dlut,
+	.release_post_bldn_3dlut = dcn32_release_post_bldn_3dlut,
+	.update_bw_bounding_box = dcn321_update_bw_bounding_box,
+	.patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+	.update_soc_for_wm_a = dcn30_update_soc_for_wm_a,
+	.add_phantom_pipes = dcn32_add_phantom_pipes,
+	.remove_phantom_pipes = dcn32_remove_phantom_pipes,
+};
+
+
+static bool dcn321_resource_construct(
+	uint8_t num_virtual_links,
+	struct dc *dc,
+	struct dcn321_resource_pool *pool)
+{
+	int i, j;
+	struct dc_context *ctx = dc->ctx;
+	struct irq_service_init_data init_data;
+	struct ddc_service_init_data ddc_init_data = {0};
+	uint32_t pipe_fuses = 0;
+	uint32_t num_pipes  = 4;
+
+	ctx->dc_bios->regs = &bios_regs;
+
+	pool->base.res_cap = &res_cap_dcn321;
+	/* max number of pipes for ASIC before checking for pipe fuses */
+	num_pipes  = pool->base.res_cap->num_timing_generator;
+	pipe_fuses = REG_READ(CC_DC_PIPE_DIS);
+
+	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++)
+		if (pipe_fuses & 1 << i)
+			num_pipes--;
+
+	if (pipe_fuses & 1)
+		ASSERT(0); //Unexpected - Pipe 0 should always be fully functional!
+
+	if (pipe_fuses & CC_DC_PIPE_DIS__DC_FULL_DIS_MASK)
+		ASSERT(0); //Entire DCN is harvested!
+
+	/* Within the DML lib the initial value is hard-coded; if an ASIC pipe is
+	 * fused, the value changes, so update max_num_dpp and max_num_otg for DML.
+	 */
+	dcn3_21_ip.max_num_dpp = num_pipes;
+	dcn3_21_ip.max_num_otg = num_pipes;
+
+	pool->base.funcs = &dcn321_res_pool_funcs;
+
+	/*************************************************
+	 *  Resource + asic cap hardcoding               *
+	 *************************************************/
+	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
+	pool->base.timing_generator_count = num_pipes;
+	pool->base.pipe_count = num_pipes;
+	pool->base.mpcc_count = num_pipes;
+	dc->caps.max_downscale_ratio = 600;
+	dc->caps.i2c_speed_in_khz = 100;
+	dc->caps.i2c_speed_in_khz_hdcp = 100; /*1.4 w/a applied by default*/
+	dc->caps.max_cursor_size = 256;
+	dc->caps.min_horizontal_blanking_period = 80;
+	dc->caps.dmdata_alloc_size = 2048;
+	dc->caps.mall_size_per_mem_channel = 0;
+	dc->caps.mall_size_total = 0;
+	dc->caps.cursor_cache_size = dc->caps.max_cursor_size * dc->caps.max_cursor_size * 8;
+	dc->caps.cache_line_size = 64;
+	dc->caps.cache_num_ways = 16;
+	dc->caps.max_cab_allocation_bytes = 33554432; // 32MB = 1024 * 1024 * 32
+	dc->caps.subvp_fw_processing_delay_us = 15;
+	dc->caps.subvp_prefetch_end_to_mall_start_us = 15;
+	dc->caps.subvp_pstate_allow_width_us = 20;
+
+	dc->caps.max_slave_planes = 1;
+	dc->caps.max_slave_yuv_planes = 1;
+	dc->caps.max_slave_rgb_planes = 1;
+	dc->caps.post_blend_color_processing = true;
+	dc->caps.force_dp_tps4_for_cp2520 = true;
+	dc->caps.dp_hpo = true;
+	dc->caps.edp_dsc_support = true;
+	dc->caps.extended_aux_timeout_support = true;
+	dc->caps.dmcub_support = true;
+
+	/* Color pipeline capabilities */
+	dc->caps.color.dpp.dcn_arch = 1;
+	dc->caps.color.dpp.input_lut_shared = 0;
+	dc->caps.color.dpp.icsc = 1;
+	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
+	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
+	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
+	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
+	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
+	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
+	dc->caps.color.dpp.post_csc = 1;
+	dc->caps.color.dpp.gamma_corr = 1;
+	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
+
+	dc->caps.color.dpp.hw_3d_lut = 0; //3DLUT removed from DPP
+	dc->caps.color.dpp.ogam_ram = 0;  // Blend gamma RAM also removed
+	// no OGAM ROM on DCN2 and later ASICs
+	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
+	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
+	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
+	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
+	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
+	dc->caps.color.dpp.ocsc = 0;
+
+	dc->caps.color.mpc.gamut_remap = 1;
+	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
+	dc->caps.color.mpc.ogam_ram = 1;
+	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
+	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
+	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
+	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
+	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+	dc->caps.color.mpc.ocsc = 1;
+
+	/* read VBIOS LTTPR caps */
+	{
+		if (ctx->dc_bios->funcs->get_lttpr_caps) {
+			enum bp_result bp_query_result;
+			uint8_t is_vbios_lttpr_enable = 0;
+
+			bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
+			dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
+		}
+
+		/* interop bit is implicit */
+		{
+			dc->caps.vbios_lttpr_aware = true;
+		}
+	}
+
+	if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV)
+		dc->debug = debug_defaults_drv;
+	else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {
+		dc->debug = debug_defaults_diags;
+	} else
+		dc->debug = debug_defaults_diags;
+	// Init the vm_helper
+	if (dc->vm_helper)
+		vm_helper_init(dc->vm_helper, 16);
+
+	/*************************************************
+	 *  Create resources                             *
+	 *************************************************/
+
+	/* Clock Sources for Pixel Clock*/
+	pool->base.clock_sources[DCN321_CLK_SRC_PLL0] =
+			dcn321_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL0,
+				&clk_src_regs[0], false);
+	pool->base.clock_sources[DCN321_CLK_SRC_PLL1] =
+			dcn321_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL1,
+				&clk_src_regs[1], false);
+	pool->base.clock_sources[DCN321_CLK_SRC_PLL2] =
+			dcn321_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL2,
+				&clk_src_regs[2], false);
+	pool->base.clock_sources[DCN321_CLK_SRC_PLL3] =
+			dcn321_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL3,
+				&clk_src_regs[3], false);
+	pool->base.clock_sources[DCN321_CLK_SRC_PLL4] =
+			dcn321_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL4,
+				&clk_src_regs[4], false);
+
+	pool->base.clk_src_count = DCN321_CLK_SRC_TOTAL;
+
+	/* todo: not reuse phy_pll registers */
+	pool->base.dp_clock_source =
+			dcn321_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_ID_DP_DTO,
+				&clk_src_regs[0], true);
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] == NULL) {
+			dm_error("DC: failed to create clock sources!\n");
+			BREAK_TO_DEBUGGER();
+			goto create_fail;
+		}
+	}
+
+	/* DCCG */
+	pool->base.dccg = dccg32_create(ctx, &dccg_regs, &dccg_shift, &dccg_mask);
+	if (pool->base.dccg == NULL) {
+		dm_error("DC: failed to create dccg!\n");
+		BREAK_TO_DEBUGGER();
+		goto create_fail;
+	}
+
+	/* DML */
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
+		dml_init_instance(&dc->dml, &dcn3_21_soc, &dcn3_21_ip, DML_PROJECT_DCN32);
+
+	/* IRQ Service */
+	init_data.ctx = dc->ctx;
+	pool->base.irqs = dal_irq_service_dcn32_create(&init_data);
+	if (!pool->base.irqs)
+		goto create_fail;
+
+	/* HUBBUB */
+	pool->base.hubbub = dcn321_hubbub_create(ctx);
+	if (pool->base.hubbub == NULL) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create hubbub!\n");
+		goto create_fail;
+	}
+
+	/* HUBPs, DPPs, OPPs, TGs, ABMs */
+	for (i = 0, j = 0; i < pool->base.res_cap->num_timing_generator; i++) {
+
+		/* if pipe is disabled, skip instance of HW pipe,
+		 * i.e., skip ASIC register instance
+		 */
+		if (pipe_fuses & 1 << i)
+			continue;
+
+		pool->base.hubps[j] = dcn321_hubp_create(ctx, i);
+		if (pool->base.hubps[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create hubps!\n");
+			goto create_fail;
+		}
+
+		pool->base.dpps[j] = dcn321_dpp_create(ctx, i);
+		if (pool->base.dpps[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create dpps!\n");
+			goto create_fail;
+		}
+
+		pool->base.opps[j] = dcn321_opp_create(ctx, i);
+		if (pool->base.opps[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create output pixel processor!\n");
+			goto create_fail;
+		}
+
+		pool->base.timing_generators[j] = dcn321_timing_generator_create(
+				ctx, i);
+		if (pool->base.timing_generators[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error("DC: failed to create tg!\n");
+			goto create_fail;
+		}
+
+		pool->base.multiple_abms[j] = dmub_abm_create(ctx,
+				&abm_regs[i],
+				&abm_shift,
+				&abm_mask);
+		if (pool->base.multiple_abms[j] == NULL) {
+			dm_error("DC: failed to create abm for pipe %d!\n", i);
+			BREAK_TO_DEBUGGER();
+			goto create_fail;
+		}
+
+		/* index for resource pool arrays for next valid pipe */
+		j++;
+	}
+
+	/* PSR */
+	pool->base.psr = dmub_psr_create(ctx);
+	if (pool->base.psr == NULL) {
+		dm_error("DC: failed to create psr obj!\n");
+		BREAK_TO_DEBUGGER();
+		goto create_fail;
+	}
+
+	/* MPCCs */
+	pool->base.mpc = dcn321_mpc_create(ctx,  pool->base.res_cap->num_timing_generator, pool->base.res_cap->num_mpc_3dlut);
+	if (pool->base.mpc == NULL) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create mpc!\n");
+		goto create_fail;
+	}
+
+	/* DSCs */
+	for (i = 0; i < pool->base.res_cap->num_dsc; i++) {
+		pool->base.dscs[i] = dcn321_dsc_create(ctx, i);
+		if (pool->base.dscs[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error("DC: failed to create display stream compressor %d!\n", i);
+			goto create_fail;
+		}
+	}
+
+	/* DWB */
+	if (!dcn321_dwbc_create(ctx, &pool->base)) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create dwbc!\n");
+		goto create_fail;
+	}
+
+	/* MMHUBBUB */
+	if (!dcn321_mmhubbub_create(ctx, &pool->base)) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create mcif_wb!\n");
+		goto create_fail;
+	}
+
+	/* AUX and I2C */
+	for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
+		pool->base.engines[i] = dcn321_aux_engine_create(ctx, i);
+		if (pool->base.engines[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC:failed to create aux engine!!\n");
+			goto create_fail;
+		}
+		pool->base.hw_i2cs[i] = dcn321_i2c_hw_create(ctx, i);
+		if (pool->base.hw_i2cs[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC:failed to create hw i2c!!\n");
+			goto create_fail;
+		}
+		pool->base.sw_i2cs[i] = NULL;
+	}
+
+	/* Audio, HWSeq, Stream Encoders including HPO and virtual, MPC 3D LUTs */
+	if (!resource_construct(num_virtual_links, dc, &pool->base,
+			(!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment) ?
+			&res_create_funcs : &res_create_maximus_funcs)))
+			goto create_fail;
+
+	/* HW Sequencer init functions and Plane caps */
+	dcn32_hw_sequencer_init_functions(dc);
+
+	dc->caps.max_planes =  pool->base.pipe_count;
+
+	for (i = 0; i < dc->caps.max_planes; ++i)
+		dc->caps.planes[i] = plane_cap;
+
+	dc->cap_funcs = cap_funcs;
+
+	if (dc->ctx->dc_bios->fw_info.oem_i2c_present) {
+		ddc_init_data.ctx = dc->ctx;
+		ddc_init_data.link = NULL;
+		ddc_init_data.id.id = dc->ctx->dc_bios->fw_info.oem_i2c_obj_id;
+		ddc_init_data.id.enum_id = 0;
+		ddc_init_data.id.type = OBJECT_TYPE_GENERIC;
+		pool->base.oem_device = dal_ddc_service_create(&ddc_init_data);
+	} else {
+		pool->base.oem_device = NULL;
+	}
+
+	return true;
+
+create_fail:
+
+	dcn321_resource_destruct(pool);
+
+	return false;
+}
+
+struct resource_pool *dcn321_create_resource_pool(
+		const struct dc_init_data *init_data,
+		struct dc *dc)
+{
+	struct dcn321_resource_pool *pool =
+		kzalloc(sizeof(struct dcn321_resource_pool), GFP_KERNEL);
+
+	if (!pool)
+		return NULL;
+
+	if (dcn321_resource_construct(init_data->num_virtual_links, dc, pool))
+		return &pool->base;
+
+	BREAK_TO_DEBUGGER();
+	kfree(pool);
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
index 46ce5a0ee4ec..a89d219e712c 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
@@ -125,6 +125,7 @@ struct nv_wm_range_entry {
 		double pstate_latency_us;
 		double sr_exit_time_us;
 		double sr_enter_plus_exit_time_us;
+		double fclk_change_latency_us;
 	} dml_input;
 };
 
@@ -142,6 +143,7 @@ struct clk_state_registers_and_bypass {
 	uint32_t dprefclk;
 	uint32_t dispclk;
 	uint32_t dppclk;
+	uint32_t dtbclk;
 
 	uint32_t dppclk_bypass;
 	uint32_t dcfclk_bypass;
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
index 1391c20f1852..68c2ed434d2c 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
@@ -112,9 +112,10 @@ enum dentist_divider_range {
 	CLK_SRI(CLK3_CLK_PLL_REQ, CLK3, 0), \
 	CLK_SRI(CLK3_CLK2_DFS_CNTL, CLK3, 0)
 
-// TODO:
 #define CLK_REG_LIST_DCN3()	  \
-	SR(DENTIST_DISPCLK_CNTL)
+	CLK_COMMON_REG_LIST_DCN_BASE(), \
+	CLK_SRI(CLK0_CLK_PLL_REQ,   CLK02, 0), \
+	CLK_SRI(CLK0_CLK2_DFS_CNTL, CLK02, 0)
 
 #define CLK_SF(reg_name, field_name, post_fix)\
 	.field_name = reg_name ## __ ## field_name ## post_fix
@@ -155,6 +156,34 @@ enum dentist_divider_range {
 	CLK_SF(DENTIST_DISPCLK_CNTL, DENTIST_DPPCLK_CHG_DONE, mask_sh),\
 	CLK_SF(CLK4_0_CLK4_CLK_PLL_REQ, FbMult_int, mask_sh)
 
+#define CLK_REG_LIST_DCN32()	  \
+	SR(DENTIST_DISPCLK_CNTL), \
+	CLK_SR_DCN32(CLK1_CLK_PLL_REQ), \
+	CLK_SR_DCN32(CLK1_CLK0_DFS_CNTL), \
+	CLK_SR_DCN32(CLK1_CLK1_DFS_CNTL), \
+	CLK_SR_DCN32(CLK1_CLK2_DFS_CNTL), \
+	CLK_SR_DCN32(CLK1_CLK3_DFS_CNTL), \
+	CLK_SR_DCN32(CLK1_CLK4_DFS_CNTL)
+
+#define CLK_COMMON_MASK_SH_LIST_DCN32(mask_sh) \
+	CLK_COMMON_MASK_SH_LIST_DCN20_BASE(mask_sh),\
+	CLK_SF(CLK1_CLK_PLL_REQ, FbMult_int, mask_sh),\
+	CLK_SF(CLK1_CLK_PLL_REQ, FbMult_frac, mask_sh)
+
+#define CLK_REG_LIST_DCN321()	  \
+	SR(DENTIST_DISPCLK_CNTL), \
+	CLK_SR_DCN321(CLK0_CLK_PLL_REQ,   CLK01, 0), \
+	CLK_SR_DCN321(CLK0_CLK0_DFS_CNTL, CLK01, 0), \
+	CLK_SR_DCN321(CLK0_CLK1_DFS_CNTL, CLK01, 0), \
+	CLK_SR_DCN321(CLK0_CLK2_DFS_CNTL, CLK01, 0), \
+	CLK_SR_DCN321(CLK0_CLK3_DFS_CNTL, CLK01, 0), \
+	CLK_SR_DCN321(CLK0_CLK4_DFS_CNTL, CLK01, 0)
+
+#define CLK_COMMON_MASK_SH_LIST_DCN321(mask_sh) \
+	CLK_COMMON_MASK_SH_LIST_DCN20_BASE(mask_sh),\
+	CLK_SF(CLK0_CLK_PLL_REQ, FbMult_int, mask_sh),\
+	CLK_SF(CLK0_CLK_PLL_REQ, FbMult_frac, mask_sh)
+
 #define CLK_REG_FIELD_LIST(type) \
 	type DPREFCLK_SRC_SEL; \
 	type DENTIST_DPREFCLK_WDIVIDER; \
@@ -199,6 +228,18 @@ struct clk_mgr_registers {
 	uint32_t CLK0_CLK2_DFS_CNTL;
 	uint32_t CLK0_CLK_PLL_REQ;
 
+	uint32_t CLK1_CLK_PLL_REQ;
+	uint32_t CLK1_CLK0_DFS_CNTL;
+	uint32_t CLK1_CLK1_DFS_CNTL;
+	uint32_t CLK1_CLK2_DFS_CNTL;
+	uint32_t CLK1_CLK3_DFS_CNTL;
+	uint32_t CLK1_CLK4_DFS_CNTL;
+
+	uint32_t CLK0_CLK0_DFS_CNTL;
+	uint32_t CLK0_CLK1_DFS_CNTL;
+	uint32_t CLK0_CLK3_DFS_CNTL;
+	uint32_t CLK0_CLK4_DFS_CNTL;
+
 	uint32_t MP1_SMN_C2PMSG_67;
 	uint32_t MP1_SMN_C2PMSG_83;
 	uint32_t MP1_SMN_C2PMSG_91;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 10/43] drm/amd/display: add DCN32/321 specific files for Display Core
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (7 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 09/43] drm/amd/display: add CLKMGR " Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 11/43] drm/amd/display: Add dependant changes for DCN32/321 Alex Deucher
                   ` (32 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add core DC support for DCN 3.2.x.

v2: squash in fixup (Alex)

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/Makefile |   37 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c |  299 ++
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h |  155 +
 .../display/dc/dcn32/dcn32_dio_link_encoder.c |  283 ++
 .../display/dc/dcn32/dcn32_dio_link_encoder.h |   42 +
 .../dc/dcn32/dcn32_dio_stream_encoder.c       |  427 ++
 .../dc/dcn32/dcn32_dio_stream_encoder.h       |  254 ++
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c  |  164 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h  |   38 +
 .../dc/dcn32/dcn32_hpo_dp_link_encoder.c      |   90 +
 .../dc/dcn32/dcn32_hpo_dp_link_encoder.h      |   63 +
 .../drm/amd/display/dc/dcn32/dcn32_hubbub.c   |  964 ++++
 .../drm/amd/display/dc/dcn32/dcn32_hubbub.h   |  172 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c |  148 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h |   69 +
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.c    |  891 ++++
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.h    |   64 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_init.c |  155 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_init.h |   33 +
 .../drm/amd/display/dc/dcn32/dcn32_mmhubbub.c |  239 +
 .../drm/amd/display/dc/dcn32/dcn32_mmhubbub.h |  225 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c  |  810 ++++
 .../gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h  |  213 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_optc.c |  236 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_optc.h |  253 ++
 .../drm/amd/display/dc/dcn32/dcn32_resource.c | 3976 +++++++++++++++++
 .../drm/amd/display/dc/dcn32/dcn32_resource.h |   88 +
 .../gpu/drm/amd/display/dc/dcn321/Makefile    |   34 +
 .../amd/display/dc/dcn321/dcn321_resource.c   |   32 +-
 .../amd/display/dc/dcn321/dcn321_resource.h   |   42 +
 30 files changed, 10478 insertions(+), 18 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/Makefile
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/Makefile
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.h

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/Makefile b/drivers/gpu/drm/amd/display/dc/dcn32/Makefile
new file mode 100644
index 000000000000..6e0328060255
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/Makefile
@@ -0,0 +1,37 @@
+#
+# (c) Copyright 2022 Advanced Micro Devices, Inc. All the rights reserved
+#
+#  All rights reserved.  This notice is intended as a precaution against
+#  inadvertent publication and does not imply publication or any waiver
+#  of confidentiality.  The year included in the foregoing notice is the
+#  year of creation of the work.
+#
+#  Authors: AMD
+#
+# Makefile for dcn32.
+
+DCN32 = dcn32_resource.o dcn32_hubbub.o dcn32_hwseq.o dcn32_init.o \
+		dcn32_dccg.o dcn32_optc.o dcn32_mmhubbub.o dcn32_hubp.o dcn32_dpp.o \
+		dcn32_dio_stream_encoder.o dcn32_dio_link_encoder.o dcn32_hpo_dp_link_encoder.o \
+		dcn32_mpc.o
+
+CFLAGS_$(AMDDALPATH)/dc/dcn32/dcn32_resource.o := -mhard-float -msse
+
+ifdef CONFIG_CC_IS_GCC
+ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+IS_OLD_GCC = 1
+endif
+endif
+
+ifdef IS_OLD_GCC
+# Stack alignment mismatch, proceed with caution.
+# GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
+# (8B stack alignment).
+CFLAGS_$(AMDDALPATH)/dc/dcn32/dcn32_resource.o += -mpreferred-stack-boundary=4
+else
+CFLAGS_$(AMDDALPATH)/dc/dcn32/dcn32_resource.o += -msse2
+endif
+
+AMD_DAL_DCN32 = $(addprefix $(AMDDALPATH)/dc/dcn32/,$(DCN32))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_DCN32)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
new file mode 100644
index 000000000000..12633561be3f
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
@@ -0,0 +1,299 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "reg_helper.h"
+#include "core_types.h"
+#include "dcn32_dccg.h"
+
+#define TO_DCN_DCCG(dccg)\
+	container_of(dccg, struct dcn_dccg, base)
+
+#define REG(reg) \
+	(dccg_dcn->regs->reg)
+
+#undef FN
+#define FN(reg_name, field_name) \
+	dccg_dcn->dccg_shift->field_name, dccg_dcn->dccg_mask->field_name
+
+#define CTX \
+	dccg_dcn->base.ctx
+#define DC_LOGGER \
+	dccg->ctx->logger
+
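+/* Values below match the OTGn_PIXEL_RATE_DIVK1/K2 register field encoding
+ * used in dccg32_set_pixel_rate_div(); note divide-by-4 is encoded as 3.
+ */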
+enum pixel_rate_div {
+	PIXEL_RATE_DIV_BY_1 = 0,
+	PIXEL_RATE_DIV_BY_2 = 1,
+	PIXEL_RATE_DIV_BY_4 = 3
+};
+
+static void dccg32_set_pixel_rate_div(
+		struct dccg *dccg,
+		uint32_t otg_inst,
+		enum pixel_rate_div k1,
+		enum pixel_rate_div k2)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	switch (otg_inst) {
+	case 0:
+		REG_UPDATE_2(OTG_PIXEL_RATE_DIV,
+				OTG0_PIXEL_RATE_DIVK1, k1,
+				OTG0_PIXEL_RATE_DIVK2, k2);
+		break;
+	case 1:
+		REG_UPDATE_2(OTG_PIXEL_RATE_DIV,
+				OTG1_PIXEL_RATE_DIVK1, k1,
+				OTG1_PIXEL_RATE_DIVK2, k2);
+		break;
+	case 2:
+		REG_UPDATE_2(OTG_PIXEL_RATE_DIV,
+				OTG2_PIXEL_RATE_DIVK1, k1,
+				OTG2_PIXEL_RATE_DIVK2, k2);
+		break;
+	case 3:
+		REG_UPDATE_2(OTG_PIXEL_RATE_DIV,
+				OTG3_PIXEL_RATE_DIVK1, k1,
+				OTG3_PIXEL_RATE_DIVK2, k2);
+		break;
+	default:
+		BREAK_TO_DEBUGGER();
+		return;
+	}
+}
+
+static void dccg32_set_dtbclk_p_src(
+		struct dccg *dccg,
+		enum streamclk_source src,
+		uint32_t otg_inst)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	uint32_t p_src_sel = 0; /* selects dprefclk */
+	if (src == DTBCLK0)
+		p_src_sel = 2;  /* selects dtbclk0 */
+
+	switch (otg_inst) {
+	case 0:
+		if (src == REFCLK)
+			REG_UPDATE(DTBCLK_P_CNTL,
+					DTBCLK_P0_EN, 0);
+		else
+			REG_UPDATE_2(DTBCLK_P_CNTL,
+					DTBCLK_P0_SRC_SEL, p_src_sel,
+					DTBCLK_P0_EN, 1);
+		break;
+	case 1:
+		if (src == REFCLK)
+			REG_UPDATE(DTBCLK_P_CNTL,
+					DTBCLK_P1_EN, 0);
+		else
+			REG_UPDATE_2(DTBCLK_P_CNTL,
+					DTBCLK_P1_SRC_SEL, p_src_sel,
+					DTBCLK_P1_EN, 1);
+		break;
+	case 2:
+		if (src == REFCLK)
+			REG_UPDATE(DTBCLK_P_CNTL,
+					DTBCLK_P2_EN, 0);
+		else
+			REG_UPDATE_2(DTBCLK_P_CNTL,
+					DTBCLK_P2_SRC_SEL, p_src_sel,
+					DTBCLK_P2_EN, 1);
+		break;
+	case 3:
+		if (src == REFCLK)
+			REG_UPDATE(DTBCLK_P_CNTL,
+					DTBCLK_P3_EN, 0);
+		else
+			REG_UPDATE_2(DTBCLK_P_CNTL,
+					DTBCLK_P3_SRC_SEL, p_src_sel,
+					DTBCLK_P3_EN, 1);
+		break;
+	default:
+		BREAK_TO_DEBUGGER();
+		return;
+	}
+
+}
+
+/* Controls the generation of pixel valid for OTG in (OTG -> HPO case) */
+void dccg32_set_dtbclk_dto(
+		struct dccg *dccg,
+		int otg_inst,
+		int pixclk_khz,
+		int num_odm_segments,
+		const struct dc_crtc_timing *timing)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+	/* DTO Output Rate / Pixel Rate = 1/4 */
+	int req_dtbclk_khz = pixclk_khz / 4;
+
+	if (dccg->ref_dtbclk_khz && req_dtbclk_khz) {
+		uint32_t modulo, phase;
+
+		// phase / modulo = dtbclk / dtbclk ref
+		modulo = 0xffffffff;
+		phase = (((unsigned long long)modulo * req_dtbclk_khz) + dccg->ref_dtbclk_khz - 1) / dccg->ref_dtbclk_khz;
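+		// i.e. phase = ceil(modulo * req_dtbclk / ref_dtbclk), so the DTO
+		// output (ref_dtbclk * phase / modulo) never drops below the
+		// requested rate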
+
+		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], modulo);
+		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], phase);
+
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLK_DTO_ENABLE[otg_inst], 1);
+
+		REG_WAIT(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLKDTO_ENABLE_STATUS[otg_inst], 1,
+				1, 100);
+
+		/* program OTG_PIXEL_RATE_DIV for DIVK1 and DIVK2 fields */
+		dccg32_set_pixel_rate_div(dccg, otg_inst, PIXEL_RATE_DIV_BY_1, PIXEL_RATE_DIV_BY_1);
+
+		/* The recommended programming sequence to enable the DTBCLK DTO to
+		 * generate a valid pixel clock for the HPO DP stream encoder
+		 * specifies that the DTO source select should be set only after
+		 * the DTO is enabled.
+		 */
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+				PIPE_DTO_SRC_SEL[otg_inst], 2);
+
+		dccg->dtbclk_khz[otg_inst] = req_dtbclk_khz;
+	} else {
+		REG_UPDATE_2(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLK_DTO_ENABLE[otg_inst], 0,
+				PIPE_DTO_SRC_SEL[otg_inst], 1);
+
+		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], 0);
+		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], 0);
+
+		dccg->dtbclk_khz[otg_inst] = 0;
+	}
+}
+
+static void dccg32_get_dccg_ref_freq(struct dccg *dccg,
+		unsigned int xtalin_freq_inKhz,
+		unsigned int *dccg_ref_freq_inKhz)
+{
+	/*
+	 * Assume refclk is sourced from xtalin
+	 * expect 100MHz
+	 */
+	*dccg_ref_freq_inKhz = xtalin_freq_inKhz;
+	return;
+}
+
+void dccg32_set_dpstreamclk(
+		struct dccg *dccg,
+		enum streamclk_source src,
+		int otg_inst)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	/* set the dtbclk_p source */
+	dccg32_set_dtbclk_p_src(dccg, src, otg_inst);
+
+	/* enable the DP stream clock and select one of the DTBCLKs for the pipe */
+	switch (otg_inst) {
+	case 0:
+		REG_UPDATE_2(DPSTREAMCLK_CNTL,
+			     DPSTREAMCLK0_EN,
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK0_SRC_SEL, 0);
+		break;
+	case 1:
+		REG_UPDATE_2(DPSTREAMCLK_CNTL, DPSTREAMCLK1_EN,
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK1_SRC_SEL, 1);
+		break;
+	case 2:
+		REG_UPDATE_2(DPSTREAMCLK_CNTL, DPSTREAMCLK2_EN,
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK2_SRC_SEL, 2);
+		break;
+	case 3:
+		REG_UPDATE_2(DPSTREAMCLK_CNTL, DPSTREAMCLK3_EN,
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK3_SRC_SEL, 3);
+		break;
+	default:
+		BREAK_TO_DEBUGGER();
+		return;
+	}
+}
+
+void dccg32_otg_add_pixel(struct dccg *dccg,
+		uint32_t otg_inst)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+			OTG_ADD_PIXEL[otg_inst], 1);
+}
+
+void dccg32_otg_drop_pixel(struct dccg *dccg,
+		uint32_t otg_inst)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+			OTG_DROP_PIXEL[otg_inst], 1);
+}
+
+static const struct dccg_funcs dccg32_funcs = {
+	.update_dpp_dto = dccg2_update_dpp_dto,
+	.get_dccg_ref_freq = dccg32_get_dccg_ref_freq,
+	.dccg_init = dccg31_init,
+	.set_dpstreamclk = dccg32_set_dpstreamclk,
+	.enable_symclk32_se = dccg31_enable_symclk32_se,
+	.disable_symclk32_se = dccg31_disable_symclk32_se,
+	.enable_symclk32_le = dccg31_enable_symclk32_le,
+	.disable_symclk32_le = dccg31_disable_symclk32_le,
+	.set_physymclk = dccg31_set_physymclk,
+	.set_dtbclk_dto = dccg32_set_dtbclk_dto,
+	.set_fifo_errdet_ovr_en = dccg2_set_fifo_errdet_ovr_en,
+	.set_audio_dtbclk_dto = dccg31_set_audio_dtbclk_dto,
+	.otg_add_pixel = dccg32_otg_add_pixel,
+	.otg_drop_pixel = dccg32_otg_drop_pixel,
+};
+
+struct dccg *dccg32_create(
+	struct dc_context *ctx,
+	const struct dccg_registers *regs,
+	const struct dccg_shift *dccg_shift,
+	const struct dccg_mask *dccg_mask)
+{
+	struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_KERNEL);
+	struct dccg *base;
+
+	if (dccg_dcn == NULL) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	}
+
+	base = &dccg_dcn->base;
+	base->ctx = ctx;
+	base->funcs = &dccg32_funcs;
+
+	dccg_dcn->regs = regs;
+	dccg_dcn->dccg_shift = dccg_shift;
+	dccg_dcn->dccg_mask = dccg_mask;
+
+	return &dccg_dcn->base;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
new file mode 100644
index 000000000000..0e54c0a105a1
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
@@ -0,0 +1,155 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DCN32_DCCG_H__
+#define __DCN32_DCCG_H__
+
+#include "dcn31/dcn31_dccg.h"
+
+#define DCCG_SFII(block, reg_name, field_prefix, field_name, inst, post_fix)\
+	.field_prefix ## _ ## field_name[inst] = block ## inst ## _ ## reg_name ## __ ## field_prefix ## inst ## _ ## field_name ## post_fix
+
+
+#define DCCG_REG_LIST_DCN32() \
+	SR(DPPCLK_DTO_CTRL),\
+	DCCG_SRII(DTO_PARAM, DPPCLK, 0),\
+	DCCG_SRII(DTO_PARAM, DPPCLK, 1),\
+	DCCG_SRII(DTO_PARAM, DPPCLK, 2),\
+	DCCG_SRII(DTO_PARAM, DPPCLK, 3),\
+	DCCG_SRII(CLOCK_CNTL, HDMICHARCLK, 0),\
+	SR(PHYASYMCLK_CLOCK_CNTL),\
+	SR(PHYBSYMCLK_CLOCK_CNTL),\
+	SR(PHYCSYMCLK_CLOCK_CNTL),\
+	SR(PHYDSYMCLK_CLOCK_CNTL),\
+	SR(PHYESYMCLK_CLOCK_CNTL),\
+	SR(DPSTREAMCLK_CNTL),\
+	SR(SYMCLK32_SE_CNTL),\
+	SR(SYMCLK32_LE_CNTL),\
+	DCCG_SRII(PIXEL_RATE_CNTL, OTG, 0),\
+	DCCG_SRII(PIXEL_RATE_CNTL, OTG, 1),\
+	DCCG_SRII(PIXEL_RATE_CNTL, OTG, 2),\
+	DCCG_SRII(PIXEL_RATE_CNTL, OTG, 3),\
+	DCCG_SRII(MODULO, DTBCLK_DTO, 0),\
+	DCCG_SRII(MODULO, DTBCLK_DTO, 1),\
+	DCCG_SRII(MODULO, DTBCLK_DTO, 2),\
+	DCCG_SRII(MODULO, DTBCLK_DTO, 3),\
+	DCCG_SRII(PHASE, DTBCLK_DTO, 0),\
+	DCCG_SRII(PHASE, DTBCLK_DTO, 1),\
+	DCCG_SRII(PHASE, DTBCLK_DTO, 2),\
+	DCCG_SRII(PHASE, DTBCLK_DTO, 3),\
+	SR(DCCG_AUDIO_DTBCLK_DTO_MODULO),\
+	SR(DCCG_AUDIO_DTBCLK_DTO_PHASE),\
+	SR(OTG_PIXEL_RATE_DIV),\
+	SR(DTBCLK_P_CNTL),\
+	SR(DCCG_AUDIO_DTO_SOURCE)
+
+
+#define DCCG_MASK_SH_LIST_DCN32(mask_sh) \
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_ENABLE, DPPCLK, 0, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_DB_EN, DPPCLK, 0, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_ENABLE, DPPCLK, 1, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_DB_EN, DPPCLK, 1, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_ENABLE, DPPCLK, 2, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_DB_EN, DPPCLK, 2, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_ENABLE, DPPCLK, 3, mask_sh),\
+	DCCG_SFI(DPPCLK_DTO_CTRL, DTO_DB_EN, DPPCLK, 3, mask_sh),\
+	DCCG_SF(DPPCLK0_DTO_PARAM, DPPCLK0_DTO_PHASE, mask_sh),\
+	DCCG_SF(DPPCLK0_DTO_PARAM, DPPCLK0_DTO_MODULO, mask_sh),\
+	DCCG_SF(HDMICHARCLK0_CLOCK_CNTL, HDMICHARCLK0_EN, mask_sh),\
+	DCCG_SF(HDMICHARCLK0_CLOCK_CNTL, HDMICHARCLK0_SRC_SEL, mask_sh),\
+	DCCG_SF(PHYASYMCLK_CLOCK_CNTL, PHYASYMCLK_FORCE_EN, mask_sh),\
+	DCCG_SF(PHYASYMCLK_CLOCK_CNTL, PHYASYMCLK_FORCE_SRC_SEL, mask_sh),\
+	DCCG_SF(PHYBSYMCLK_CLOCK_CNTL, PHYBSYMCLK_FORCE_EN, mask_sh),\
+	DCCG_SF(PHYBSYMCLK_CLOCK_CNTL, PHYBSYMCLK_FORCE_SRC_SEL, mask_sh),\
+	DCCG_SF(PHYCSYMCLK_CLOCK_CNTL, PHYCSYMCLK_FORCE_EN, mask_sh),\
+	DCCG_SF(PHYCSYMCLK_CLOCK_CNTL, PHYCSYMCLK_FORCE_SRC_SEL, mask_sh),\
+	DCCG_SF(PHYDSYMCLK_CLOCK_CNTL, PHYDSYMCLK_FORCE_EN, mask_sh),\
+	DCCG_SF(PHYDSYMCLK_CLOCK_CNTL, PHYDSYMCLK_FORCE_SRC_SEL, mask_sh),\
+	DCCG_SF(PHYESYMCLK_CLOCK_CNTL, PHYESYMCLK_FORCE_EN, mask_sh),\
+	DCCG_SF(PHYESYMCLK_CLOCK_CNTL, PHYESYMCLK_FORCE_SRC_SEL, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK0_EN, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK1_EN, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK2_EN, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK3_EN, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK0_SRC_SEL, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK1_SRC_SEL, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK2_SRC_SEL, mask_sh),\
+	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK3_SRC_SEL, mask_sh),\
+	DCCG_SF(HDMISTREAMCLK_CNTL, HDMISTREAMCLK0_EN, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE0_SRC_SEL, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE1_SRC_SEL, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE2_SRC_SEL, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE3_SRC_SEL, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE0_EN, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE1_EN, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE2_EN, mask_sh),\
+	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE3_EN, mask_sh),\
+	DCCG_SF(SYMCLK32_LE_CNTL, SYMCLK32_LE0_SRC_SEL, mask_sh),\
+	DCCG_SF(SYMCLK32_LE_CNTL, SYMCLK32_LE1_SRC_SEL, mask_sh),\
+	DCCG_SF(SYMCLK32_LE_CNTL, SYMCLK32_LE0_EN, mask_sh),\
+	DCCG_SF(SYMCLK32_LE_CNTL, SYMCLK32_LE1_EN, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, ENABLE, 0, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, ENABLE, 1, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, ENABLE, 2, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, ENABLE, 3, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLKDTO, ENABLE_STATUS, 0, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLKDTO, ENABLE_STATUS, 1, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLKDTO, ENABLE_STATUS, 2, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLKDTO, ENABLE_STATUS, 3, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, PIPE, DTO_SRC_SEL, 0, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, PIPE, DTO_SRC_SEL, 1, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, PIPE, DTO_SRC_SEL, 2, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, PIPE, DTO_SRC_SEL, 3, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 0, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 1, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 2, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 3, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG0_PIXEL_RATE_DIVK1, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG0_PIXEL_RATE_DIVK2, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG1_PIXEL_RATE_DIVK1, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG1_PIXEL_RATE_DIVK2, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG2_PIXEL_RATE_DIVK1, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG2_PIXEL_RATE_DIVK2, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG3_PIXEL_RATE_DIVK1, mask_sh),\
+	DCCG_SF(OTG_PIXEL_RATE_DIV, OTG3_PIXEL_RATE_DIVK2, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P0_SRC_SEL, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P0_EN, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P1_SRC_SEL, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P1_EN, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P2_SRC_SEL, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P2_EN, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P3_SRC_SEL, mask_sh),\
+	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P3_EN, mask_sh),\
+	DCCG_SF(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO0_SOURCE_SEL, mask_sh)
+
+
+struct dccg *dccg32_create(
+	struct dc_context *ctx,
+	const struct dccg_registers *regs,
+	const struct dccg_shift *dccg_shift,
+	const struct dccg_mask *dccg_mask);
+
+#endif //__DCN32_DCCG_H__
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
new file mode 100644
index 000000000000..7170a9aa82a4
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
@@ -0,0 +1,283 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "reg_helper.h"
+
+#include "core_types.h"
+#include "link_encoder.h"
+#include "dcn31/dcn31_dio_link_encoder.h"
+#include "dcn32_dio_link_encoder.h"
+#include "stream_encoder.h"
+#include "i2caux_interface.h"
+#include "dc_bios_types.h"
+
+#include "gpio_service_interface.h"
+
+#ifndef MIN
+#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
+#endif
+
+#define CTX \
+	enc10->base.ctx
+#define DC_LOGGER \
+	enc10->base.ctx->logger
+
+#define REG(reg)\
+	(enc10->link_regs->reg)
+
+#undef FN
+#define FN(reg_name, field_name) \
+	enc10->link_shift->field_name, enc10->link_mask->field_name
+
+#define AUX_REG(reg)\
+	(enc10->aux_regs->reg)
+
+#define AUX_REG_READ(reg_name) \
+		dm_read_reg(CTX, AUX_REG(reg_name))
+
+#define AUX_REG_WRITE(reg_name, val) \
+			dm_write_reg(CTX, AUX_REG(reg_name), val)
+
+
+void enc32_hw_init(struct link_encoder *enc)
+{
+	struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
+
+/*
+	00 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__1to2 : 1/2
+	01 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__3to4 : 3/4
+	02 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__7to8 : 7/8
+	03 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__15to16 : 15/16
+	04 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__31to32 : 31/32
+	05 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__63to64 : 63/64
+	06 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__127to128 : 127/128
+	07 - DP_AUX_DPHY_RX_DETECTION_THRESHOLD__255to256 : 255/256
+*/
+
+/*
+	AUX_REG_UPDATE_5(AUX_DPHY_RX_CONTROL0,
+	AUX_RX_START_WINDOW = 1 [6:4]
+	AUX_RX_RECEIVE_WINDOW = 1 default is 2 [10:8]
+	AUX_RX_HALF_SYM_DETECT_LEN  = 1 [13:12] default is 1
+	AUX_RX_TRANSITION_FILTER_EN = 1 [16] default is 1
+	AUX_RX_ALLOW_BELOW_THRESHOLD_PHASE_DETECT [17] is 0  default is 0
+	AUX_RX_ALLOW_BELOW_THRESHOLD_START [18] is 1  default is 1
+	AUX_RX_ALLOW_BELOW_THRESHOLD_STOP [19] is 1  default is 1
+	AUX_RX_PHASE_DETECT_LEN,  [21,20] = 0x3 default is 3
+	AUX_RX_DETECTION_THRESHOLD [30:28] = 1
+*/
+	AUX_REG_WRITE(AUX_DPHY_RX_CONTROL0, 0x103d1110);
+
+	AUX_REG_WRITE(AUX_DPHY_TX_CONTROL, 0x21c7a);
+
+	//AUX_DPHY_TX_REF_CONTROL'AUX_TX_REF_DIV HW default is 0x32;
+	// Set AUX_TX_REF_DIV Divider to generate 2 MHz reference from refclk
+	// 27MHz -> 0xd
+	// 100MHz -> 0x32
+	// 48MHz -> 0x18
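+	// i.e. AUX_TX_REF_DIV is roughly refclk_in_MHz / 2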
+
+	// Set TMDS_CTL0 to 1.  This is a legacy setting.
+	REG_UPDATE(TMDS_CTL_BITS, TMDS_CTL0, 1);
+
+	dcn10_aux_initialize(enc10);
+}
+
+
+void dcn32_link_encoder_enable_dp_output(
+	struct link_encoder *enc,
+	const struct dc_link_settings *link_settings,
+	enum clock_source_id clock_source)
+{
+	if (!enc->ctx->dc->debug.avoid_vbios_exec_table) {
+		dcn10_link_encoder_enable_dp_output(enc, link_settings, clock_source);
+		return;
+	}
+}
+
+bool dcn32_link_encoder_is_in_alt_mode(struct link_encoder *enc)
+{
+	struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
+	uint32_t dp_alt_mode_disable = 0;
+	bool is_usb_c_alt_mode = false;
+
+	if (enc->features.flags.bits.DP_IS_USB_C) {
+		/* if value == 1 alt mode is disabled, otherwise it is enabled */
+		//REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
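+		/* with the register read above stubbed out, dp_alt_mode_disable
+		 * stays 0, so alt mode is reported as enabled for USB-C capable links
+		 */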
+		is_usb_c_alt_mode = (dp_alt_mode_disable == 0);
+	}
+
+	return is_usb_c_alt_mode;
+}
+
+void dcn32_link_encoder_get_max_link_cap(struct link_encoder *enc,
+	struct dc_link_settings *link_settings)
+{
+	struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
+	uint32_t is_in_usb_c_dp4_mode = 0;
+
+	dcn10_link_encoder_get_max_link_cap(enc, link_settings);
+
+	/* in usb c dp2 mode, max lane count is 2 */
+	if (enc->funcs->is_in_alt_mode && enc->funcs->is_in_alt_mode(enc)) {
+//		REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
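+		/* with the register read above stubbed out, is_in_usb_c_dp4_mode
+		 * stays 0, so lane count is capped to 2 whenever alt mode is active
+		 */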
+		if (!is_in_usb_c_dp4_mode)
+			link_settings->lane_count = MIN(LANE_COUNT_TWO, link_settings->lane_count);
+	}
+
+}
+
+static const struct link_encoder_funcs dcn32_link_enc_funcs = {
+	.read_state = link_enc2_read_state,
+	.validate_output_with_stream =
+			dcn30_link_encoder_validate_output_with_stream,
+	.hw_init = enc32_hw_init,
+	.setup = dcn10_link_encoder_setup,
+	.enable_tmds_output = dcn10_link_encoder_enable_tmds_output,
+	.enable_dp_output = dcn32_link_encoder_enable_dp_output,
+	.enable_dp_mst_output = dcn10_link_encoder_enable_dp_mst_output,
+	.disable_output = dcn10_link_encoder_disable_output,
+	.dp_set_lane_settings = dcn10_link_encoder_dp_set_lane_settings,
+	.dp_set_phy_pattern = dcn10_link_encoder_dp_set_phy_pattern,
+	.update_mst_stream_allocation_table =
+		dcn10_link_encoder_update_mst_stream_allocation_table,
+	.psr_program_dp_dphy_fast_training =
+			dcn10_psr_program_dp_dphy_fast_training,
+	.psr_program_secondary_packet = dcn10_psr_program_secondary_packet,
+	.connect_dig_be_to_fe = dcn10_link_encoder_connect_dig_be_to_fe,
+	.enable_hpd = dcn10_link_encoder_enable_hpd,
+	.disable_hpd = dcn10_link_encoder_disable_hpd,
+	.is_dig_enabled = dcn10_is_dig_enabled,
+	.destroy = dcn10_link_encoder_destroy,
+	.fec_set_enable = enc2_fec_set_enable,
+	.fec_set_ready = enc2_fec_set_ready,
+	.fec_is_active = enc2_fec_is_active,
+	.get_dig_frontend = dcn10_get_dig_frontend,
+	.get_dig_mode = dcn10_get_dig_mode,
+	.is_in_alt_mode = dcn32_link_encoder_is_in_alt_mode,
+	.get_max_link_cap = dcn32_link_encoder_get_max_link_cap,
+	.set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
+};
+
+void dcn32_link_encoder_construct(
+	struct dcn20_link_encoder *enc20,
+	const struct encoder_init_data *init_data,
+	const struct encoder_feature_support *enc_features,
+	const struct dcn10_link_enc_registers *link_regs,
+	const struct dcn10_link_enc_aux_registers *aux_regs,
+	const struct dcn10_link_enc_hpd_registers *hpd_regs,
+	const struct dcn10_link_enc_shift *link_shift,
+	const struct dcn10_link_enc_mask *link_mask)
+{
+	struct bp_connector_speed_cap_info bp_cap_info = {0};
+	const struct dc_vbios_funcs *bp_funcs = init_data->ctx->dc_bios->funcs;
+	enum bp_result result = BP_RESULT_OK;
+	struct dcn10_link_encoder *enc10 = &enc20->enc10;
+
+	enc10->base.funcs = &dcn32_link_enc_funcs;
+	enc10->base.ctx = init_data->ctx;
+	enc10->base.id = init_data->encoder;
+
+	enc10->base.hpd_source = init_data->hpd_source;
+	enc10->base.connector = init_data->connector;
+
+	enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
+
+	enc10->base.features = *enc_features;
+
+	enc10->base.transmitter = init_data->transmitter;
+
+	/* set the flag to indicate whether the driver polls the I2C data pin
+	 * while doing the DP sink detect
+	 */
+
+/*	if (dal_adapter_service_is_feature_supported(as,
+		FEATURE_DP_SINK_DETECT_POLL_DATA_PIN))
+		enc10->base.features.flags.bits.
+			DP_SINK_DETECT_POLL_DATA_PIN = true;*/
+
+	enc10->base.output_signals =
+		SIGNAL_TYPE_DVI_SINGLE_LINK |
+		SIGNAL_TYPE_DVI_DUAL_LINK |
+		SIGNAL_TYPE_LVDS |
+		SIGNAL_TYPE_DISPLAY_PORT |
+		SIGNAL_TYPE_DISPLAY_PORT_MST |
+		SIGNAL_TYPE_EDP |
+		SIGNAL_TYPE_HDMI_TYPE_A;
+
+	enc10->link_regs = link_regs;
+	enc10->aux_regs = aux_regs;
+	enc10->hpd_regs = hpd_regs;
+	enc10->link_shift = link_shift;
+	enc10->link_mask = link_mask;
+
+	switch (enc10->base.transmitter) {
+	case TRANSMITTER_UNIPHY_A:
+		enc10->base.preferred_engine = ENGINE_ID_DIGA;
+	break;
+	case TRANSMITTER_UNIPHY_B:
+		enc10->base.preferred_engine = ENGINE_ID_DIGB;
+	break;
+	case TRANSMITTER_UNIPHY_C:
+		enc10->base.preferred_engine = ENGINE_ID_DIGC;
+	break;
+	case TRANSMITTER_UNIPHY_D:
+		enc10->base.preferred_engine = ENGINE_ID_DIGD;
+	break;
+	case TRANSMITTER_UNIPHY_E:
+		enc10->base.preferred_engine = ENGINE_ID_DIGE;
+	break;
+	default:
+		ASSERT_CRITICAL(false);
+		enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
+	}
+
+	/* default to one to mirror Windows behavior */
+	enc10->base.features.flags.bits.HDMI_6GB_EN = 1;
+
+	if (bp_funcs->get_connector_speed_cap_info)
+		result = bp_funcs->get_connector_speed_cap_info(enc10->base.ctx->dc_bios,
+						enc10->base.connector, &bp_cap_info);
+
+	/* Override features with DCE-specific values */
+	if (result == BP_RESULT_OK) {
+		enc10->base.features.flags.bits.IS_HBR2_CAPABLE =
+				bp_cap_info.DP_HBR2_EN;
+		enc10->base.features.flags.bits.IS_HBR3_CAPABLE =
+				bp_cap_info.DP_HBR3_EN;
+		enc10->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
+		enc10->base.features.flags.bits.IS_DP2_CAPABLE = 1;
+		enc10->base.features.flags.bits.IS_UHBR10_CAPABLE = bp_cap_info.DP_UHBR10_EN;
+		enc10->base.features.flags.bits.IS_UHBR13_5_CAPABLE = bp_cap_info.DP_UHBR13_5_EN;
+		enc10->base.features.flags.bits.IS_UHBR20_CAPABLE = bp_cap_info.DP_UHBR20_EN;
+	} else {
+		DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
+				__func__,
+				result);
+	}
+	if (enc10->base.ctx->dc->debug.hdmi20_disable) {
+		enc10->base.features.flags.bits.HDMI_6GB_EN = 0;
+	}
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
new file mode 100644
index 000000000000..e880b76d5f25
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ *  and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_LINK_ENCODER__DCN32_H__
+#define __DC_LINK_ENCODER__DCN32_H__
+
+#include "dcn30/dcn30_dio_link_encoder.h"
+
+void dcn32_link_encoder_construct(
+	struct dcn20_link_encoder *enc20,
+	const struct encoder_init_data *init_data,
+	const struct encoder_feature_support *enc_features,
+	const struct dcn10_link_enc_registers *link_regs,
+	const struct dcn10_link_enc_aux_registers *aux_regs,
+	const struct dcn10_link_enc_hpd_registers *hpd_regs,
+	const struct dcn10_link_enc_shift *link_shift,
+	const struct dcn10_link_enc_mask *link_mask);
+
+
+#endif /* __DC_LINK_ENCODER__DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
new file mode 100644
index 000000000000..62532da94913
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
@@ -0,0 +1,427 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ *  and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "dc_bios_types.h"
+#include "dcn30/dcn30_dio_stream_encoder.h"
+#include "dcn32_dio_stream_encoder.h"
+#include "reg_helper.h"
+#include "hw_shared.h"
+#include "inc/link_dpcd.h"
+#include "dpcd_defs.h"
+
+#define DC_LOGGER \
+		enc1->base.ctx->logger
+
+#define REG(reg)\
+	(enc1->regs->reg)
+
+#undef FN
+#define FN(reg_name, field_name) \
+	enc1->se_shift->field_name, enc1->se_mask->field_name
+
+#define VBI_LINE_0 0
+#define HDMI_CLOCK_CHANNEL_RATE_MORE_340M 340000
+
+#define CTX \
+	enc1->base.ctx
+
+
+
+static void enc32_dp_set_odm_combine(
+	struct stream_encoder *enc,
+	bool odm_combine)
+{
+	//struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+
+	//TODO: REG_UPDATE(DP_PIXEL_FORMAT, DP_PIXEL_COMBINE, odm_combine);
+}
+
+/* setup stream encoder in dvi mode */
+void enc32_stream_encoder_dvi_set_stream_attribute(
+	struct stream_encoder *enc,
+	struct dc_crtc_timing *crtc_timing,
+	bool is_dual_link)
+{
+	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+
+	if (!enc->ctx->dc->debug.avoid_vbios_exec_table) {
+		struct bp_encoder_control cntl = {0};
+
+		cntl.action = ENCODER_CONTROL_SETUP;
+		cntl.engine_id = enc1->base.id;
+		cntl.signal = is_dual_link ?
+			SIGNAL_TYPE_DVI_DUAL_LINK : SIGNAL_TYPE_DVI_SINGLE_LINK;
+		cntl.enable_dp_audio = false;
+		cntl.pixel_clock = crtc_timing->pix_clk_100hz / 10;
+		cntl.lanes_number = (is_dual_link) ? LANE_COUNT_EIGHT : LANE_COUNT_FOUR;
+
+		if (enc1->base.bp->funcs->encoder_control(
+				enc1->base.bp, &cntl) != BP_RESULT_OK)
+			return;
+
+	} else {
+
+		//Set pattern for clock channel, default value 0x63 does not work
+		REG_UPDATE(DIG_CLOCK_PATTERN, DIG_CLOCK_PATTERN, 0x1F);
+
+		//DIG_BE_TMDS_DVI_MODE : TMDS-DVI mode is already set in link_encoder_setup
+
+		//DIG_SOURCE_SELECT is already set in dig_connect_to_otg
+
+		/* DIG_START is removed from the register spec */
+	}
+
+	ASSERT(crtc_timing->pixel_encoding == PIXEL_ENCODING_RGB);
+	ASSERT(crtc_timing->display_color_depth == COLOR_DEPTH_888);
+	enc1_stream_encoder_set_stream_attribute_helper(enc1, crtc_timing);
+}
+
+/* setup stream encoder in hdmi mode */
+static void enc32_stream_encoder_hdmi_set_stream_attribute(
+	struct stream_encoder *enc,
+	struct dc_crtc_timing *crtc_timing,
+	int actual_pix_clk_khz,
+	bool enable_audio)
+{
+	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+
+	if (!enc->ctx->dc->debug.avoid_vbios_exec_table) {
+		struct bp_encoder_control cntl = {0};
+
+		cntl.action = ENCODER_CONTROL_SETUP;
+		cntl.engine_id = enc1->base.id;
+		cntl.signal = SIGNAL_TYPE_HDMI_TYPE_A;
+		cntl.enable_dp_audio = enable_audio;
+		cntl.pixel_clock = actual_pix_clk_khz;
+		cntl.lanes_number = LANE_COUNT_FOUR;
+
+		if (enc1->base.bp->funcs->encoder_control(
+				enc1->base.bp, &cntl) != BP_RESULT_OK)
+			return;
+
+	} else {
+
+		//Set pattern for clock channel, default value 0x63 does not work
+		REG_UPDATE(DIG_CLOCK_PATTERN, DIG_CLOCK_PATTERN, 0x1F);
+
+		//DIG_BE_TMDS_HDMI_MODE : TMDS-HDMI mode is already set in link_encoder_setup
+
+		//DIG_SOURCE_SELECT is already set in dig_connect_to_otg
+
+		/* DIG_START is removed from the register spec */
+	}
+
+	/* Configure pixel encoding */
+	enc1_stream_encoder_set_stream_attribute_helper(enc1, crtc_timing);
+
+	/* setup HDMI engine */
+	REG_UPDATE_6(HDMI_CONTROL,
+		HDMI_PACKET_GEN_VERSION, 1,
+		HDMI_KEEPOUT_MODE, 1,
+		HDMI_DEEP_COLOR_ENABLE, 0,
+		HDMI_DATA_SCRAMBLE_EN, 0,
+		HDMI_NO_EXTRA_NULL_PACKET_FILLED, 1,
+		HDMI_CLOCK_CHANNEL_RATE, 0);
+
+	/* Configure color depth */
+	switch (crtc_timing->display_color_depth) {
+	case COLOR_DEPTH_888:
+		REG_UPDATE(HDMI_CONTROL, HDMI_DEEP_COLOR_DEPTH, 0);
+		break;
+	case COLOR_DEPTH_101010:
+		if (crtc_timing->pixel_encoding == PIXEL_ENCODING_YCBCR422) {
+			REG_UPDATE_2(HDMI_CONTROL,
+					HDMI_DEEP_COLOR_DEPTH, 1,
+					HDMI_DEEP_COLOR_ENABLE, 0);
+		} else {
+			REG_UPDATE_2(HDMI_CONTROL,
+					HDMI_DEEP_COLOR_DEPTH, 1,
+					HDMI_DEEP_COLOR_ENABLE, 1);
+		}
+		break;
+	case COLOR_DEPTH_121212:
+		if (crtc_timing->pixel_encoding == PIXEL_ENCODING_YCBCR422) {
+			REG_UPDATE_2(HDMI_CONTROL,
+					HDMI_DEEP_COLOR_DEPTH, 2,
+					HDMI_DEEP_COLOR_ENABLE, 0);
+		} else {
+			REG_UPDATE_2(HDMI_CONTROL,
+					HDMI_DEEP_COLOR_DEPTH, 2,
+					HDMI_DEEP_COLOR_ENABLE, 1);
+		}
+		break;
+	case COLOR_DEPTH_161616:
+		REG_UPDATE_2(HDMI_CONTROL,
+				HDMI_DEEP_COLOR_DEPTH, 3,
+				HDMI_DEEP_COLOR_ENABLE, 1);
+		break;
+	default:
+		break;
+	}
+
+	if (actual_pix_clk_khz >= HDMI_CLOCK_CHANNEL_RATE_MORE_340M) {
+		/* enable HDMI data scrambler
+		 * HDMI_CLOCK_CHANNEL_RATE_MORE_340M
+		 * Clock channel frequency is 1/4 of character rate.
+		 */
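+		/* Illustrative example (hypothetical mode): a 594 MHz TMDS
+		 * character rate would put the clock channel at roughly 148.5 MHz.
+		 */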
+		REG_UPDATE_2(HDMI_CONTROL,
+			HDMI_DATA_SCRAMBLE_EN, 1,
+			HDMI_CLOCK_CHANNEL_RATE, 1);
+	} else if (crtc_timing->flags.LTE_340MCSC_SCRAMBLE) {
+
+		/* TODO: New feature for DCE11, still need to implement */
+
+		/* enable HDMI data scrambler
+		 * HDMI_CLOCK_CHANNEL_FREQ_EQUAL_TO_CHAR_RATE
+		 * Clock channel frequency is the same
+		 * as character rate
+		 */
+		REG_UPDATE_2(HDMI_CONTROL,
+			HDMI_DATA_SCRAMBLE_EN, 1,
+			HDMI_CLOCK_CHANNEL_RATE, 0);
+	}
+
+
+	/* Enable transmission of General Control packet on every frame */
+	REG_UPDATE_3(HDMI_VBI_PACKET_CONTROL,
+		HDMI_GC_CONT, 1,
+		HDMI_GC_SEND, 1,
+		HDMI_NULL_SEND, 1);
+
+	/* Disable Audio Content Protection packet transmission */
+	REG_UPDATE(HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, 0);
+
+	/* following belongs to audio */
+	/* Enable Audio InfoFrame packet transmission. */
+	REG_UPDATE(HDMI_INFOFRAME_CONTROL0, HDMI_AUDIO_INFO_SEND, 1);
+
+	/* update double-buffered AUDIO_INFO registers immediately */
+	ASSERT(enc->afmt);
+	enc->afmt->funcs->audio_info_immediate_update(enc->afmt);
+
+	/* Select line number on which to send Audio InfoFrame packets */
+	REG_UPDATE(HDMI_INFOFRAME_CONTROL1, HDMI_AUDIO_INFO_LINE,
+				VBI_LINE_0 + 2);
+
+	/* set HDMI GC AVMUTE */
+	REG_UPDATE(HDMI_GC, HDMI_GC_AVMUTE, 0);
+}
+
+
+
+static bool is_two_pixels_per_containter(const struct dc_crtc_timing *timing)
+{
+	bool two_pix = timing->pixel_encoding == PIXEL_ENCODING_YCBCR420;
+
+	two_pix = two_pix || (timing->flags.DSC && timing->pixel_encoding == PIXEL_ENCODING_YCBCR422
+			&& !timing->dsc_cfg.ycbcr422_simple);
+	return two_pix;
+}
+
+static void enc32_stream_encoder_dp_unblank(
+		struct dc_link *link,
+		struct stream_encoder *enc,
+		const struct encoder_unblank_param *param)
+{
+	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+
+	if (param->link_settings.link_rate != LINK_RATE_UNKNOWN) {
+		uint32_t n_vid = 0x8000;
+		uint32_t m_vid;
+		uint32_t n_multiply = 0;
+		uint64_t m_vid_l = n_vid;
+
+		/* YCbCr 4:2:0 : Computed VID_M will be 2X the input rate */
+		if (is_two_pixels_per_containter(&param->timing) || param->opp_cnt > 1) {
+			/* this logic should be the same as in get_pixel_clock_parameters() */
+			n_multiply = 1;
+		}
+		/* M / N = Fstream / Flink
+		 * m_vid / n_vid = pixel rate / link rate
+		 */
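+		/* Illustrative example with hypothetical numbers: for a 594 MHz
+		 * pixel clock (594000 kHz) on an HBR2 link (link_rate value 20),
+		 * and assuming LINK_RATE_REF_FREQ_IN_KHZ is 27000, the link symbol
+		 * clock is 540000 kHz, so m_vid = 0x8000 * 594000 / 540000,
+		 * i.e. ~36044 (0x8CCC) after the integer divide.
+		 */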
+
+		m_vid_l *= param->timing.pix_clk_100hz / 10;
+		m_vid_l = div_u64(m_vid_l,
+			param->link_settings.link_rate
+				* LINK_RATE_REF_FREQ_IN_KHZ);
+
+		m_vid = (uint32_t) m_vid_l;
+
+		/* enable auto measurement */
+
+		REG_UPDATE(DP_VID_TIMING, DP_VID_M_N_GEN_EN, 0);
+
+		/* auto measurement needs 1 full 0x8000 symbol cycle to kick in,
+		 * therefore program initial value for Mvid and Nvid
+		 */
+
+		REG_UPDATE(DP_VID_N, DP_VID_N, n_vid);
+
+		REG_UPDATE(DP_VID_M, DP_VID_M, m_vid);
+
+		REG_UPDATE_2(DP_VID_TIMING,
+				DP_VID_M_N_GEN_EN, 1,
+				DP_VID_N_MUL, n_multiply);
+	}
+
+	/* make sure stream is disabled before resetting steer fifo */
+	REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, false);
+	REG_WAIT(DP_VID_STREAM_CNTL, DP_VID_STREAM_STATUS, 0, 10, 5000);
+
+	/* DIG_START is removed from the register spec */
+
+	/* switch DP encoder to CRTC data, but reset the FIFO first. It may
+	 * overflow during the mode transition and sometimes does not recover.
+	 */
+	REG_UPDATE(DP_STEER_FIFO, DP_STEER_FIFO_RESET, 1);
+	udelay(10);
+
+	REG_UPDATE(DP_STEER_FIFO, DP_STEER_FIFO_RESET, 0);
+
+	/* wait 100us for DIG/DP logic to prime
+	 * (i.e. a few video lines)
+	 */
+	udelay(100);
+
+	/* The hardware will start sending video at the start of the next DP
+	 * frame (i.e. the rising edge of vblank).
+	 * NOTE: We used to program DP_VID_STREAM_DIS_DEFER = 2 here, but this
+	 * register has no effect on the enable transition; HW always guarantees
+	 * VID_STREAM enable at the start of the next frame, and this is not
+	 * programmable.
+	 */
+
+	REG_UPDATE(DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, true);
+
+	dp_source_sequence_trace(link, DPCD_SOURCE_SEQ_AFTER_ENABLE_DP_VID_STREAM);
+}
+
+/* Set DSC-related configuration.
+ *   dsc_mode: 0 disables DSC, other values enable DSC in specified format
+ *   dsc_bytes_per_pixel: DP_DSC_BYTES_PER_PIXEL removed in DCN32
+ *   dsc_slice_width: DP_DSC_SLICE_WIDTH removed in DCN32
+ */
+static void enc32_dp_set_dsc_config(struct stream_encoder *enc,
+					enum optc_dsc_mode dsc_mode,
+					uint32_t dsc_bytes_per_pixel,
+					uint32_t dsc_slice_width)
+{
+	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+
+	REG_UPDATE(DP_DSC_CNTL,	DP_DSC_MODE, dsc_mode);
+}
+
+/* This function reads DSC-related register fields into an enc_state struct
+ * so that they can be logged later by dcn10_log_hw_state.
+ */
+static void enc32_read_state(struct stream_encoder *enc, struct enc_state *s)
+{
+	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+
+	//if dsc is enabled, continue to read
+	REG_GET(DP_DSC_CNTL, DP_DSC_MODE, &s->dsc_mode);
+	if (s->dsc_mode) {
+		REG_GET(DP_GSP11_CNTL, DP_SEC_GSP11_LINE_NUM, &s->sec_gsp_pps_line_num);
+
+		REG_GET(DP_MSA_VBID_MISC, DP_VBID6_LINE_REFERENCE, &s->vbid6_line_reference);
+		REG_GET(DP_MSA_VBID_MISC, DP_VBID6_LINE_NUM, &s->vbid6_line_num);
+
+		REG_GET(DP_GSP11_CNTL, DP_SEC_GSP11_ENABLE, &s->sec_gsp_pps_enable);
+		REG_GET(DP_SEC_CNTL, DP_SEC_STREAM_ENABLE, &s->sec_stream_enable);
+	}
+}
+
+
+static const struct stream_encoder_funcs dcn32_str_enc_funcs = {
+	.dp_set_odm_combine =
+		enc32_dp_set_odm_combine,
+	.dp_set_stream_attribute =
+		enc2_stream_encoder_dp_set_stream_attribute,
+	.hdmi_set_stream_attribute =
+		enc32_stream_encoder_hdmi_set_stream_attribute,
+	.dvi_set_stream_attribute =
+		enc32_stream_encoder_dvi_set_stream_attribute,
+	.set_throttled_vcp_size =
+		enc1_stream_encoder_set_throttled_vcp_size,
+	.update_hdmi_info_packets =
+		enc3_stream_encoder_update_hdmi_info_packets,
+	.stop_hdmi_info_packets =
+		enc3_stream_encoder_stop_hdmi_info_packets,
+	.update_dp_info_packets =
+		enc3_stream_encoder_update_dp_info_packets,
+	.stop_dp_info_packets =
+		enc1_stream_encoder_stop_dp_info_packets,
+	.reset_fifo =
+		enc1_stream_encoder_reset_fifo,
+	.dp_blank =
+		enc1_stream_encoder_dp_blank,
+	.dp_unblank =
+		enc32_stream_encoder_dp_unblank,
+	.audio_mute_control = enc3_audio_mute_control,
+
+	.dp_audio_setup = enc3_se_dp_audio_setup,
+	.dp_audio_enable = enc3_se_dp_audio_enable,
+	.dp_audio_disable = enc1_se_dp_audio_disable,
+
+	.hdmi_audio_setup = enc3_se_hdmi_audio_setup,
+	.hdmi_audio_disable = enc1_se_hdmi_audio_disable,
+	.setup_stereo_sync  = enc1_setup_stereo_sync,
+	.set_avmute = enc1_stream_encoder_set_avmute,
+	.dig_connect_to_otg = enc1_dig_connect_to_otg,
+	.dig_source_otg = enc1_dig_source_otg,
+
+	.dp_get_pixel_format  = enc1_stream_encoder_dp_get_pixel_format,
+
+	.enc_read_state = enc32_read_state,
+	.dp_set_dsc_config = enc32_dp_set_dsc_config,
+	.dp_set_dsc_pps_info_packet = enc3_dp_set_dsc_pps_info_packet,
+	.set_dynamic_metadata = enc2_set_dynamic_metadata,
+	.hdmi_reset_stream_attribute = enc1_reset_hdmi_stream_attribute,
+};
+
+void dcn32_dio_stream_encoder_construct(
+	struct dcn10_stream_encoder *enc1,
+	struct dc_context *ctx,
+	struct dc_bios *bp,
+	enum engine_id eng_id,
+	struct vpg *vpg,
+	struct afmt *afmt,
+	const struct dcn10_stream_enc_registers *regs,
+	const struct dcn10_stream_encoder_shift *se_shift,
+	const struct dcn10_stream_encoder_mask *se_mask)
+{
+	enc1->base.funcs = &dcn32_str_enc_funcs;
+	enc1->base.ctx = ctx;
+	enc1->base.id = eng_id;
+	enc1->base.bp = bp;
+	enc1->base.vpg = vpg;
+	enc1->base.afmt = afmt;
+	enc1->regs = regs;
+	enc1->se_shift = se_shift;
+	enc1->se_mask = se_mask;
+	enc1->base.stream_enc_inst = vpg->inst;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
new file mode 100644
index 000000000000..7241080b1553
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
@@ -0,0 +1,254 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ *  and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_DIO_STREAM_ENCODER_DCN32_H__
+#define __DC_DIO_STREAM_ENCODER_DCN32_H__
+
+#include "dcn30/dcn30_vpg.h"
+#include "dcn30/dcn30_afmt.h"
+#include "stream_encoder.h"
+#include "dcn20/dcn20_stream_encoder.h"
+
+#define SE_DCN32_REG_LIST(id)\
+	SRI(AFMT_CNTL, DIG, id), \
+	SRI(DIG_FE_CNTL, DIG, id), \
+	SRI(HDMI_CONTROL, DIG, id), \
+	SRI(HDMI_DB_CONTROL, DIG, id), \
+	SRI(HDMI_GC, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL0, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL1, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL2, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL3, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL4, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL5, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL6, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL7, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL8, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL9, DIG, id), \
+	SRI(HDMI_GENERIC_PACKET_CONTROL10, DIG, id), \
+	SRI(HDMI_INFOFRAME_CONTROL0, DIG, id), \
+	SRI(HDMI_INFOFRAME_CONTROL1, DIG, id), \
+	SRI(HDMI_VBI_PACKET_CONTROL, DIG, id), \
+	SRI(HDMI_AUDIO_PACKET_CONTROL, DIG, id),\
+	SRI(HDMI_ACR_PACKET_CONTROL, DIG, id),\
+	SRI(HDMI_ACR_32_0, DIG, id),\
+	SRI(HDMI_ACR_32_1, DIG, id),\
+	SRI(HDMI_ACR_44_0, DIG, id),\
+	SRI(HDMI_ACR_44_1, DIG, id),\
+	SRI(HDMI_ACR_48_0, DIG, id),\
+	SRI(HDMI_ACR_48_1, DIG, id),\
+	SRI(DP_DB_CNTL, DP, id), \
+	SRI(DP_MSA_MISC, DP, id), \
+	SRI(DP_MSA_VBID_MISC, DP, id), \
+	SRI(DP_MSA_COLORIMETRY, DP, id), \
+	SRI(DP_MSA_TIMING_PARAM1, DP, id), \
+	SRI(DP_MSA_TIMING_PARAM2, DP, id), \
+	SRI(DP_MSA_TIMING_PARAM3, DP, id), \
+	SRI(DP_MSA_TIMING_PARAM4, DP, id), \
+	SRI(DP_MSE_RATE_CNTL, DP, id), \
+	SRI(DP_MSE_RATE_UPDATE, DP, id), \
+	SRI(DP_PIXEL_FORMAT, DP, id), \
+	SRI(DP_SEC_CNTL, DP, id), \
+	SRI(DP_SEC_CNTL2, DP, id), \
+	SRI(DP_SEC_CNTL6, DP, id), \
+	SRI(DP_STEER_FIFO, DP, id), \
+	SRI(DP_VID_M, DP, id), \
+	SRI(DP_VID_N, DP, id), \
+	SRI(DP_VID_STREAM_CNTL, DP, id), \
+	SRI(DP_VID_TIMING, DP, id), \
+	SRI(DP_SEC_AUD_N, DP, id), \
+	SRI(DP_SEC_TIMESTAMP, DP, id), \
+	SRI(DP_DSC_CNTL, DP, id), \
+	SRI(DP_SEC_METADATA_TRANSMISSION, DP, id), \
+	SRI(HDMI_METADATA_PACKET_CONTROL, DIG, id), \
+	SRI(DP_SEC_FRAMING4, DP, id), \
+	SRI(DP_GSP11_CNTL, DP, id), \
+	SRI(DME_CONTROL, DME, id),\
+	SRI(DP_SEC_METADATA_TRANSMISSION, DP, id), \
+	SRI(HDMI_METADATA_PACKET_CONTROL, DIG, id), \
+	SRI(DIG_FE_CNTL, DIG, id), \
+	SRI(DIG_CLOCK_PATTERN, DIG, id)
+
+
+#define SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh)\
+	SE_SF(DP0_DP_PIXEL_FORMAT, DP_PIXEL_ENCODING, mask_sh),\
+	SE_SF(DP0_DP_PIXEL_FORMAT, DP_COMPONENT_DEPTH, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_PACKET_GEN_VERSION, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_KEEPOUT_MODE, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_DEEP_COLOR_ENABLE, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_DEEP_COLOR_DEPTH, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_DATA_SCRAMBLE_EN, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_NO_EXTRA_NULL_PACKET_FILLED, mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_GC_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_GC_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_NULL_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL0, HDMI_AUDIO_INFO_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL1, HDMI_AUDIO_INFO_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GC, HDMI_GC_AVMUTE, mask_sh),\
+	SE_SF(DP0_DP_MSE_RATE_CNTL, DP_MSE_RATE_X, mask_sh),\
+	SE_SF(DP0_DP_MSE_RATE_CNTL, DP_MSE_RATE_Y, mask_sh),\
+	SE_SF(DP0_DP_MSE_RATE_UPDATE, DP_MSE_RATE_UPDATE_PENDING, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP0_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_STREAM_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP1_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP2_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP3_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_MPG_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL1, DP_SEC_GSP5_LINE_REFERENCE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL2, DP_SEC_GSP4_SEND, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL2, DP_SEC_GSP4_SEND_PENDING, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL4, DP_SEC_GSP4_LINE_NUM, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL5, DP_SEC_GSP5_LINE_NUM, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL2, DP_SEC_GSP4_SEND_ANY_LINE, mask_sh),\
+	SE_SF(DP0_DP_VID_STREAM_CNTL, DP_VID_STREAM_DIS_DEFER, mask_sh),\
+	SE_SF(DP0_DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_VID_STREAM_CNTL, DP_VID_STREAM_STATUS, mask_sh),\
+	SE_SF(DP0_DP_STEER_FIFO, DP_STEER_FIFO_RESET, mask_sh),\
+	SE_SF(DP0_DP_VID_TIMING, DP_VID_M_N_GEN_EN, mask_sh),\
+	SE_SF(DP0_DP_VID_N, DP_VID_N, mask_sh),\
+	SE_SF(DP0_DP_VID_M, DP_VID_M, mask_sh),\
+	SE_SF(DIG0_HDMI_AUDIO_PACKET_CONTROL, HDMI_AUDIO_DELAY_EN, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_PACKET_CONTROL, HDMI_ACR_AUTO_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_PACKET_CONTROL, HDMI_ACR_SOURCE, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_PACKET_CONTROL, HDMI_ACR_AUDIO_PRIORITY, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_32_0, HDMI_ACR_CTS_32, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_32_1, HDMI_ACR_N_32, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_44_0, HDMI_ACR_CTS_44, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_44_1, HDMI_ACR_N_44, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_48_0, HDMI_ACR_CTS_48, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_48_1, HDMI_ACR_N_48, mask_sh),\
+	SE_SF(DP0_DP_SEC_AUD_N, DP_SEC_AUD_N, mask_sh),\
+	SE_SF(DP0_DP_SEC_TIMESTAMP, DP_SEC_TIMESTAMP_MODE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_ASP_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_ATP_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_AIP_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_ACM_ENABLE, mask_sh),\
+	SE_SF(DIG0_AFMT_CNTL, AFMT_AUDIO_CLOCK_EN, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_CLOCK_CHANNEL_RATE, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, TMDS_PIXEL_ENCODING, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, TMDS_COLOR_FORMAT, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, DIG_STEREOSYNC_SELECT, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, DIG_STEREOSYNC_GATE_EN, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP4_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP5_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP6_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP7_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL2, DP_SEC_GSP7_SEND, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL6, DP_SEC_GSP7_LINE_NUM, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL2, DP_SEC_GSP11_PPS, mask_sh),\
+	SE_SF(DP0_DP_GSP11_CNTL, DP_SEC_GSP11_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_GSP11_CNTL, DP_SEC_GSP11_LINE_NUM, mask_sh),\
+	SE_SF(DP0_DP_DB_CNTL, DP_DB_DISABLE, mask_sh),\
+	SE_SF(DP0_DP_MSA_COLORIMETRY, DP_MSA_MISC0, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM1, DP_MSA_HTOTAL, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM1, DP_MSA_VTOTAL, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM2, DP_MSA_HSTART, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM2, DP_MSA_VSTART, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM3, DP_MSA_HSYNCWIDTH, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM3, DP_MSA_HSYNCPOLARITY, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM3, DP_MSA_VSYNCWIDTH, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM3, DP_MSA_VSYNCPOLARITY, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM4, DP_MSA_HWIDTH, mask_sh),\
+	SE_SF(DP0_DP_MSA_TIMING_PARAM4, DP_MSA_VHEIGHT, mask_sh),\
+	SE_SF(DIG0_HDMI_DB_CONTROL, HDMI_DB_DISABLE, mask_sh),\
+	SE_SF(DP0_DP_VID_TIMING, DP_VID_N_MUL, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, DIG_SOURCE_SELECT, mask_sh), \
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC0_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC0_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC1_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC1_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC2_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC2_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC3_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC3_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC4_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC4_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC5_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC5_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC6_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC6_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC7_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC7_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC8_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC8_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC9_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC9_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC10_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC10_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC11_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC11_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC12_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC12_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC13_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC13_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC14_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL6, HDMI_GENERIC14_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL1, HDMI_GENERIC0_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL1, HDMI_GENERIC1_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL2, HDMI_GENERIC2_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL2, HDMI_GENERIC3_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL3, HDMI_GENERIC4_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL3, HDMI_GENERIC5_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL4, HDMI_GENERIC6_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL4, HDMI_GENERIC7_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL7, HDMI_GENERIC8_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL7, HDMI_GENERIC9_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL8, HDMI_GENERIC10_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL8, HDMI_GENERIC11_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL9, HDMI_GENERIC12_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL9, HDMI_GENERIC13_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL10, HDMI_GENERIC14_LINE, mask_sh),\
+	SE_SF(DP0_DP_DSC_CNTL, DP_DSC_MODE, mask_sh),\
+	SE_SF(DP0_DP_MSA_VBID_MISC, DP_VBID6_LINE_REFERENCE, mask_sh),\
+	SE_SF(DP0_DP_MSA_VBID_MISC, DP_VBID6_LINE_NUM, mask_sh),\
+	SE_SF(DME0_DME_CONTROL, METADATA_ENGINE_EN, mask_sh),\
+	SE_SF(DME0_DME_CONTROL, METADATA_HUBP_REQUESTOR_ID, mask_sh),\
+	SE_SF(DME0_DME_CONTROL, METADATA_STREAM_TYPE, mask_sh),\
+	SE_SF(DP0_DP_SEC_METADATA_TRANSMISSION, DP_SEC_METADATA_PACKET_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_METADATA_TRANSMISSION, DP_SEC_METADATA_PACKET_LINE_REFERENCE, mask_sh),\
+	SE_SF(DP0_DP_SEC_METADATA_TRANSMISSION, DP_SEC_METADATA_PACKET_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_METADATA_PACKET_CONTROL, HDMI_METADATA_PACKET_ENABLE, mask_sh),\
+	SE_SF(DIG0_HDMI_METADATA_PACKET_CONTROL, HDMI_METADATA_PACKET_LINE_REFERENCE, mask_sh),\
+	SE_SF(DIG0_HDMI_METADATA_PACKET_CONTROL, HDMI_METADATA_PACKET_LINE, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, DOLBY_VISION_EN, mask_sh),\
+	SE_SF(DP0_DP_SEC_FRAMING4, DP_SST_SDP_SPLITTING, mask_sh),\
+	SE_SF(DIG0_DIG_CLOCK_PATTERN, DIG_CLOCK_PATTERN, mask_sh)
+
+#define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
+	SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, mask_sh)
+
+void dcn32_dio_stream_encoder_construct(
+	struct dcn10_stream_encoder *enc1,
+	struct dc_context *ctx,
+	struct dc_bios *bp,
+	enum engine_id eng_id,
+	struct vpg *vpg,
+	struct afmt *afmt,
+	const struct dcn10_stream_enc_registers *regs,
+	const struct dcn10_stream_encoder_shift *se_shift,
+	const struct dcn10_stream_encoder_mask *se_mask);
+
+#endif /* __DC_DIO_STREAM_ENCODER_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c
new file mode 100644
index 000000000000..f349cbe2a0f0
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.c
@@ -0,0 +1,164 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "core_types.h"
+#include "reg_helper.h"
+#include "dcn32_dpp.h"
+#include "basics/conversion.h"
+#include "dcn30/dcn30_cm_common.h"
+
+/* Compute the maximum number of lines that we can fit in the line buffer */
+void dscl32_calc_lb_num_partitions(
+		const struct scaler_data *scl_data,
+		enum lb_memory_config lb_config,
+		int *num_part_y,
+		int *num_part_c)
+{
+	int memory_line_size_y, memory_line_size_c, memory_line_size_a,
+		lb_memory_size, lb_memory_size_c, lb_memory_size_a, num_partitions_a;
+
+	int line_size = scl_data->viewport.width < scl_data->recout.width ?
+			scl_data->viewport.width : scl_data->recout.width;
+	int line_size_c = scl_data->viewport_c.width < scl_data->recout.width ?
+			scl_data->viewport_c.width : scl_data->recout.width;
+
+	if (line_size == 0)
+		line_size = 1;
+
+	if (line_size_c == 0)
+		line_size_c = 1;
+
+	memory_line_size_y = (line_size + 5) / 6; /* +5 to ceil */
+	memory_line_size_c = (line_size_c + 5) / 6; /* +5 to ceil */
+	memory_line_size_a = (line_size + 5) / 6; /* +5 to ceil */
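+	/* Illustrative example with hypothetical numbers: assuming each line
+	 * buffer word covers six pixels (hence the divide-by-6 with +5 to
+	 * ceil), a 3840-pixel line needs (3840 + 5) / 6 = 640 words, so the
+	 * 970-word LB_MEMORY_CONFIG_1 buffers below hold 970 / 640 = 1
+	 * partition each.
+	 */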
+
+	if (lb_config == LB_MEMORY_CONFIG_1) {
+		lb_memory_size = 970;
+		lb_memory_size_c = 970;
+		lb_memory_size_a = 970;
+	} else if (lb_config == LB_MEMORY_CONFIG_2) {
+		lb_memory_size = 1290;
+		lb_memory_size_c = 1290;
+		lb_memory_size_a = 1290;
+	} else if (lb_config == LB_MEMORY_CONFIG_3) {
+		if (scl_data->viewport.width  == scl_data->h_active &&
+			scl_data->viewport.height == scl_data->v_active) {
+			/* 420 mode: luma using all 3 mem from Y, plus 3rd mem from Cr and Cb */
+			/* use increased LB size for calculation only if Scaler not enabled */
+			lb_memory_size = 970 + 1290 + 1170 + 1170 + 1170;
+			lb_memory_size_c = 970 + 1290;
+			lb_memory_size_a = 970 + 1290 + 1170;
+		} else {
+			/* 420 mode: luma using all 3 mem from Y, plus 3rd mem from Cr and Cb */
+			lb_memory_size = 970 + 1290 + 484 + 484 + 484;
+			lb_memory_size_c = 970 + 1290;
+			lb_memory_size_a = 970 + 1290 + 484;
+		}
+	} else {
+		if (scl_data->viewport.width  == scl_data->h_active &&
+			scl_data->viewport.height == scl_data->v_active) {
+			/* use increased LB size for calculation only if Scaler not enabled */
+			lb_memory_size = 970 + 1290 + 1170;
+			lb_memory_size_c = 970 + 1290 + 1170;
+			lb_memory_size_a = 970 + 1290 + 1170;
+		} else {
+			lb_memory_size = 970 + 1290 + 484;
+			lb_memory_size_c = 970 + 1290 + 484;
+			lb_memory_size_a = 970 + 1290 + 484;
+		}
+	}
+	*num_part_y = lb_memory_size / memory_line_size_y;
+	*num_part_c = lb_memory_size_c / memory_line_size_c;
+	num_partitions_a = lb_memory_size_a / memory_line_size_a;
+
+	if (scl_data->lb_params.alpha_en
+			&& (num_partitions_a < *num_part_y))
+		*num_part_y = num_partitions_a;
+
+	if (*num_part_y > 32)
+		*num_part_y = 32;
+	if (*num_part_c > 32)
+		*num_part_c = 32;
+}
+
+static struct dpp_funcs dcn32_dpp_funcs = {
+	.dpp_program_gamcor_lut		= dpp3_program_gamcor_lut,
+	.dpp_read_state				= dpp30_read_state,
+	.dpp_reset					= dpp_reset,
+	.dpp_set_scaler				= dpp1_dscl_set_scaler_manual_scale,
+	.dpp_get_optimal_number_of_taps	= dpp3_get_optimal_number_of_taps,
+	.dpp_set_gamut_remap		= dpp3_cm_set_gamut_remap,
+	.dpp_set_csc_adjustment		= NULL,
+	.dpp_set_csc_default		= NULL,
+	.dpp_program_regamma_pwl	= NULL,
+	.dpp_set_pre_degam			= dpp3_set_pre_degam,
+	.dpp_program_input_lut		= NULL,
+	.dpp_full_bypass			= dpp1_full_bypass,
+	.dpp_setup					= dpp3_cnv_setup,
+	.dpp_program_degamma_pwl	= NULL,
+	.dpp_program_cm_dealpha		= dpp3_program_cm_dealpha,
+	.dpp_program_cm_bias		= dpp3_program_cm_bias,
+
+	.dpp_program_blnd_lut		= NULL, // BLNDGAM is removed completely in DCN3.2 DPP
+	.dpp_program_shaper_lut		= NULL, // CM SHAPER block is removed in DCN3.2 DPP, (it is in MPCC, programmable before or after BLND)
+	.dpp_program_3dlut			= NULL, // CM 3DLUT block is removed in DCN3.2 DPP, (it is in MPCC, programmable before or after BLND)
+
+	.dpp_program_bias_and_scale	= NULL,
+	.dpp_cnv_set_alpha_keyer	= dpp2_cnv_set_alpha_keyer,
+	.set_cursor_attributes		= dpp3_set_cursor_attributes,
+	.set_cursor_position		= dpp1_set_cursor_position,
+	.set_optional_cursor_attributes	= dpp1_cnv_set_optional_cursor_attributes,
+	.dpp_dppclk_control			= dpp1_dppclk_control,
+	.dpp_set_hdr_multiplier		= dpp3_set_hdr_multiplier,
+};
+
+
+static struct dpp_caps dcn32_dpp_cap = {
+	.dscl_data_proc_format = DSCL_DATA_PRCESSING_FLOAT_FORMAT,
+	.max_lb_partitions = 31,
+	.dscl_calc_lb_num_partitions = dscl32_calc_lb_num_partitions,
+};
+
+bool dpp32_construct(
+	struct dcn3_dpp *dpp,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dcn3_dpp_registers *tf_regs,
+	const struct dcn3_dpp_shift *tf_shift,
+	const struct dcn3_dpp_mask *tf_mask)
+{
+	dpp->base.ctx = ctx;
+
+	dpp->base.inst = inst;
+	dpp->base.funcs = &dcn32_dpp_funcs;
+	dpp->base.caps = &dcn32_dpp_cap;
+
+	dpp->tf_regs = tf_regs;
+	dpp->tf_shift = tf_shift;
+	dpp->tf_mask = tf_mask;
+
+	return true;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h
new file mode 100644
index 000000000000..572958d287eb
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dpp.h
@@ -0,0 +1,38 @@
+/* Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DCN32_DPP_H__
+#define __DCN32_DPP_H__
+
+#include "dcn20/dcn20_dpp.h"
+#include "dcn30/dcn30_dpp.h"
+
+bool dpp32_construct(struct dcn3_dpp *dpp3,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dcn3_dpp_registers *tf_regs,
+	const struct dcn3_dpp_shift *tf_shift,
+	const struct dcn3_dpp_mask *tf_mask);
+
+#endif /* __DCN32_DPP_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
new file mode 100644
index 000000000000..4dbad8d4b4fc
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.c
@@ -0,0 +1,90 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ *  and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#include "dc_bios_types.h"
+#include "dcn31/dcn31_hpo_dp_link_encoder.h"
+#include "dcn32_hpo_dp_link_encoder.h"
+#include "reg_helper.h"
+#include "dc_link.h"
+#include "stream_encoder.h"
+
+#define DC_LOGGER \
+		enc3->base.ctx->logger
+
+#define REG(reg)\
+	(enc3->regs->reg)
+
+#undef FN
+#define FN(reg_name, field_name) \
+	enc3->hpo_le_shift->field_name, enc3->hpo_le_mask->field_name
+
+#define CTX \
+	enc3->base.ctx
+
+static bool dcn32_hpo_dp_link_enc_is_in_alt_mode(
+		struct hpo_dp_link_encoder *enc)
+{
+	struct dcn31_hpo_dp_link_encoder *enc3 = DCN3_1_HPO_DP_LINK_ENC_FROM_HPO_LINK_ENC(enc);
+	uint32_t dp_alt_mode_disable = 0;
+
+	ASSERT((enc->transmitter >= TRANSMITTER_UNIPHY_A) && (enc->transmitter <= TRANSMITTER_UNIPHY_E));
+
+	/* if value == 1 alt mode is disabled, otherwise it is enabled */
+	REG_GET(RDPCSTX_PHY_CNTL6[enc->transmitter], RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
+	return (dp_alt_mode_disable == 0);
+}
+
+
+
+static struct hpo_dp_link_encoder_funcs dcn32_hpo_dp_link_encoder_funcs = {
+	.enable_link_phy = dcn31_hpo_dp_link_enc_enable_dp_output,
+	.disable_link_phy = dcn31_hpo_dp_link_enc_disable_output,
+	.link_enable = dcn31_hpo_dp_link_enc_enable,
+	.link_disable = dcn31_hpo_dp_link_enc_disable,
+	.set_link_test_pattern = dcn31_hpo_dp_link_enc_set_link_test_pattern,
+	.update_stream_allocation_table = dcn31_hpo_dp_link_enc_update_stream_allocation_table,
+	.set_throttled_vcp_size = dcn31_hpo_dp_link_enc_set_throttled_vcp_size,
+	.is_in_alt_mode = dcn32_hpo_dp_link_enc_is_in_alt_mode,
+	.read_state = dcn31_hpo_dp_link_enc_read_state,
+	.set_ffe = dcn31_hpo_dp_link_enc_set_ffe,
+};
+
+void hpo_dp_link_encoder32_construct(struct dcn31_hpo_dp_link_encoder *enc31,
+		struct dc_context *ctx,
+		uint32_t inst,
+		const struct dcn31_hpo_dp_link_encoder_registers *hpo_le_regs,
+		const struct dcn31_hpo_dp_link_encoder_shift *hpo_le_shift,
+		const struct dcn31_hpo_dp_link_encoder_mask *hpo_le_mask)
+{
+	enc31->base.ctx = ctx;
+
+	enc31->base.inst = inst;
+	enc31->base.funcs = &dcn32_hpo_dp_link_encoder_funcs;
+	enc31->base.hpd_source = HPD_SOURCEID_UNKNOWN;
+	enc31->base.transmitter = TRANSMITTER_UNKNOWN;
+
+	enc31->regs = hpo_le_regs;
+	enc31->hpo_le_shift = hpo_le_shift;
+	enc31->hpo_le_mask = hpo_le_mask;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.h
new file mode 100644
index 000000000000..9db1323e1933
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hpo_dp_link_encoder.h
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_DCN32_HPO_DP_LINK_ENCODER_H__
+#define __DAL_DCN32_HPO_DP_LINK_ENCODER_H__
+
+#include "link_encoder.h"
+
+#define DCN3_2_HPO_DP_LINK_ENC_MASK_SH_LIST(mask_sh)\
+	SE_SF(DP_LINK_ENC0_DP_LINK_ENC_CLOCK_CONTROL, DP_LINK_ENC_CLOCK_EN, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_CONTROL, DPHY_RESET, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_CONTROL, DPHY_ENABLE, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_CONTROL, PRECODER_ENABLE, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_CONTROL, MODE, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_CONTROL, NUM_LANES, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_STATUS, STATUS, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_STATUS, SAT_UPDATE_PENDING, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_STATUS, RATE_UPDATE_PENDING, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CUSTOM0, TP_CUSTOM, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_SELECT0, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_SELECT1, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_SELECT2, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_SELECT3, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL0, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL1, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL2, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_TP_CONFIG, TP_PRBS_SEL3, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_SAT_VC0, SAT_STREAM_SOURCE, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_SAT_VC0, SAT_SLOT_COUNT, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_VC_RATE_CNTL0, STREAM_VC_RATE_X, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_VC_RATE_CNTL0, STREAM_VC_RATE_Y, mask_sh),\
+	SE_SF(DP_DPHY_SYM320_DP_DPHY_SYM32_SAT_UPDATE, SAT_UPDATE, mask_sh)
+
+void hpo_dp_link_encoder32_construct(struct dcn31_hpo_dp_link_encoder *enc31,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dcn31_hpo_dp_link_encoder_registers *hpo_le_regs,
+	const struct dcn31_hpo_dp_link_encoder_shift *hpo_le_shift,
+	const struct dcn31_hpo_dp_link_encoder_mask *hpo_le_mask);
+
+#endif   // __DAL_DCN32_HPO_DP_LINK_ENCODER_H__
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
new file mode 100644
index 000000000000..27813374f2bb
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
@@ -0,0 +1,964 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "dcn30/dcn30_hubbub.h"
+#include "dcn32_hubbub.h"
+#include "dm_services.h"
+#include "reg_helper.h"
+
+
+#define CTX \
+	hubbub2->base.ctx
+#define DC_LOGGER \
+	hubbub2->base.ctx->logger
+#define REG(reg)\
+	hubbub2->regs->reg
+
+#undef FN
+#define FN(reg_name, field_name) \
+	hubbub2->shifts->field_name, hubbub2->masks->field_name
+
+#define DCN32_CRB_SEGMENT_SIZE_KB 64
+
+static void dcn32_init_crb(struct hubbub *hubbub)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+
+	REG_GET(DCHUBBUB_DET0_CTRL, DET0_SIZE_CURRENT,
+		&hubbub2->det0_size);
+
+	REG_GET(DCHUBBUB_DET1_CTRL, DET1_SIZE_CURRENT,
+		&hubbub2->det1_size);
+
+	REG_GET(DCHUBBUB_DET2_CTRL, DET2_SIZE_CURRENT,
+		&hubbub2->det2_size);
+
+	REG_GET(DCHUBBUB_DET3_CTRL, DET3_SIZE_CURRENT,
+		&hubbub2->det3_size);
+
+	REG_GET(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE_CURRENT,
+		&hubbub2->compbuf_size_segments);
+
+	REG_SET_2(COMPBUF_RESERVED_SPACE, 0,
+			COMPBUF_RESERVED_SPACE_64B, hubbub2->pixel_chunk_size / 32,
+			COMPBUF_RESERVED_SPACE_ZS, hubbub2->pixel_chunk_size / 128);
+	REG_UPDATE(DCHUBBUB_DEBUG_CTRL_0, DET_DEPTH, 0x17F);
+}
+
+static void dcn32_program_det_size(struct hubbub *hubbub, int hubp_inst, unsigned int det_buffer_size_in_kbyte)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+
+	unsigned int det_size_segments = (det_buffer_size_in_kbyte + DCN32_CRB_SEGMENT_SIZE_KB - 1) / DCN32_CRB_SEGMENT_SIZE_KB;
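+	/* Ceiling division into 64KB CRB segments: e.g. a hypothetical request
+	 * for 384KB of DET maps to (384 + 63) / 64 = 6 segments.
+	 */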
+
+	switch (hubp_inst) {
+	case 0:
+		REG_UPDATE(DCHUBBUB_DET0_CTRL,
+					DET0_SIZE, det_size_segments);
+		hubbub2->det0_size = det_size_segments;
+		break;
+	case 1:
+		REG_UPDATE(DCHUBBUB_DET1_CTRL,
+					DET1_SIZE, det_size_segments);
+		hubbub2->det1_size = det_size_segments;
+		break;
+	case 2:
+		REG_UPDATE(DCHUBBUB_DET2_CTRL,
+					DET2_SIZE, det_size_segments);
+		hubbub2->det2_size = det_size_segments;
+		break;
+	case 3:
+		REG_UPDATE(DCHUBBUB_DET3_CTRL,
+					DET3_SIZE, det_size_segments);
+		hubbub2->det3_size = det_size_segments;
+		break;
+	default:
+		break;
+	}
+	/* Should never be hit; if it is, we have an erroneous hw config */
+	ASSERT(hubbub2->det0_size + hubbub2->det1_size + hubbub2->det2_size
+			+ hubbub2->det3_size + hubbub2->compbuf_size_segments <= hubbub2->crb_size_segs);
+}
+
+static void dcn32_program_compbuf_size(struct hubbub *hubbub, unsigned int compbuf_size_kb, bool safe_to_increase)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	unsigned int compbuf_size_segments = (compbuf_size_kb + DCN32_CRB_SEGMENT_SIZE_KB - 1) / DCN32_CRB_SEGMENT_SIZE_KB;
+
+	if (safe_to_increase || compbuf_size_segments <= hubbub2->compbuf_size_segments) {
+		if (compbuf_size_segments > hubbub2->compbuf_size_segments) {
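+			/* COMPBUF and the per-pipe DETs share the CRB, so before growing
+			 * COMPBUF wait for any in-flight DET resize to land and free its
+			 * segments (a hedged reading of the waits below; the ASSERT
+			 * further down checks the total still fits in crb_size_segs).
+			 */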
+			REG_WAIT(DCHUBBUB_DET0_CTRL, DET0_SIZE_CURRENT, hubbub2->det0_size, 1, 100);
+			REG_WAIT(DCHUBBUB_DET1_CTRL, DET1_SIZE_CURRENT, hubbub2->det1_size, 1, 100);
+			REG_WAIT(DCHUBBUB_DET2_CTRL, DET2_SIZE_CURRENT, hubbub2->det2_size, 1, 100);
+			REG_WAIT(DCHUBBUB_DET3_CTRL, DET3_SIZE_CURRENT, hubbub2->det3_size, 1, 100);
+		}
+		/* Should never be hit; if it is, we have an erroneous hw config */
+		ASSERT(hubbub2->det0_size + hubbub2->det1_size + hubbub2->det2_size
+				+ hubbub2->det3_size + compbuf_size_segments <= hubbub2->crb_size_segs);
+		REG_UPDATE(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE, compbuf_size_segments);
+		REG_WAIT(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE_CURRENT, compbuf_size_segments, 1, 100);
+		hubbub2->compbuf_size_segments = compbuf_size_segments;
+	}
+}
+
+static uint32_t convert_and_clamp(
+	uint32_t wm_ns,
+	uint32_t refclk_mhz,
+	uint32_t clamp_value)
+{
+	uint32_t ret_val = 0;
+	ret_val = wm_ns * refclk_mhz;
+
+	ret_val /= 1000;
+
+	if (ret_val > clamp_value)
+		ret_val = clamp_value;
+
+	return ret_val;
+}
+
+static bool hubbub32_program_urgent_watermarks(
+		struct hubbub *hubbub,
+		struct dcn_watermark_set *watermarks,
+		unsigned int refclk_mhz,
+		bool safe_to_lower)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t prog_wm_value;
+	bool wm_pending = false;
+
+	/* Repeat for watermark sets A, B, C and D. */
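+	/* A watermark value is applied immediately only when it is being raised
+	 * or when the caller asserts safe_to_lower; a requested decrease is
+	 * otherwise deferred and reported through wm_pending so it can be
+	 * re-applied once lowering is safe. The same pattern repeats for clock
+	 * states A through D below.
+	 */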
+	/* clock state A */
+	if (safe_to_lower || watermarks->a.urgent_ns > hubbub2->watermarks.a.urgent_ns) {
+		hubbub2->watermarks.a.urgent_ns = watermarks->a.urgent_ns;
+		prog_wm_value = convert_and_clamp(watermarks->a.urgent_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, 0,
+				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, prog_wm_value);
+
+		DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_A calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->a.urgent_ns, prog_wm_value);
+	} else if (watermarks->a.urgent_ns < hubbub2->watermarks.a.urgent_ns)
+		wm_pending = true;
+
+	/* determine the transfer time for a quantity of data for a particular requestor. */
+	if (safe_to_lower || watermarks->a.frac_urg_bw_flip
+			> hubbub2->watermarks.a.frac_urg_bw_flip) {
+		hubbub2->watermarks.a.frac_urg_bw_flip = watermarks->a.frac_urg_bw_flip;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_A, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_FLIP_A, watermarks->a.frac_urg_bw_flip);
+	} else if (watermarks->a.frac_urg_bw_flip
+			< hubbub2->watermarks.a.frac_urg_bw_flip)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->a.frac_urg_bw_nom
+			> hubbub2->watermarks.a.frac_urg_bw_nom) {
+		hubbub2->watermarks.a.frac_urg_bw_nom = watermarks->a.frac_urg_bw_nom;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_NOM_A, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_NOM_A, watermarks->a.frac_urg_bw_nom);
+	} else if (watermarks->a.frac_urg_bw_nom
+			< hubbub2->watermarks.a.frac_urg_bw_nom)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->a.urgent_latency_ns > hubbub2->watermarks.a.urgent_latency_ns) {
+		hubbub2->watermarks.a.urgent_latency_ns = watermarks->a.urgent_latency_ns;
+		prog_wm_value = convert_and_clamp(watermarks->a.urgent_latency_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A, 0,
+				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A, prog_wm_value);
+	} else if (watermarks->a.urgent_latency_ns < hubbub2->watermarks.a.urgent_latency_ns)
+		wm_pending = true;
+
+	/* clock state B */
+	if (safe_to_lower || watermarks->b.urgent_ns > hubbub2->watermarks.b.urgent_ns) {
+		hubbub2->watermarks.b.urgent_ns = watermarks->b.urgent_ns;
+		prog_wm_value = convert_and_clamp(watermarks->b.urgent_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, 0,
+				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, prog_wm_value);
+
+		DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_B calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->b.urgent_ns, prog_wm_value);
+	} else if (watermarks->b.urgent_ns < hubbub2->watermarks.b.urgent_ns)
+		wm_pending = true;
+
+	/* determine the transfer time for a quantity of data for a particular requestor. */
+	if (safe_to_lower || watermarks->b.frac_urg_bw_flip
+			> hubbub2->watermarks.b.frac_urg_bw_flip) {
+		hubbub2->watermarks.b.frac_urg_bw_flip = watermarks->b.frac_urg_bw_flip;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_B, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_FLIP_B, watermarks->b.frac_urg_bw_flip);
+	} else if (watermarks->b.frac_urg_bw_flip
+			< hubbub2->watermarks.b.frac_urg_bw_flip)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->b.frac_urg_bw_nom
+			> hubbub2->watermarks.b.frac_urg_bw_nom) {
+		hubbub2->watermarks.b.frac_urg_bw_nom = watermarks->b.frac_urg_bw_nom;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_NOM_B, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_NOM_B, watermarks->b.frac_urg_bw_nom);
+	} else if (watermarks->b.frac_urg_bw_nom
+			< hubbub2->watermarks.b.frac_urg_bw_nom)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->b.urgent_latency_ns > hubbub2->watermarks.b.urgent_latency_ns) {
+		hubbub2->watermarks.b.urgent_latency_ns = watermarks->b.urgent_latency_ns;
+		prog_wm_value = convert_and_clamp(watermarks->b.urgent_latency_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, 0,
+				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, prog_wm_value);
+	} else if (watermarks->b.urgent_latency_ns < hubbub2->watermarks.b.urgent_latency_ns)
+		wm_pending = true;
+
+	/* clock state C */
+	if (safe_to_lower || watermarks->c.urgent_ns > hubbub2->watermarks.c.urgent_ns) {
+		hubbub2->watermarks.c.urgent_ns = watermarks->c.urgent_ns;
+		prog_wm_value = convert_and_clamp(watermarks->c.urgent_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, 0,
+				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, prog_wm_value);
+
+		DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_C calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->c.urgent_ns, prog_wm_value);
+	} else if (watermarks->c.urgent_ns < hubbub2->watermarks.c.urgent_ns)
+		wm_pending = true;
+
+	/* determine the transfer time for a quantity of data for a particular requestor. */
+	if (safe_to_lower || watermarks->c.frac_urg_bw_flip
+			> hubbub2->watermarks.c.frac_urg_bw_flip) {
+		hubbub2->watermarks.c.frac_urg_bw_flip = watermarks->c.frac_urg_bw_flip;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_C, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_FLIP_C, watermarks->c.frac_urg_bw_flip);
+	} else if (watermarks->c.frac_urg_bw_flip
+			< hubbub2->watermarks.c.frac_urg_bw_flip)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->c.frac_urg_bw_nom
+			> hubbub2->watermarks.c.frac_urg_bw_nom) {
+		hubbub2->watermarks.c.frac_urg_bw_nom = watermarks->c.frac_urg_bw_nom;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_NOM_C, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_NOM_C, watermarks->c.frac_urg_bw_nom);
+	} else if (watermarks->c.frac_urg_bw_nom
+			< hubbub2->watermarks.c.frac_urg_bw_nom)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->c.urgent_latency_ns > hubbub2->watermarks.c.urgent_latency_ns) {
+		hubbub2->watermarks.c.urgent_latency_ns = watermarks->c.urgent_latency_ns;
+		prog_wm_value = convert_and_clamp(watermarks->c.urgent_latency_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, 0,
+				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, prog_wm_value);
+	} else if (watermarks->c.urgent_latency_ns < hubbub2->watermarks.c.urgent_latency_ns)
+		wm_pending = true;
+
+	/* clock state D */
+	if (safe_to_lower || watermarks->d.urgent_ns > hubbub2->watermarks.d.urgent_ns) {
+		hubbub2->watermarks.d.urgent_ns = watermarks->d.urgent_ns;
+		prog_wm_value = convert_and_clamp(watermarks->d.urgent_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, 0,
+				DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, prog_wm_value);
+
+		DC_LOG_BANDWIDTH_CALCS("URGENCY_WATERMARK_D calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->d.urgent_ns, prog_wm_value);
+	} else if (watermarks->d.urgent_ns < hubbub2->watermarks.d.urgent_ns)
+		wm_pending = true;
+
+	/* determine the transfer time for a quantity of data for a particular requestor. */
+	if (safe_to_lower || watermarks->d.frac_urg_bw_flip
+			> hubbub2->watermarks.d.frac_urg_bw_flip) {
+		hubbub2->watermarks.d.frac_urg_bw_flip = watermarks->d.frac_urg_bw_flip;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_D, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_FLIP_D, watermarks->d.frac_urg_bw_flip);
+	} else if (watermarks->d.frac_urg_bw_flip
+			< hubbub2->watermarks.d.frac_urg_bw_flip)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->d.frac_urg_bw_nom
+			> hubbub2->watermarks.d.frac_urg_bw_nom) {
+		hubbub2->watermarks.d.frac_urg_bw_nom = watermarks->d.frac_urg_bw_nom;
+
+		REG_SET(DCHUBBUB_ARB_FRAC_URG_BW_NOM_D, 0,
+				DCHUBBUB_ARB_FRAC_URG_BW_NOM_D, watermarks->d.frac_urg_bw_nom);
+	} else if (watermarks->d.frac_urg_bw_nom
+			< hubbub2->watermarks.d.frac_urg_bw_nom)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->d.urgent_latency_ns > hubbub2->watermarks.d.urgent_latency_ns) {
+		hubbub2->watermarks.d.urgent_latency_ns = watermarks->d.urgent_latency_ns;
+		prog_wm_value = convert_and_clamp(watermarks->d.urgent_latency_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, 0,
+				DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, prog_wm_value);
+	} else if (watermarks->d.urgent_latency_ns < hubbub2->watermarks.d.urgent_latency_ns)
+		wm_pending = true;
+
+	return wm_pending;
+}
+
+static bool hubbub32_program_stutter_watermarks(
+		struct hubbub *hubbub,
+		struct dcn_watermark_set *watermarks,
+		unsigned int refclk_mhz,
+		bool safe_to_lower)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t prog_wm_value;
+	bool wm_pending = false;
+
+	/* clock state A */
+	if (safe_to_lower || watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns
+			> hubbub2->watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns) {
+		hubbub2->watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns =
+				watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, 0,
+				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_A calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns, prog_wm_value);
+	} else if (watermarks->a.cstate_pstate.cstate_enter_plus_exit_ns
+			< hubbub2->watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->a.cstate_pstate.cstate_exit_ns
+			> hubbub2->watermarks.a.cstate_pstate.cstate_exit_ns) {
+		hubbub2->watermarks.a.cstate_pstate.cstate_exit_ns =
+				watermarks->a.cstate_pstate.cstate_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->a.cstate_pstate.cstate_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A, 0,
+				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_A calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->a.cstate_pstate.cstate_exit_ns, prog_wm_value);
+	} else if (watermarks->a.cstate_pstate.cstate_exit_ns
+			< hubbub2->watermarks.a.cstate_pstate.cstate_exit_ns)
+		wm_pending = true;
+
+	/* clock state B */
+	if (safe_to_lower || watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns
+			> hubbub2->watermarks.b.cstate_pstate.cstate_enter_plus_exit_ns) {
+		hubbub2->watermarks.b.cstate_pstate.cstate_enter_plus_exit_ns =
+				watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B, 0,
+				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_B calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns, prog_wm_value);
+	} else if (watermarks->b.cstate_pstate.cstate_enter_plus_exit_ns
+			< hubbub2->watermarks.b.cstate_pstate.cstate_enter_plus_exit_ns)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->b.cstate_pstate.cstate_exit_ns
+			> hubbub2->watermarks.b.cstate_pstate.cstate_exit_ns) {
+		hubbub2->watermarks.b.cstate_pstate.cstate_exit_ns =
+				watermarks->b.cstate_pstate.cstate_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->b.cstate_pstate.cstate_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B, 0,
+				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_B calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->b.cstate_pstate.cstate_exit_ns, prog_wm_value);
+	} else if (watermarks->b.cstate_pstate.cstate_exit_ns
+			< hubbub2->watermarks.b.cstate_pstate.cstate_exit_ns)
+		wm_pending = true;
+
+	/* clock state C */
+	if (safe_to_lower || watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns
+			> hubbub2->watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns) {
+		hubbub2->watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns =
+				watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C, 0,
+				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_C calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns, prog_wm_value);
+	} else if (watermarks->c.cstate_pstate.cstate_enter_plus_exit_ns
+			< hubbub2->watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->c.cstate_pstate.cstate_exit_ns
+			> hubbub2->watermarks.c.cstate_pstate.cstate_exit_ns) {
+		hubbub2->watermarks.c.cstate_pstate.cstate_exit_ns =
+				watermarks->c.cstate_pstate.cstate_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->c.cstate_pstate.cstate_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C, 0,
+				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_C calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->c.cstate_pstate.cstate_exit_ns, prog_wm_value);
+	} else if (watermarks->c.cstate_pstate.cstate_exit_ns
+			< hubbub2->watermarks.c.cstate_pstate.cstate_exit_ns)
+		wm_pending = true;
+
+	/* clock state D */
+	if (safe_to_lower || watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns
+			> hubbub2->watermarks.d.cstate_pstate.cstate_enter_plus_exit_ns) {
+		hubbub2->watermarks.d.cstate_pstate.cstate_enter_plus_exit_ns =
+				watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D, 0,
+				DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_ENTER_EXIT_WATERMARK_D calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns, prog_wm_value);
+	} else if (watermarks->d.cstate_pstate.cstate_enter_plus_exit_ns
+			< hubbub2->watermarks.d.cstate_pstate.cstate_enter_plus_exit_ns)
+		wm_pending = true;
+
+	if (safe_to_lower || watermarks->d.cstate_pstate.cstate_exit_ns
+			> hubbub2->watermarks.d.cstate_pstate.cstate_exit_ns) {
+		hubbub2->watermarks.d.cstate_pstate.cstate_exit_ns =
+				watermarks->d.cstate_pstate.cstate_exit_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->d.cstate_pstate.cstate_exit_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D, 0,
+				DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("SR_EXIT_WATERMARK_D calculated =%d\n"
+			"HW register value = 0x%x\n",
+			watermarks->d.cstate_pstate.cstate_exit_ns, prog_wm_value);
+	} else if (watermarks->d.cstate_pstate.cstate_exit_ns
+			< hubbub2->watermarks.d.cstate_pstate.cstate_exit_ns)
+		wm_pending = true;
+
+	return wm_pending;
+}
+
+
+static bool hubbub32_program_pstate_watermarks(
+		struct hubbub *hubbub,
+		struct dcn_watermark_set *watermarks,
+		unsigned int refclk_mhz,
+		bool safe_to_lower)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t prog_wm_value;
+
+	bool wm_pending = false;
+
+	/* Section for UCLK_PSTATE_CHANGE_WATERMARKS */
+	/* clock state A */
+	if (safe_to_lower || watermarks->a.cstate_pstate.pstate_change_ns
+			> hubbub2->watermarks.a.cstate_pstate.pstate_change_ns) {
+		hubbub2->watermarks.a.cstate_pstate.pstate_change_ns =
+				watermarks->a.cstate_pstate.pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->a.cstate_pstate.pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A, 0,
+				DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_A calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->a.cstate_pstate.pstate_change_ns, prog_wm_value);
+	} else if (watermarks->a.cstate_pstate.pstate_change_ns
+			< hubbub2->watermarks.a.cstate_pstate.pstate_change_ns)
+		wm_pending = true;
+
+	/* clock state B */
+	if (safe_to_lower || watermarks->b.cstate_pstate.pstate_change_ns
+			> hubbub2->watermarks.b.cstate_pstate.pstate_change_ns) {
+		hubbub2->watermarks.b.cstate_pstate.pstate_change_ns =
+				watermarks->b.cstate_pstate.pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->b.cstate_pstate.pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B, 0,
+				DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_B calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->b.cstate_pstate.pstate_change_ns, prog_wm_value);
+	} else if (watermarks->b.cstate_pstate.pstate_change_ns
+			< hubbub2->watermarks.b.cstate_pstate.pstate_change_ns)
+		wm_pending = true;
+
+	/* clock state C */
+	if (safe_to_lower || watermarks->c.cstate_pstate.pstate_change_ns
+			> hubbub2->watermarks.c.cstate_pstate.pstate_change_ns) {
+		hubbub2->watermarks.c.cstate_pstate.pstate_change_ns =
+				watermarks->c.cstate_pstate.pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->c.cstate_pstate.pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C, 0,
+				DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_C calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->c.cstate_pstate.pstate_change_ns, prog_wm_value);
+	} else if (watermarks->c.cstate_pstate.pstate_change_ns
+			< hubbub2->watermarks.c.cstate_pstate.pstate_change_ns)
+		wm_pending = true;
+
+	/* clock state D */
+	if (safe_to_lower || watermarks->d.cstate_pstate.pstate_change_ns
+			> hubbub2->watermarks.d.cstate_pstate.pstate_change_ns) {
+		hubbub2->watermarks.d.cstate_pstate.pstate_change_ns =
+				watermarks->d.cstate_pstate.pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->d.cstate_pstate.pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D, 0,
+				DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("DRAM_CLK_CHANGE_WATERMARK_D calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->d.cstate_pstate.pstate_change_ns, prog_wm_value);
+	} else if (watermarks->d.cstate_pstate.pstate_change_ns
+			< hubbub2->watermarks.d.cstate_pstate.pstate_change_ns)
+		wm_pending = true;
+
+	/* Section for FCLK_PSTATE_CHANGE_WATERMARKS */
+	/* clock state A */
+	if (safe_to_lower || watermarks->a.cstate_pstate.fclk_pstate_change_ns
+			> hubbub2->watermarks.a.cstate_pstate.fclk_pstate_change_ns) {
+		hubbub2->watermarks.a.cstate_pstate.fclk_pstate_change_ns =
+				watermarks->a.cstate_pstate.fclk_pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->a.cstate_pstate.fclk_pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A, 0,
+				DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("FCLK_CHANGE_WATERMARK_A calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->a.cstate_pstate.fclk_pstate_change_ns, prog_wm_value);
+	} else if (watermarks->a.cstate_pstate.fclk_pstate_change_ns
+			< hubbub2->watermarks.a.cstate_pstate.fclk_pstate_change_ns)
+		wm_pending = true;
+
+	/* clock state B */
+	if (safe_to_lower || watermarks->b.cstate_pstate.fclk_pstate_change_ns
+			> hubbub2->watermarks.b.cstate_pstate.fclk_pstate_change_ns) {
+		hubbub2->watermarks.b.cstate_pstate.fclk_pstate_change_ns =
+				watermarks->b.cstate_pstate.fclk_pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->b.cstate_pstate.fclk_pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B, 0,
+				DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("FCLK_CHANGE_WATERMARK_B calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->b.cstate_pstate.fclk_pstate_change_ns, prog_wm_value);
+	} else if (watermarks->b.cstate_pstate.fclk_pstate_change_ns
+			< hubbub2->watermarks.b.cstate_pstate.fclk_pstate_change_ns)
+		wm_pending = true;
+
+	/* clock state C */
+	if (safe_to_lower || watermarks->c.cstate_pstate.fclk_pstate_change_ns
+			> hubbub2->watermarks.c.cstate_pstate.fclk_pstate_change_ns) {
+		hubbub2->watermarks.c.cstate_pstate.fclk_pstate_change_ns =
+				watermarks->c.cstate_pstate.fclk_pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->c.cstate_pstate.fclk_pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C, 0,
+				DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("FCLK_CHANGE_WATERMARK_C calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->c.cstate_pstate.fclk_pstate_change_ns, prog_wm_value);
+	} else if (watermarks->c.cstate_pstate.fclk_pstate_change_ns
+			< hubbub2->watermarks.c.cstate_pstate.fclk_pstate_change_ns)
+		wm_pending = true;
+
+	/* clock state D */
+	if (safe_to_lower || watermarks->d.cstate_pstate.fclk_pstate_change_ns
+			> hubbub2->watermarks.d.cstate_pstate.fclk_pstate_change_ns) {
+		hubbub2->watermarks.d.cstate_pstate.fclk_pstate_change_ns =
+				watermarks->d.cstate_pstate.fclk_pstate_change_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->d.cstate_pstate.fclk_pstate_change_ns,
+				refclk_mhz, 0xffff);
+		REG_SET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D, 0,
+				DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("FCLK_CHANGE_WATERMARK_D calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->d.cstate_pstate.fclk_pstate_change_ns, prog_wm_value);
+	} else if (watermarks->d.cstate_pstate.fclk_pstate_change_ns
+			< hubbub2->watermarks.d.cstate_pstate.fclk_pstate_change_ns)
+		wm_pending = true;
+
+	return wm_pending;
+}
+
+
+static bool hubbub32_program_usr_watermarks(
+		struct hubbub *hubbub,
+		struct dcn_watermark_set *watermarks,
+		unsigned int refclk_mhz,
+		bool safe_to_lower)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t prog_wm_value;
+
+	bool wm_pending = false;
+
+	/* clock state A */
+	if (safe_to_lower || watermarks->a.usr_retraining_ns
+			> hubbub2->watermarks.a.usr_retraining_ns) {
+		hubbub2->watermarks.a.usr_retraining_ns = watermarks->a.usr_retraining_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->a.usr_retraining_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A, 0,
+				DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("USR_RETRAINING_WATERMARK_A calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->a.usr_retraining_ns, prog_wm_value);
+	} else if (watermarks->a.usr_retraining_ns
+			< hubbub2->watermarks.a.usr_retraining_ns)
+		wm_pending = true;
+
+	/* clock state B */
+	if (safe_to_lower || watermarks->b.usr_retraining_ns
+			> hubbub2->watermarks.b.usr_retraining_ns) {
+		hubbub2->watermarks.b.usr_retraining_ns = watermarks->b.usr_retraining_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->b.usr_retraining_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B, 0,
+				DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("USR_RETRAINING_WATERMARK_B calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->b.usr_retraining_ns, prog_wm_value);
+	} else if (watermarks->b.usr_retraining_ns
+			< hubbub2->watermarks.b.usr_retraining_ns)
+		wm_pending = true;
+
+	/* clock state C */
+	if (safe_to_lower || watermarks->c.usr_retraining_ns
+			> hubbub2->watermarks.c.usr_retraining_ns) {
+		hubbub2->watermarks.c.usr_retraining_ns =
+				watermarks->c.usr_retraining_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->c.usr_retraining_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C, 0,
+				DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("USR_RETRAINING_WATERMARK_C calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->c.usr_retraining_ns, prog_wm_value);
+	} else if (watermarks->c.usr_retraining_ns
+			< hubbub2->watermarks.c.usr_retraining_ns)
+		wm_pending = true;
+
+	/* clock state D */
+	if (safe_to_lower || watermarks->d.usr_retraining_ns
+			> hubbub2->watermarks.d.usr_retraining_ns) {
+		hubbub2->watermarks.d.usr_retraining_ns =
+				watermarks->d.usr_retraining_ns;
+		prog_wm_value = convert_and_clamp(
+				watermarks->d.usr_retraining_ns,
+				refclk_mhz, 0x3fff);
+		REG_SET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D, 0,
+				DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D, prog_wm_value);
+		DC_LOG_BANDWIDTH_CALCS("USR_RETRAINING_WATERMARK_D calculated =%d\n"
+			"HW register value = 0x%x\n\n",
+			watermarks->d.usr_retraining_ns, prog_wm_value);
+	} else if (watermarks->d.usr_retraining_ns
+			< hubbub2->watermarks.d.usr_retraining_ns)
+		wm_pending = true;
+
+	return wm_pending;
+}
+
+void hubbub32_force_usr_retraining_allow(struct hubbub *hubbub, bool allow)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+
+	/*
+	 * DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_ENABLE = 1 enables forcing of the value
+	 * DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_VALUE = 1 or 0, the value to be forced while forcing is enabled
+	 */
+
+	REG_UPDATE_2(DCHUBBUB_ARB_USR_RETRAINING_CNTL,
+			DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_VALUE, allow,
+			DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_ENABLE, allow);
+}
+
+static bool hubbub32_program_watermarks(
+		struct hubbub *hubbub,
+		struct dcn_watermark_set *watermarks,
+		unsigned int refclk_mhz,
+		bool safe_to_lower)
+{
+	bool wm_pending = false;
+
+	if (hubbub32_program_urgent_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower))
+		wm_pending = true;
+
+	if (hubbub32_program_stutter_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower))
+		wm_pending = true;
+
+	if (hubbub32_program_pstate_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower))
+		wm_pending = true;
+
+	if (hubbub32_program_usr_watermarks(hubbub, watermarks, refclk_mhz, safe_to_lower))
+		wm_pending = true;
+
+	/*
+	 * The DCHub arbiter has a mechanism to dynamically rate limit the DCHub request stream to the fabric.
+	 * If the memory controller is fully utilized and the DCHub requestors are
+	 * well ahead of their amortized schedule, then it is safe to prevent the next winner
+	 * from being committed and sent to the fabric.
+	 * The utilization of the memory controller is approximated by ensuring that
+	 * the number of outstanding requests is greater than a threshold specified
+	 * by the ARB_MIN_REQ_OUTSTANDING. To determine that the DCHub requestors are well ahead of the amortized schedule,
+	 * the slack of the next winner is compared with the ARB_SAT_LEVEL in DLG RefClk cycles.
+	 *
+	 * TODO: Revisit the request limit after figuring out the right number. The request limit for RM isn't decided yet,
+	 * so set the maximum value (0x1FF) to turn it off for now.
+	 */
+	/*REG_SET(DCHUBBUB_ARB_SAT_LEVEL, 0,
+			DCHUBBUB_ARB_SAT_LEVEL, 60 * refclk_mhz);
+	REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+			DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
+
+	hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+	hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
+
+	return wm_pending;
+}
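Each hubbub32_program_*_watermarks() helper above repeats the same per-field rule: program the register immediately when the watermark is rising or when lowering is explicitly allowed, otherwise leave the register alone and report the lower value as pending so the caller can apply it later. A condensed sketch of that rule, using hypothetical names (new_wm, cur_wm) rather than code from this patch:

static bool program_one_watermark(uint32_t new_wm, uint32_t *cur_wm,
		bool safe_to_lower)
{
	if (safe_to_lower || new_wm > *cur_wm) {
		/* cache the new value; the REG_SET of the converted value goes here */
		*cur_wm = new_wm;
		return false;	/* nothing left pending */
	}

	/* Lowering requested but not yet safe: keep the old programming and
	 * report that a lower watermark is still pending.
	 */
	return new_wm < *cur_wm;
}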
+
+/* Copy values from WM set A to all other sets */
+void hubbub32_init_watermarks(struct hubbub *hubbub)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t reg;
+
+	reg = REG_READ(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A);
+	REG_WRITE(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_A);
+	REG_WRITE(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_FRAC_URG_BW_NOM_A);
+	REG_WRITE(DCHUBBUB_ARB_FRAC_URG_BW_NOM_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_FRAC_URG_BW_NOM_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_FRAC_URG_BW_NOM_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A);
+	REG_WRITE(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A);
+	REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A);
+	REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A);
+	REG_WRITE(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A);
+	REG_WRITE(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D, reg);
+
+	reg = REG_READ(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A);
+	REG_WRITE(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B, reg);
+	REG_WRITE(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C, reg);
+	REG_WRITE(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D, reg);
+}
+
+void hubbub32_wm_read_state(struct hubbub *hubbub,
+		struct dcn_hubbub_wm *wm)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	struct dcn_hubbub_wm_set *s;
+
+	memset(wm, 0, sizeof(struct dcn_hubbub_wm));
+
+	s = &wm->sets[0];
+	s->wm_set = 0;
+	REG_GET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A,
+			DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, &s->data_urgent);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A,
+			DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_A, &s->sr_enter);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A,
+			DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_A, &s->sr_exit);
+
+	REG_GET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A,
+			 DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A, &s->dram_clk_chanage);
+
+	REG_GET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A,
+			 DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A, &s->usr_retrain);
+
+	REG_GET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A,
+			 DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A, &s->fclk_pstate_change);
+
+	s = &wm->sets[1];
+	s->wm_set = 1;
+	REG_GET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B,
+			DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, &s->data_urgent);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B,
+			DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_B, &s->sr_enter);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B,
+			DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_B, &s->sr_exit);
+
+	REG_GET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B,
+			DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B, &s->dram_clk_chanage);
+
+	REG_GET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B,
+			 DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B, &s->usr_retrain);
+
+	REG_GET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B,
+			DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B, &s->fclk_pstate_change);
+
+	s = &wm->sets[2];
+	s->wm_set = 2;
+	REG_GET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C,
+			DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, &s->data_urgent);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C,
+			DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_C, &s->sr_enter);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C,
+			DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_C, &s->sr_exit);
+
+	REG_GET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C,
+			DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C, &s->dram_clk_chanage);
+
+	REG_GET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C,
+			 DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C, &s->usr_retrain);
+
+	REG_GET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C,
+			DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C, &s->fclk_pstate_change);
+
+	s = &wm->sets[3];
+	s->wm_set = 3;
+	REG_GET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D,
+			DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, &s->data_urgent);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D,
+			DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_D, &s->sr_enter);
+
+	REG_GET(DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D,
+			DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_D, &s->sr_exit);
+
+	REG_GET(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D,
+			DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D, &s->dram_clk_chanage);
+
+	REG_GET(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D,
+			 DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D, &s->usr_retrain);
+
+	REG_GET(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D,
+			DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D, &s->fclk_pstate_change);
+}
+
+void hubbub32_force_wm_propagate_to_pipes(struct hubbub *hubbub)
+{
+	struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t refclk_mhz = hubbub->ctx->dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000;
+	uint32_t prog_wm_value = convert_and_clamp(hubbub2->watermarks.a.urgent_ns,
+			refclk_mhz, 0x3fff);
+
+	REG_SET(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, 0,
+			DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, prog_wm_value);
+}
+
+static const struct hubbub_funcs hubbub32_funcs = {
+	.update_dchub = hubbub2_update_dchub,
+	.init_dchub_sys_ctx = hubbub3_init_dchub_sys_ctx,
+	.init_vm_ctx = hubbub2_init_vm_ctx,
+	.dcc_support_swizzle = hubbub3_dcc_support_swizzle,
+	.dcc_support_pixel_format = hubbub2_dcc_support_pixel_format,
+	.get_dcc_compression_cap = hubbub3_get_dcc_compression_cap,
+	.wm_read_state = hubbub32_wm_read_state,
+	.get_dchub_ref_freq = hubbub2_get_dchub_ref_freq,
+	.program_watermarks = hubbub32_program_watermarks,
+	.allow_self_refresh_control = hubbub1_allow_self_refresh_control,
+	.is_allow_self_refresh_enabled = hubbub1_is_allow_self_refresh_enabled,
+	.force_wm_propagate_to_pipes = hubbub32_force_wm_propagate_to_pipes,
+	.force_pstate_change_control = hubbub3_force_pstate_change_control,
+	.init_watermarks = hubbub32_init_watermarks,
+	.program_det_size = dcn32_program_det_size,
+	.program_compbuf_size = dcn32_program_compbuf_size,
+	.init_crb = dcn32_init_crb,
+	.hubbub_read_state = hubbub2_read_state,
+	.force_usr_retraining_allow = hubbub32_force_usr_retraining_allow,
+};
+
+void hubbub32_construct(struct dcn20_hubbub *hubbub2,
+	struct dc_context *ctx,
+	const struct dcn_hubbub_registers *hubbub_regs,
+	const struct dcn_hubbub_shift *hubbub_shift,
+	const struct dcn_hubbub_mask *hubbub_mask,
+	int det_size_kb,
+	int pixel_chunk_size_kb,
+	int config_return_buffer_size_kb)
+{
+	hubbub2->base.ctx = ctx;
+	hubbub2->base.funcs = &hubbub32_funcs;
+	hubbub2->regs = hubbub_regs;
+	hubbub2->shifts = hubbub_shift;
+	hubbub2->masks = hubbub_mask;
+
+	hubbub2->debug_test_index_pstate = 0xB;
+	hubbub2->detile_buf_size = det_size_kb * 1024;
+	hubbub2->pixel_chunk_size = pixel_chunk_size_kb * 1024;
+	hubbub2->crb_size_segs = config_return_buffer_size_kb / DCN32_CRB_SEGMENT_SIZE_KB;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.h
new file mode 100644
index 000000000000..8d3ea8ee5b3b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.h
@@ -0,0 +1,172 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_HUBBUB_DCN32_H__
+#define __DC_HUBBUB_DCN32_H__
+
+#include "dcn21/dcn21_hubbub.h"
+
+#define HUBBUB_REG_LIST_DCN32(id)\
+	SR(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A),\
+	SR(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B),\
+	SR(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C),\
+	SR(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D),\
+	SR(DCHUBBUB_ARB_WATERMARK_CHANGE_CNTL),\
+	SR(DCHUBBUB_ARB_DRAM_STATE_CNTL),\
+	SR(DCHUBBUB_ARB_SAT_LEVEL),\
+	SR(DCHUBBUB_ARB_DF_REQ_OUTSTAND),\
+	SR(DCHUBBUB_GLOBAL_TIMER_CNTL), \
+	SR(DCHUBBUB_SOFT_RESET),\
+	SR(DCHUBBUB_CRC_CTRL), \
+	SR(DCN_VM_FB_LOCATION_BASE),\
+	SR(DCN_VM_FB_LOCATION_TOP),\
+	SR(DCN_VM_FB_OFFSET),\
+	SR(DCN_VM_AGP_BOT),\
+	SR(DCN_VM_AGP_TOP),\
+	SR(DCN_VM_AGP_BASE),\
+	HUBBUB_SR_WATERMARK_REG_LIST(), \
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_NOM_A),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_NOM_B),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_NOM_C),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_NOM_D),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_A),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_B),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_C),\
+	SR(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_D),\
+	SR(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A),\
+	SR(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B),\
+	SR(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C),\
+	SR(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D),\
+	SR(DCHUBBUB_DET0_CTRL),\
+	SR(DCHUBBUB_DET1_CTRL),\
+	SR(DCHUBBUB_DET2_CTRL),\
+	SR(DCHUBBUB_DET3_CTRL),\
+	SR(DCHUBBUB_COMPBUF_CTRL),\
+	SR(COMPBUF_RESERVED_SPACE),\
+	SR(DCHUBBUB_ARB_USR_RETRAINING_CNTL),\
+	SR(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A),\
+	SR(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B),\
+	SR(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C),\
+	SR(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D),\
+	SR(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A),\
+	SR(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B),\
+	SR(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C),\
+	SR(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D),\
+	SR(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A),\
+	SR(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B),\
+	SR(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C),\
+	SR(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D),\
+	SR(DCN_VM_FAULT_ADDR_MSB),\
+	SR(DCN_VM_FAULT_ADDR_LSB),\
+	SR(DCN_VM_FAULT_CNTL),\
+	SR(DCN_VM_FAULT_STATUS)
+
+#define HUBBUB_MASK_SH_LIST_DCN32(mask_sh)\
+	HUBBUB_SF(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_ENABLE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_SOFT_RESET, DCHUBBUB_GLOBAL_SOFT_RESET, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_WATERMARK_CHANGE_CNTL, DCHUBBUB_ARB_WATERMARK_CHANGE_REQUEST, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_WATERMARK_CHANGE_CNTL, DCHUBBUB_ARB_WATERMARK_CHANGE_DONE_INTERRUPT_DISABLE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DRAM_STATE_CNTL, DCHUBBUB_ARB_ALLOW_SELF_REFRESH_FORCE_VALUE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DRAM_STATE_CNTL, DCHUBBUB_ARB_ALLOW_SELF_REFRESH_FORCE_ENABLE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DRAM_STATE_CNTL, DCHUBBUB_ARB_ALLOW_PSTATE_CHANGE_FORCE_VALUE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DRAM_STATE_CNTL, DCHUBBUB_ARB_ALLOW_PSTATE_CHANGE_FORCE_ENABLE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_SAT_LEVEL, DCHUBBUB_ARB_SAT_LEVEL, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DF_REQ_OUTSTAND, DCHUBBUB_ARB_MIN_REQ_OUTSTAND, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_A, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_B, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_C, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, DCHUBBUB_ARB_DATA_URGENCY_WATERMARK_D, mask_sh), \
+	HUBBUB_MASK_SH_LIST_STUTTER(mask_sh), \
+	HUBBUB_SF(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_REFDIV, mask_sh), \
+	HUBBUB_SF(DCN_VM_FB_LOCATION_BASE, FB_BASE, mask_sh), \
+	HUBBUB_SF(DCN_VM_FB_LOCATION_TOP, FB_TOP, mask_sh), \
+	HUBBUB_SF(DCN_VM_FB_OFFSET, FB_OFFSET, mask_sh), \
+	HUBBUB_SF(DCN_VM_AGP_BOT, AGP_BOT, mask_sh), \
+	HUBBUB_SF(DCN_VM_AGP_TOP, AGP_TOP, mask_sh), \
+	HUBBUB_SF(DCN_VM_AGP_BASE, AGP_BASE, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_A, DCHUBBUB_ARB_FRAC_URG_BW_FLIP_A, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_B, DCHUBBUB_ARB_FRAC_URG_BW_FLIP_B, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_C, DCHUBBUB_ARB_FRAC_URG_BW_FLIP_C, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_FLIP_D, DCHUBBUB_ARB_FRAC_URG_BW_FLIP_D, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_NOM_A, DCHUBBUB_ARB_FRAC_URG_BW_NOM_A, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_NOM_B, DCHUBBUB_ARB_FRAC_URG_BW_NOM_B, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_NOM_C, DCHUBBUB_ARB_FRAC_URG_BW_NOM_C, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_FRAC_URG_BW_NOM_D, DCHUBBUB_ARB_FRAC_URG_BW_NOM_D, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A, DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_A, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_B, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_C, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, DCHUBBUB_ARB_REFCYC_PER_TRIP_TO_MEMORY_D, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET0_CTRL, DET0_SIZE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET0_CTRL, DET0_SIZE_CURRENT, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET1_CTRL, DET1_SIZE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET1_CTRL, DET1_SIZE_CURRENT, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET2_CTRL, DET2_SIZE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET2_CTRL, DET2_SIZE_CURRENT, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET3_CTRL, DET3_SIZE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_DET3_CTRL, DET3_SIZE_CURRENT, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE_CURRENT, mask_sh),\
+	HUBBUB_SF(COMPBUF_RESERVED_SPACE, COMPBUF_RESERVED_SPACE_64B, mask_sh),\
+	HUBBUB_SF(COMPBUF_RESERVED_SPACE, COMPBUF_RESERVED_SPACE_ZS, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_CNTL, DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_VALUE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_CNTL, DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_ENABLE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_CNTL, DCHUBBUB_ARB_DO_NOT_FORCE_ALLOW_USR_RETRAINING_DURING_PSTATE_CHANGE_REQUEST, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_CNTL, DCHUBBUB_ARB_DO_NOT_FORCE_ALLOW_USR_RETRAINING_DURING_PRE_CSTATE, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A, DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B, DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C, DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C, mask_sh), \
+	HUBBUB_SF(DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D, DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A, DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B, DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C, DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D, DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A, DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B, DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C, DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C, mask_sh),\
+	HUBBUB_SF(DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D, DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_ADDR_MSB, DCN_VM_FAULT_ADDR_MSB, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_ADDR_LSB, DCN_VM_FAULT_ADDR_LSB, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_CNTL, DCN_VM_ERROR_STATUS_CLEAR, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_CNTL, DCN_VM_ERROR_STATUS_MODE, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_CNTL, DCN_VM_ERROR_INTERRUPT_ENABLE, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_CNTL, DCN_VM_RANGE_FAULT_DISABLE, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_CNTL, DCN_VM_PRQ_FAULT_DISABLE, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_STATUS, DCN_VM_ERROR_STATUS, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_STATUS, DCN_VM_ERROR_VMID, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_STATUS, DCN_VM_ERROR_TABLE_LEVEL, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_STATUS, DCN_VM_ERROR_PIPE, mask_sh), \
+	HUBBUB_SF(DCN_VM_FAULT_STATUS, DCN_VM_ERROR_INTERRUPT_STATUS, mask_sh)
+
+
+void hubbub32_construct(struct dcn20_hubbub *hubbub2,
+	struct dc_context *ctx,
+	const struct dcn_hubbub_registers *hubbub_regs,
+	const struct dcn_hubbub_shift *hubbub_shift,
+	const struct dcn_hubbub_mask *hubbub_mask,
+	int det_size_kb,
+	int pixel_chunk_size_kb,
+	int config_return_buffer_size_kb);
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c
new file mode 100644
index 000000000000..0a7d64306481
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.c
@@ -0,0 +1,148 @@
+/*
+ * Copyright 2012-20 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "dce_calcs.h"
+#include "reg_helper.h"
+#include "basics/conversion.h"
+#include "dcn32_hubp.h"
+
+#define REG(reg)\
+	hubp2->hubp_regs->reg
+
+#define CTX \
+	hubp2->base.ctx
+
+#undef FN
+#define FN(reg_name, field_name) \
+	hubp2->hubp_shift->field_name, hubp2->hubp_mask->field_name
+
+void hubp32_update_force_pstate_disallow(struct hubp *hubp, bool pstate_disallow)
+{
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+	REG_UPDATE_2(UCLK_PSTATE_FORCE,
+			DATA_UCLK_PSTATE_FORCE_EN, pstate_disallow,
+			DATA_UCLK_PSTATE_FORCE_VALUE, 0);
+}
+
+void hubp32_update_mall_sel(struct hubp *hubp, uint32_t mall_sel)
+{
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	// Also cache cursor in MALL if using MALL for SS
+	REG_UPDATE_2(DCHUBP_MALL_CONFIG, USE_MALL_SEL, mall_sel,
+			USE_MALL_FOR_CURSOR, mall_sel == 2 ? 1 : 0);
+}
+
+void hubp32_prepare_subvp_buffering(struct hubp *hubp, bool enable)
+{
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+	REG_UPDATE(DCHUBP_VMPG_CONFIG, FORCE_ONE_ROW_FOR_FRAME, enable);
+
+	/* Programming guide suggests CURSOR_REQ_MODE = 1 for SubVP:
+	 * For Pstate change using the MALL with sub-viewport buffering,
+	 * the cursor does not use the MALL (USE_MALL_FOR_CURSOR is ignored)
+	 * and sub-viewport positioning by Display FW has to avoid the cursor
+	 * requests to DRAM (set CURSOR_REQ_MODE = 1 to minimize this exclusion).
+	 *
+	 * CURSOR_REQ_MODE = 1 begins fetching cursor data at the beginning of display prefetch.
+	 * Setting this should allow the sub-viewport position to always avoid the cursor because
+	 * we do not allow the sub-viewport region to overlap with display prefetch (i.e. during blank).
+	 */
+	REG_UPDATE(CURSOR_CONTROL, CURSOR_REQ_MODE, enable);
+}
+
+void hubp32_phantom_hubp_post_enable(struct hubp *hubp)
+{
+	uint32_t reg_val;
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	REG_UPDATE(DCHUBP_CNTL, HUBP_BLANK_EN, 1);
+	reg_val = REG_READ(DCHUBP_CNTL);
+	if (reg_val) {
+		/* init sequence workaround: if HUBP is
+		 * power gated, this wait would time out.
+		 *
+		 * we just wrote the register to a non-zero value;
+		 * if the readback stays 0, it means HUBP is gated
+		 */
+		REG_WAIT(DCHUBP_CNTL,
+				HUBP_NO_OUTSTANDING_REQ, 1,
+				1, 200);
+	}
+}
+
+static struct hubp_funcs dcn32_hubp_funcs = {
+	.hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
+	.hubp_is_triplebuffer_enabled = hubp2_is_triplebuffer_enabled,
+	.hubp_program_surface_flip_and_addr = hubp3_program_surface_flip_and_addr,
+	.hubp_program_surface_config = hubp3_program_surface_config,
+	.hubp_is_flip_pending = hubp2_is_flip_pending,
+	.hubp_setup = hubp3_setup,
+	.hubp_setup_interdependent = hubp2_setup_interdependent,
+	.hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+	.set_blank = hubp2_set_blank,
+	.dcc_control = hubp3_dcc_control,
+	.mem_program_viewport = min_set_viewport,
+	.set_cursor_attributes	= hubp2_cursor_set_attributes,
+	.set_cursor_position	= hubp2_cursor_set_position,
+	.hubp_clk_cntl = hubp2_clk_cntl,
+	.hubp_vtg_sel = hubp2_vtg_sel,
+	.dmdata_set_attributes = hubp3_dmdata_set_attributes,
+	.dmdata_load = hubp2_dmdata_load,
+	.dmdata_status_done = hubp2_dmdata_status_done,
+	.hubp_read_state = hubp3_read_state,
+	.hubp_clear_underflow = hubp2_clear_underflow,
+	.hubp_set_flip_control_surface_gsl = hubp2_set_flip_control_surface_gsl,
+	.hubp_init = hubp3_init,
+	.set_unbounded_requesting = hubp31_set_unbounded_requesting,
+	.hubp_soft_reset = hubp31_soft_reset,
+	.hubp_in_blank = hubp1_in_blank,
+	.hubp_update_force_pstate_disallow = hubp32_update_force_pstate_disallow,
+	.phantom_hubp_post_enable = hubp32_phantom_hubp_post_enable,
+	.hubp_update_mall_sel = hubp32_update_mall_sel,
+	.hubp_prepare_subvp_buffering = hubp32_prepare_subvp_buffering,
+	.hubp_set_flip_int = hubp1_set_flip_int
+};
+
+bool hubp32_construct(
+	struct dcn20_hubp *hubp2,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dcn_hubp2_registers *hubp_regs,
+	const struct dcn_hubp2_shift *hubp_shift,
+	const struct dcn_hubp2_mask *hubp_mask)
+{
+	hubp2->base.funcs = &dcn32_hubp_funcs;
+	hubp2->base.ctx = ctx;
+	hubp2->hubp_regs = hubp_regs;
+	hubp2->hubp_shift = hubp_shift;
+	hubp2->hubp_mask = hubp_mask;
+	hubp2->base.inst = inst;
+	hubp2->base.opp_id = OPP_ID_INVALID;
+	hubp2->base.mpcc_id = 0xf;
+
+	return true;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h
new file mode 100644
index 000000000000..00b4211389c2
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubp.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright 2012-20 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_HUBP_DCN32_H__
+#define __DC_HUBP_DCN32_H__
+
+#include "dcn20/dcn20_hubp.h"
+#include "dcn21/dcn21_hubp.h"
+#include "dcn30/dcn30_hubp.h"
+#include "dcn31/dcn31_hubp.h"
+
+#define HUBP_REG_LIST_DCN32(id)\
+	HUBP_REG_LIST_DCN30(id),\
+	SRI(DCHUBP_MALL_CONFIG, HUBP, id),\
+	SRI(DCHUBP_VMPG_CONFIG, HUBP, id),\
+	SRI(UCLK_PSTATE_FORCE, HUBPREQ, id)
+
+#define HUBP_MASK_SH_LIST_DCN32(mask_sh)\
+	HUBP_MASK_SH_LIST_DCN31(mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_MALL_CONFIG, USE_MALL_SEL, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_MALL_CONFIG, USE_MALL_FOR_CURSOR, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_VMPG_CONFIG, VMPG_SIZE, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_VMPG_CONFIG, PTE_BUFFER_MODE, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_VMPG_CONFIG, BIGK_FRAGMENT_SIZE, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_VMPG_CONFIG, FORCE_ONE_ROW_FOR_FRAME, mask_sh),\
+	HUBP_SF(HUBPREQ0_UCLK_PSTATE_FORCE, DATA_UCLK_PSTATE_FORCE_EN, mask_sh),\
+	HUBP_SF(HUBPREQ0_UCLK_PSTATE_FORCE, DATA_UCLK_PSTATE_FORCE_VALUE, mask_sh),\
+	HUBP_SF(HUBPREQ0_UCLK_PSTATE_FORCE, CURSOR_UCLK_PSTATE_FORCE_EN, mask_sh),\
+	HUBP_SF(HUBPREQ0_UCLK_PSTATE_FORCE, CURSOR_UCLK_PSTATE_FORCE_VALUE, mask_sh)
+
+void hubp32_update_force_pstate_disallow(struct hubp *hubp, bool pstate_disallow);
+
+void hubp32_update_mall_sel(struct hubp *hubp, uint32_t mall_sel);
+
+void hubp32_prepare_subvp_buffering(struct hubp *hubp, bool enable);
+
+void hubp32_phantom_hubp_post_enable(struct hubp *hubp);
+
+bool hubp32_construct(
+	struct dcn20_hubp *hubp2,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dcn_hubp2_registers *hubp_regs,
+	const struct dcn_hubp2_shift *hubp_shift,
+	const struct dcn_hubp2_mask *hubp_mask);
+
+#endif /* __DC_HUBP_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
new file mode 100644
index 000000000000..d0a222f1a09c
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -0,0 +1,891 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "dm_services.h"
+#include "dm_helpers.h"
+#include "core_types.h"
+#include "resource.h"
+#include "dccg.h"
+#include "dce/dce_hwseq.h"
+#include "dcn30/dcn30_cm_common.h"
+#include "reg_helper.h"
+#include "abm.h"
+#include "hubp.h"
+#include "dchubbub.h"
+#include "timing_generator.h"
+#include "opp.h"
+#include "ipp.h"
+#include "mpc.h"
+#include "mcif_wb.h"
+#include "dc_dmub_srv.h"
+#include "link_hwss.h"
+#include "dpcd_defs.h"
+#include "dcn32_hwseq.h"
+#include "clk_mgr.h"
+#include "dsc.h"
+#include "dcn20/dcn20_optc.h"
+
+#define DC_LOGGER_INIT(logger)
+
+#define CTX \
+	hws->ctx
+#define REG(reg)\
+	hws->regs->reg
+#define DC_LOGGER \
+		dc->ctx->logger
+
+
+#undef FN
+#define FN(reg_name, field_name) \
+	hws->shifts->field_name, hws->masks->field_name
+
+void dcn32_dsc_pg_control(
+		struct dce_hwseq *hws,
+		unsigned int dsc_inst,
+		bool power_on)
+{
+	uint32_t power_gate = power_on ? 0 : 1;
+	uint32_t pwr_status = power_on ? 0 : 2;
+	uint32_t org_ip_request_cntl = 0;
+
+	if (hws->ctx->dc->debug.disable_dsc_power_gate)
+		return;
+
+	REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+	if (org_ip_request_cntl == 0)
+		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+
+	switch (dsc_inst) {
+	case 0: /* DSC0 */
+		REG_UPDATE(DOMAIN16_PG_CONFIG,
+				DOMAIN_POWER_GATE, power_gate);
+
+		REG_WAIT(DOMAIN16_PG_STATUS,
+				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+				1, 1000);
+		break;
+	case 1: /* DSC1 */
+		REG_UPDATE(DOMAIN17_PG_CONFIG,
+				DOMAIN_POWER_GATE, power_gate);
+
+		REG_WAIT(DOMAIN17_PG_STATUS,
+				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+				1, 1000);
+		break;
+	case 2: /* DSC2 */
+		REG_UPDATE(DOMAIN18_PG_CONFIG,
+				DOMAIN_POWER_GATE, power_gate);
+
+		REG_WAIT(DOMAIN18_PG_STATUS,
+				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+				1, 1000);
+		break;
+	case 3: /* DSC3 */
+		REG_UPDATE(DOMAIN19_PG_CONFIG,
+				DOMAIN_POWER_GATE, power_gate);
+
+		REG_WAIT(DOMAIN19_PG_STATUS,
+				DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+				1, 1000);
+		break;
+	default:
+		BREAK_TO_DEBUGGER();
+		break;
+	}
+
+	if (org_ip_request_cntl == 0)
+		REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0);
+}
+
+
+void dcn32_enable_power_gating_plane(
+	struct dce_hwseq *hws,
+	bool enable)
+{
+	bool force_on = true; /* disable power gating */
+
+	if (enable)
+		force_on = false;
+
+	/* DCHUBP0/1/2/3 */
+	REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+	REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+	REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+	REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+
+	/* DCS0/1/2/3 */
+	REG_UPDATE(DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+	REG_UPDATE(DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+	REG_UPDATE(DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+	REG_UPDATE(DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+}
+
+void dcn32_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on)
+{
+	uint32_t power_gate = power_on ? 0 : 1;
+	uint32_t pwr_status = power_on ? 0 : 2;
+
+	if (hws->ctx->dc->debug.disable_hubp_power_gate)
+		return;
+
+	if (REG(DOMAIN0_PG_CONFIG) == 0)
+		return;
+
+	switch (hubp_inst) {
+	case 0:
+		REG_SET(DOMAIN0_PG_CONFIG, 0, DOMAIN_POWER_GATE, power_gate);
+		REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+		break;
+	case 1:
+		REG_SET(DOMAIN1_PG_CONFIG, 0, DOMAIN_POWER_GATE, power_gate);
+		REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+		break;
+	case 2:
+		REG_SET(DOMAIN2_PG_CONFIG, 0, DOMAIN_POWER_GATE, power_gate);
+		REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+		break;
+	case 3:
+		REG_SET(DOMAIN3_PG_CONFIG, 0, DOMAIN_POWER_GATE, power_gate);
+		REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
+		break;
+	default:
+		BREAK_TO_DEBUGGER();
+		break;
+	}
+}
+
+static bool dcn32_check_no_memory_request_for_cab(struct dc *dc)
+{
+	int i;
+
+	/* First, check no-memory-request case */
+	for (i = 0; i < dc->current_state->stream_count; i++) {
+		if (dc->current_state->stream_status[i].plane_count)
+			/* Fail eligibility on a visible stream */
+			break;
+	}
+
+	if (i == dc->current_state->stream_count)
+		return true;
+
+	return false;
+}
+
+/* This function takes in the start address and surface size to be cached in CAB
+ * and calculates the total number of cache lines required to store the surface.
+ * The number of cache lines used for each surface is calculated independently of
+ * one another. For example, if there is a primary surface (1), a meta surface (2), and
+ * a cursor (3), this function should be called 3 times to calculate the number of cache
+ * lines used for each of those surfaces.
+ */
+static uint32_t dcn32_cache_lines_for_surface(struct dc *dc, uint32_t surface_size, uint64_t start_address)
+{
+	uint32_t lines_used = 1;
+	uint32_t num_cached_bytes = 0;
+	uint32_t remaining_size = 0;
+	uint32_t cache_line_size = dc->caps.cache_line_size;
+
+	/* 1. Calculate surface size minus the number of bytes stored
+	 * in the first cache line (all bytes in first cache line might
+	 * not be fully used).
+	 */
+	num_cached_bytes = cache_line_size - (start_address % cache_line_size);
+	remaining_size = surface_size - num_cached_bytes;
+
+	/* 2. Calculate number of cache lines that will be fully used with
+	 * the remaining number of bytes to be stored.
+	 */
+	lines_used += (remaining_size / cache_line_size);
+
+	/* 3. Check if we need an extra line due to the remaining size not being
+	 * a multiple of CACHE_LINE_SIZE.
+	 */
+	if (remaining_size % cache_line_size > 0)
+		lines_used++;
+
+	return lines_used;
+}
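As a worked example with purely illustrative numbers (not values from this patch): with a 64-byte cache line, a 1000-byte surface starting at address 96 uses 64 - (96 % 64) = 32 bytes of its first line, leaving 968 bytes; 968 / 64 = 15 full lines with an 8-byte remainder, so the function returns 1 + 15 + 1 = 17 cache lines.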
+
+/* This function loops through every surface that needs to be cached in CAB for SS,
+ * and calculates the total number of ways required to store all surfaces (primary,
+ * meta, cursor).
+ */
+static uint32_t dcn32_calculate_cab_allocation(struct dc *dc, struct dc_state *ctx)
+{
+	uint8_t i, j;
+	struct dc_stream_state *stream = NULL;
+	struct dc_plane_state *plane = NULL;
+	uint32_t surface_size = 0;
+	uint32_t cursor_size = 0;
+	uint32_t cache_lines_used = 0;
+	uint32_t total_lines = 0;
+	uint32_t lines_per_way = 0;
+	uint32_t num_ways = 0;
+
+	for (i = 0; i < ctx->stream_count; i++) {
+		stream = ctx->streams[i];
+
+		// Don't include PSR surface in the total surface size for CAB allocation
+		if (stream->link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED)
+			continue;
+
+		if (ctx->stream_status[i].plane_count == 0)
+			continue;
+
+		// For each stream, loop through each plane to calculate the number of cache
+		// lines required to store the surface in CAB
+		for (j = 0; j < ctx->stream_status[i].plane_count; j++) {
+			plane = ctx->stream_status[i].plane_states[j];
+
+			// Calculate total surface size
+			surface_size = plane->plane_size.surface_pitch *
+					plane->plane_size.surface_size.height *
+					(plane->format >= SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616 ? 8 : 4);
+
+			// Convert surface size + starting address to number of cache lines required
+			// (alignment accounted for)
+			cache_lines_used += dcn32_cache_lines_for_surface(dc, surface_size,
+					plane->address.grph.addr.quad_part);
+
+			if (plane->address.grph.meta_addr.quad_part) {
+				// Meta surface
+				cache_lines_used += dcn32_cache_lines_for_surface(dc, surface_size,
+						plane->address.grph.meta_addr.quad_part);
+			}
+		}
+
+		// Include cursor size for CAB allocation
+		if (stream->cursor_position.enable && plane->address.grph.cursor_cache_addr.quad_part) {
+			cursor_size = dc->caps.max_cursor_size * dc->caps.max_cursor_size;
+			switch (stream->cursor_attributes.color_format) {
+			case CURSOR_MODE_MONO:
+				cursor_size /= 2;
+				break;
+			case CURSOR_MODE_COLOR_1BIT_AND:
+			case CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA:
+			case CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA:
+				cursor_size *= 4;
+				break;
+
+			case CURSOR_MODE_COLOR_64BIT_FP_PRE_MULTIPLIED:
+			case CURSOR_MODE_COLOR_64BIT_FP_UN_PRE_MULTIPLIED:
+				cursor_size *= 8;
+				break;
+			}
+			cache_lines_used += dcn32_cache_lines_for_surface(dc, cursor_size,
+					plane->address.grph.cursor_cache_addr.quad_part);
+		}
+	}
+
+	// Convert number of cache lines required to number of ways
+	total_lines = dc->caps.max_cab_allocation_bytes / dc->caps.cache_line_size;
+	lines_per_way = total_lines / dc->caps.cache_num_ways;
+	num_ways = cache_lines_used / lines_per_way;
+
+	if (cache_lines_used % lines_per_way > 0)
+		num_ways++;
+
+	return num_ways;
+}
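As a worked example of the final lines-to-ways conversion, using assumed capability values rather than anything from this patch: if max_cab_allocation_bytes were 16 MiB, cache_line_size 64 bytes and cache_num_ways 16, then total_lines = 262144 and lines_per_way = 16384; a configuration needing cache_lines_used = 50000 gives 50000 / 16384 = 3 with a remainder, so num_ways rounds up to 4.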
+
+bool dcn32_apply_idle_power_optimizations(struct dc *dc, bool enable)
+{
+	union dmub_rb_cmd cmd;
+	uint8_t ways;
+
+	if (!dc->ctx->dmub_srv)
+		return false;
+
+	if (enable) {
+		if (dc->current_state) {
+
+			/* 1. Check no memory request case for CAB.
+			 * If there are no memory requests, send the CAB_ACTION NO_DF_REQ DMUB message
+			 */
+			if (dcn32_check_no_memory_request_for_cab(dc)) {
+				/* Enable no-memory-requests case */
+				memset(&cmd, 0, sizeof(cmd));
+				cmd.cab.header.type = DMUB_CMD__CAB_FOR_SS;
+				cmd.cab.header.sub_type = DMUB_CMD__CAB_NO_DCN_REQ;
+				cmd.cab.header.payload_bytes = sizeof(cmd.cab) - sizeof(cmd.cab.header);
+
+				dc_dmub_srv_cmd_queue(dc->ctx->dmub_srv, &cmd);
+				dc_dmub_srv_cmd_execute(dc->ctx->dmub_srv);
+
+				return true;
+			}
+
+			/* 2. Check if all surfaces can fit in CAB.
+			 * If surfaces can fit into CAB, send CAB_ACTION_ALLOW DMUB message
+			 * and configure the HUBPs to fetch from MALL
+			 */
+			ways = dcn32_calculate_cab_allocation(dc, dc->current_state);
+			if (ways <= dc->caps.cache_num_ways) {
+				memset(&cmd, 0, sizeof(cmd));
+				cmd.cab.header.type = DMUB_CMD__CAB_FOR_SS;
+				cmd.cab.header.sub_type = DMUB_CMD__CAB_DCN_SS_FIT_IN_CAB;
+				cmd.cab.header.payload_bytes = sizeof(cmd.cab) - sizeof(cmd.cab.header);
+				cmd.cab.cab_alloc_ways = ways;
+
+				dc_dmub_srv_cmd_queue(dc->ctx->dmub_srv, &cmd);
+				dc_dmub_srv_cmd_execute(dc->ctx->dmub_srv);
+
+				return true;
+			}
+
+		}
+		return false;
+	}
+
+	/* Disable CAB */
+	memset(&cmd, 0, sizeof(cmd));
+	cmd.cab.header.type = DMUB_CMD__CAB_FOR_SS;
+	cmd.cab.header.sub_type = DMUB_CMD__CAB_NO_IDLE_OPTIMIZATION;
+	cmd.cab.header.payload_bytes =
+			sizeof(cmd.cab) - sizeof(cmd.cab.header);
+
+	dc_dmub_srv_cmd_queue(dc->ctx->dmub_srv, &cmd);
+	dc_dmub_srv_cmd_execute(dc->ctx->dmub_srv);
+	dc_dmub_srv_wait_idle(dc->ctx->dmub_srv);
+
+	return false;
+}
+
+/* Send DMCUB message with SubVP pipe info
+ * - For each pipe in context, populate payload with required SubVP information
+ *   if the pipe is using SubVP for MCLK switch
+ * - This function must be called while the DMUB HW lock is acquired by driver
+ */
+void dcn32_commit_subvp_config(struct dc *dc, struct dc_state *context)
+{
+/*
+	int i;
+	bool enable_subvp = false;
+
+	if (!dc->ctx || !dc->ctx->dmub_srv)
+		return;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+
+		if (pipe_ctx->stream && pipe_ctx->stream->mall_stream_config.paired_stream &&
+				pipe_ctx->stream->mall_stream_config.type == SUBVP_MAIN) {
+			// There is at least 1 SubVP pipe, so enable SubVP
+			enable_subvp = true;
+			break;
+		}
+	}
+	dc_dmub_setup_subvp_dmub_command(dc, context, enable_subvp);
+*/
+}
+
+static bool dcn32_set_mpc_shaper_3dlut(
+	struct pipe_ctx *pipe_ctx, const struct dc_stream_state *stream)
+{
+	struct dpp *dpp_base = pipe_ctx->plane_res.dpp;
+	int mpcc_id = pipe_ctx->plane_res.hubp->inst;
+	struct mpc *mpc = pipe_ctx->stream_res.opp->ctx->dc->res_pool->mpc;
+	bool result = false;
+
+	const struct pwl_params *shaper_lut = NULL;
+	// get the shaper LUT params
+	if (stream->func_shaper) {
+		if (stream->func_shaper->type == TF_TYPE_HWPWL)
+			shaper_lut = &stream->func_shaper->pwl;
+		else if (stream->func_shaper->type == TF_TYPE_DISTRIBUTED_POINTS) {
+			cm_helper_translate_curve_to_hw_format(
+					stream->func_shaper,
+					&dpp_base->shaper_params, true);
+			shaper_lut = &dpp_base->shaper_params;
+		}
+	}
+
+	if (stream->lut3d_func &&
+		stream->lut3d_func->state.bits.initialized == 1) {
+
+		result = mpc->funcs->program_3dlut(mpc,
+								&stream->lut3d_func->lut_3d,
+								mpcc_id);
+
+		result = mpc->funcs->program_shaper(mpc,
+								shaper_lut,
+								mpcc_id);
+	}
+
+	return result;
+}
+
+bool dcn32_set_output_transfer_func(struct dc *dc,
+				struct pipe_ctx *pipe_ctx,
+				const struct dc_stream_state *stream)
+{
+	int mpcc_id = pipe_ctx->plane_res.hubp->inst;
+	struct mpc *mpc = pipe_ctx->stream_res.opp->ctx->dc->res_pool->mpc;
+	struct pwl_params *params = NULL;
+	bool ret = false;
+
+	/* program OGAM or 3DLUT only for the top pipe */
+	if (pipe_ctx->top_pipe == NULL) {
+		/* program shaper and 3dlut in MPC */
+		ret = dcn32_set_mpc_shaper_3dlut(pipe_ctx, stream);
+		if (ret == false && mpc->funcs->set_output_gamma && stream->out_transfer_func) {
+			if (stream->out_transfer_func->type == TF_TYPE_HWPWL)
+				params = &stream->out_transfer_func->pwl;
+			else if (pipe_ctx->stream->out_transfer_func->type ==
+					TF_TYPE_DISTRIBUTED_POINTS &&
+					cm3_helper_translate_curve_to_hw_format(
+					stream->out_transfer_func,
+					&mpc->blender_params, false))
+				params = &mpc->blender_params;
+			/* there are no ROM LUTs in OUTGAM */
+			if (stream->out_transfer_func->type == TF_TYPE_PREDEFINED)
+				BREAK_TO_DEBUGGER();
+		}
+	}
+
+	mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
+	return ret;
+}
+
+/* Program P-State force value according to whether the pipe is using SubVP or not:
+ * 1. Reset P-State force on all pipes first
+ * 2. For each main pipe, force P-State disallow (P-State allow moderated by DMUB)
+ */
+void dcn32_subvp_update_force_pstate(struct dc *dc, struct dc_state *context)
+{
+	int i;
+	int num_subvp = 0;
+	/* Unforce p-state for each pipe */
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		struct hubp *hubp = pipe->plane_res.hubp;
+
+		if (hubp && hubp->funcs->hubp_update_force_pstate_disallow)
+			hubp->funcs->hubp_update_force_pstate_disallow(hubp, false);
+		if (pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_MAIN)
+			num_subvp++;
+	}
+
+	if (num_subvp == 0)
+		return;
+
+	/* For each SubVP main pipe, force P-state disallow */
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		if (pipe->stream && pipe->plane_state && pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
+			struct hubp *hubp = pipe->plane_res.hubp;
+
+			if (hubp && hubp->funcs->hubp_update_force_pstate_disallow)
+				hubp->funcs->hubp_update_force_pstate_disallow(hubp, true);
+		}
+	}
+}
+
+/* Update the MALL_SEL register based on whether the pipe / plane
+ * is a phantom or main pipe, and whether MALL is used for SS.
+ */
+void dcn32_update_mall_sel(struct dc *dc, struct dc_state *context)
+{
+	int i;
+	unsigned int num_ways = dcn32_calculate_cab_allocation(dc, context);
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		struct hubp *hubp = pipe->plane_res.hubp;
+
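+		/* MALL_SEL: 1 routes the phantom (SubVP) surface to MALL, 2 keeps the
+		 * surface in MALL for static screen (only when it fits in the cache
+		 * and PSR is not used), 0 leaves MALL unused for this pipe.
+		 */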
+		if (pipe->stream && pipe->plane_state && hubp && hubp->funcs->hubp_update_mall_sel) {
+			if (pipe->stream->mall_stream_config.type == SUBVP_PHANTOM) {
+				hubp->funcs->hubp_update_mall_sel(hubp, 1);
+			} else {
+				hubp->funcs->hubp_update_mall_sel(hubp,
+					num_ways <= dc->caps.cache_num_ways &&
+					pipe->stream->link->psr_settings.psr_version == DC_PSR_VERSION_UNSUPPORTED ? 2 : 0);
+			}
+		}
+	}
+}
+
+/* Program the sub-viewport pipe configuration after the main / phantom pipes
+ * have been programmed in hardware.
+ * 1. Update force P-State for all the main pipes (disallow P-state)
+ * 2. Update MALL_SEL register
+ * 3. Program FORCE_ONE_ROW_FOR_FRAME for main subvp pipes
+ */
+void dcn32_program_mall_pipe_config(struct dc *dc, struct dc_state *context)
+{
+	int i;
+	struct dce_hwseq *hws = dc->hwseq;
+	// Update force P-state for each pipe accordingly
+	if (hws && hws->funcs.subvp_update_force_pstate)
+		hws->funcs.subvp_update_force_pstate(dc, context);
+
+	// Update MALL_SEL register for each pipe
+	if (hws && hws->funcs.update_mall_sel)
+		hws->funcs.update_mall_sel(dc, context);
+
+	// Program FORCE_ONE_ROW_FOR_FRAME and CURSOR_REQ_MODE for main subvp pipes
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		struct hubp *hubp = pipe->plane_res.hubp;
+
+		if (pipe->stream && hubp && hubp->funcs->hubp_prepare_subvp_buffering) {
+			if (pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
+				hubp->funcs->hubp_prepare_subvp_buffering(hubp, true);
+			} else {
+				hubp->funcs->hubp_prepare_subvp_buffering(hubp, false);
+			}
+		}
+	}
+}
+
+void dcn32_init_hw(struct dc *dc)
+{
+	struct abm **abms = dc->res_pool->multiple_abms;
+	struct dce_hwseq *hws = dc->hwseq;
+	struct dc_bios *dcb = dc->ctx->dc_bios;
+	struct resource_pool *res_pool = dc->res_pool;
+	int i;
+	int edp_num;
+	uint32_t backlight = MAX_BACKLIGHT_LEVEL;
+
+	dc->debug.disable_idle_power_optimizations = true;
+	if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+		dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+
+	// Initialize the dccg
+	if (res_pool->dccg->funcs->dccg_init)
+		res_pool->dccg->funcs->dccg_init(res_pool->dccg);
+
+	if (!dcb->funcs->is_accelerated_mode(dcb)) {
+		hws->funcs.bios_golden_init(dc);
+		hws->funcs.disable_vga(dc->hwseq);
+	}
+
+	// Set default OPTC memory power states
+	if (dc->debug.enable_mem_low_power.bits.optc) {
+		// Shutdown when unassigned and light sleep in VBLANK
+		REG_SET_2(ODM_MEM_PWR_CTRL3, 0, ODM_MEM_UNASSIGNED_PWR_MODE, 3, ODM_MEM_VBLANK_PWR_MODE, 1);
+	}
+
+	if (dc->debug.enable_mem_low_power.bits.vga) {
+		// Power down VGA memory
+		REG_UPDATE(MMHUBBUB_MEM_PWR_CNTL, VGA_MEM_PWR_FORCE, 1);
+	}
+
+	if (dc->ctx->dc_bios->fw_info_valid) {
+		res_pool->ref_clocks.xtalin_clock_inKhz =
+				dc->ctx->dc_bios->fw_info.pll_info.crystal_frequency;
+
+		if (res_pool->dccg && res_pool->hubbub) {
+			(res_pool->dccg->funcs->get_dccg_ref_freq)(res_pool->dccg,
+					dc->ctx->dc_bios->fw_info.pll_info.crystal_frequency,
+					&res_pool->ref_clocks.dccg_ref_clock_inKhz);
+
+			(res_pool->hubbub->funcs->get_dchub_ref_freq)(res_pool->hubbub,
+					res_pool->ref_clocks.dccg_ref_clock_inKhz,
+					&res_pool->ref_clocks.dchub_ref_clock_inKhz);
+		} else {
+			// Not all ASICs have DCCG sw component
+			res_pool->ref_clocks.dccg_ref_clock_inKhz =
+					res_pool->ref_clocks.xtalin_clock_inKhz;
+			res_pool->ref_clocks.dchub_ref_clock_inKhz =
+					res_pool->ref_clocks.xtalin_clock_inKhz;
+		}
+	} else
+		ASSERT_CRITICAL(false);
+
+	for (i = 0; i < dc->link_count; i++) {
+		/* Power up AND update implementation according to the
+		 * required signal (which may be different from the
+		 * default signal on connector).
+		 */
+		struct dc_link *link = dc->links[i];
+
+		link->link_enc->funcs->hw_init(link->link_enc);
+
+		/* Check for enabled DIG to identify enabled display */
+		if (link->link_enc->funcs->is_dig_enabled &&
+			link->link_enc->funcs->is_dig_enabled(link->link_enc))
+			link->link_status.link_active = true;
+	}
+
+	/* Power gate DSCs */
+	for (i = 0; i < res_pool->res_cap->num_dsc; i++)
+		if (hws->funcs.dsc_pg_control != NULL)
+			hws->funcs.dsc_pg_control(hws, res_pool->dscs[i]->inst, false);
+
+	/* we want to turn off all dp displays before doing detection */
+	dc_link_blank_all_dp_displays(dc);
+
+	/* If taking control over from VBIOS, we may want to optimize our first
+	 * mode set, so we need to skip powering down pipes until we know which
+	 * pipes we want to use.
+	 * Otherwise, if taking control is not possible, we need to power
+	 * everything down.
+	 */
+	if (dcb->funcs->is_accelerated_mode(dcb) || dc->config.power_down_display_on_boot) {
+		hws->funcs.init_pipes(dc, dc->current_state);
+		if (dc->res_pool->hubbub->funcs->allow_self_refresh_control)
+			dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+					!dc->res_pool->hubbub->ctx->dc->debug.disable_stutter);
+	}
+
+	/* In headless boot cases, DIG may be turned
+	 * on which causes HW/SW discrepancies.
+	 * To avoid this, power down hardware on boot
+	 * if DIG is turned on and seamless boot not enabled
+	 */
+	if (dc->config.power_down_display_on_boot) {
+		struct dc_link *edp_links[MAX_NUM_EDP];
+		struct dc_link *edp_link;
+
+		get_edp_links(dc, edp_links, &edp_num);
+		if (edp_num) {
+			for (i = 0; i < edp_num; i++) {
+				edp_link = edp_links[i];
+				if (edp_link->link_enc->funcs->is_dig_enabled &&
+						edp_link->link_enc->funcs->is_dig_enabled(edp_link->link_enc) &&
+						dc->hwss.edp_backlight_control &&
+						dc->hwss.power_down &&
+						dc->hwss.edp_power_control) {
+					dc->hwss.edp_backlight_control(edp_link, false);
+					dc->hwss.power_down(dc);
+					dc->hwss.edp_power_control(edp_link, false);
+				}
+			}
+		} else {
+			for (i = 0; i < dc->link_count; i++) {
+				struct dc_link *link = dc->links[i];
+
+				if (link->link_enc->funcs->is_dig_enabled &&
+						link->link_enc->funcs->is_dig_enabled(link->link_enc) &&
+						dc->hwss.power_down) {
+					dc->hwss.power_down(dc);
+					break;
+				}
+
+			}
+		}
+	}
+
+	for (i = 0; i < res_pool->audio_count; i++) {
+		struct audio *audio = res_pool->audios[i];
+
+		audio->funcs->hw_init(audio);
+	}
+
+	for (i = 0; i < dc->link_count; i++) {
+		struct dc_link *link = dc->links[i];
+
+		if (link->panel_cntl)
+			backlight = link->panel_cntl->funcs->hw_init(link->panel_cntl);
+	}
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		if (abms[i] != NULL && abms[i]->funcs != NULL)
+			abms[i]->funcs->abm_init(abms[i], backlight);
+	}
+
+	/* Power AFMT HDMI memory. TODO: may move to dis/en output save power */
+	REG_WRITE(DIO_MEM_PWR_CTRL, 0);
+
+	if (!dc->debug.disable_clock_gate) {
+		/* enable all DCN clock gating */
+		REG_WRITE(DCCG_GATE_DISABLE_CNTL, 0);
+
+		REG_WRITE(DCCG_GATE_DISABLE_CNTL2, 0);
+
+		REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
+	}
+	if (hws->funcs.enable_power_gating_plane)
+		hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
+	if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+		dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+
+	if (dc->clk_mgr->funcs->notify_wm_ranges)
+		dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+
+	if (dc->clk_mgr->funcs->set_hard_max_memclk)
+		dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
+
+	if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
+		dc->res_pool->hubbub->funcs->force_pstate_change_control(
+				dc->res_pool->hubbub, false, false);
+
+	if (dc->res_pool->hubbub->funcs->init_crb)
+		dc->res_pool->hubbub->funcs->init_crb(dc->res_pool->hubbub);
+
+	// Get DMCUB capabilities
+	if (dc->ctx->dmub_srv) {
+		dc_dmub_srv_query_caps_cmd(dc->ctx->dmub_srv->dmub);
+		dc->caps.dmub_caps.psr = dc->ctx->dmub_srv->dmub->feature_caps.psr;
+	}
+}
+
+static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream,
+		int opp_cnt)
+{
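+	/* The flow control count is the horizontal blank width (h_total minus the
+	 * addressable width and borders). It is halved when two pixels share a
+	 * container or ODM combine is used, and halved again for ODM 4:1.
+	 */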
+	bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing);
+	int flow_ctrl_cnt;
+
+	if (opp_cnt >= 2)
+		hblank_halved = true;
+
+	flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable -
+			stream->timing.h_border_left -
+			stream->timing.h_border_right;
+
+	if (hblank_halved)
+		flow_ctrl_cnt /= 2;
+
+	/* ODM combine 4:1 case */
+	if (opp_cnt == 4)
+		flow_ctrl_cnt /= 2;
+
+	return flow_ctrl_cnt;
+}
+
+static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+{
+	struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc;
+	struct dc_stream_state *stream = pipe_ctx->stream;
+	struct pipe_ctx *odm_pipe;
+	int opp_cnt = 1;
+
+	ASSERT(dsc);
+	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
+		opp_cnt++;
+
+	if (enable) {
+		struct dsc_config dsc_cfg;
+		struct dsc_optc_config dsc_optc_cfg;
+		enum optc_dsc_mode optc_dsc_mode;
+
+		/* Enable DSC hw block */
+		dsc_cfg.pic_width = (stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right) / opp_cnt;
+		dsc_cfg.pic_height = stream->timing.v_addressable + stream->timing.v_border_top + stream->timing.v_border_bottom;
+		dsc_cfg.pixel_encoding = stream->timing.pixel_encoding;
+		dsc_cfg.color_depth = stream->timing.display_color_depth;
+		dsc_cfg.is_odm = pipe_ctx->next_odm_pipe ? true : false;
+		dsc_cfg.dc_dsc_cfg = stream->timing.dsc_cfg;
+		ASSERT(dsc_cfg.dc_dsc_cfg.num_slices_h % opp_cnt == 0);
+		dsc_cfg.dc_dsc_cfg.num_slices_h /= opp_cnt;
+
+		dsc->funcs->dsc_set_config(dsc, &dsc_cfg, &dsc_optc_cfg);
+		dsc->funcs->dsc_enable(dsc, pipe_ctx->stream_res.opp->inst);
+		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+			struct display_stream_compressor *odm_dsc = odm_pipe->stream_res.dsc;
+
+			ASSERT(odm_dsc);
+			odm_dsc->funcs->dsc_set_config(odm_dsc, &dsc_cfg, &dsc_optc_cfg);
+			odm_dsc->funcs->dsc_enable(odm_dsc, odm_pipe->stream_res.opp->inst);
+		}
+		dsc_cfg.dc_dsc_cfg.num_slices_h *= opp_cnt;
+		dsc_cfg.pic_width *= opp_cnt;
+
+		optc_dsc_mode = dsc_optc_cfg.is_pixel_format_444 ? OPTC_DSC_ENABLED_444 : OPTC_DSC_ENABLED_NATIVE_SUBSAMPLED;
+
+		/* Enable DSC in OPTC */
+		DC_LOG_DSC("Setting optc DSC config for tg instance %d:", pipe_ctx->stream_res.tg->inst);
+		pipe_ctx->stream_res.tg->funcs->set_dsc_config(pipe_ctx->stream_res.tg,
+							optc_dsc_mode,
+							dsc_optc_cfg.bytes_per_pixel,
+							dsc_optc_cfg.slice_width);
+	} else {
+		/* disable DSC in OPTC */
+		pipe_ctx->stream_res.tg->funcs->set_dsc_config(
+				pipe_ctx->stream_res.tg,
+				OPTC_DSC_DISABLED, 0, 0);
+
+		/* disable DSC block */
+		dsc->funcs->dsc_disable(pipe_ctx->stream_res.dsc);
+		for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+			ASSERT(odm_pipe->stream_res.dsc);
+			odm_pipe->stream_res.dsc->funcs->dsc_disable(odm_pipe->stream_res.dsc);
+		}
+	}
+}
+
+void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *pipe_ctx)
+{
+	struct pipe_ctx *odm_pipe;
+	int opp_cnt = 1;
+	int opp_inst[MAX_PIPES] = { pipe_ctx->stream_res.opp->inst };
+	bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing));
+	struct mpc_dwb_flow_control flow_control;
+	struct mpc *mpc = dc->res_pool->mpc;
+	int i;
+
+	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+		opp_inst[opp_cnt] = odm_pipe->stream_res.opp->inst;
+		opp_cnt++;
+	}
+
+	if (opp_cnt > 1)
+		pipe_ctx->stream_res.tg->funcs->set_odm_combine(
+				pipe_ctx->stream_res.tg,
+				opp_inst, opp_cnt,
+				&pipe_ctx->stream->timing);
+	else
+		pipe_ctx->stream_res.tg->funcs->set_odm_bypass(
+				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing);
+
+	rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1;
+	flow_control.flow_ctrl_mode = 0;
+	flow_control.flow_ctrl_cnt0 = 0x80;
+	flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt);
+	if (mpc->funcs->set_out_rate_control) {
+		for (i = 0; i < opp_cnt; ++i) {
+			mpc->funcs->set_out_rate_control(
+					mpc, opp_inst[i],
+					true,
+					rate_control_2x_pclk,
+					&flow_control);
+		}
+	}
+
+	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+		odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control(
+				odm_pipe->stream_res.opp,
+				true);
+	}
+
+	// Don't program pixel clock after link is already enabled
+/*	if (false == pipe_ctx->clock_source->funcs->program_pix_clk(
+			pipe_ctx->clock_source,
+			&pipe_ctx->stream_res.pix_clk_params,
+			&pipe_ctx->pll_settings)) {
+		BREAK_TO_DEBUGGER();
+	}*/
+
+	if (pipe_ctx->stream_res.dsc)
+		update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC);
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
new file mode 100644
index 000000000000..737a7fac5cf9
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
@@ -0,0 +1,64 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_HWSS_DCN32_H__
+#define __DC_HWSS_DCN32_H__
+
+#include "hw_sequencer_private.h"
+
+struct dc;
+
+void dcn32_dsc_pg_control(
+		struct dce_hwseq *hws,
+		unsigned int dsc_inst,
+		bool power_on);
+
+void dcn32_enable_power_gating_plane(
+	struct dce_hwseq *hws,
+	bool enable);
+
+void dcn32_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on);
+
+bool dcn32_apply_idle_power_optimizations(struct dc *dc, bool enable);
+
+void dcn32_cab_for_ss_control(struct dc *dc, bool enable);
+
+void dcn32_commit_subvp_config(struct dc *dc, struct dc_state *context);
+
+bool dcn32_set_output_transfer_func(struct dc *dc,
+				struct pipe_ctx *pipe_ctx,
+				const struct dc_stream_state *stream);
+
+void dcn32_init_hw(struct dc *dc);
+
+void dcn32_program_mall_pipe_config(struct dc *dc, struct dc_state *context);
+
+void dcn32_update_mall_sel(struct dc *dc, struct dc_state *context);
+
+void dcn32_subvp_update_force_pstate(struct dc *dc, struct dc_state *context);
+
+void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *pipe_ctx);
+
+#endif /* __DC_HWSS_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
new file mode 100644
index 000000000000..d1e3db1bc488
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
@@ -0,0 +1,155 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dce110/dce110_hw_sequencer.h"
+#include "dcn10/dcn10_hw_sequencer.h"
+#include "dcn20/dcn20_hwseq.h"
+#include "dcn21/dcn21_hwseq.h"
+#include "dcn30/dcn30_hwseq.h"
+#include "dcn31/dcn31_hwseq.h"
+#include "dcn32_hwseq.h"
+
+static const struct hw_sequencer_funcs dcn32_funcs = {
+	.program_gamut_remap = dcn10_program_gamut_remap,
+	.init_hw = dcn32_init_hw,
+	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
+	.apply_ctx_for_surface = NULL,
+	.program_front_end_for_ctx = dcn20_program_front_end_for_ctx,
+	.wait_for_pending_cleared = dcn10_wait_for_pending_cleared,
+	.post_unlock_program_front_end = dcn20_post_unlock_program_front_end,
+	.update_plane_addr = dcn20_update_plane_addr,
+	.update_dchub = dcn10_update_dchub,
+	.update_pending_status = dcn10_update_pending_status,
+	.program_output_csc = dcn20_program_output_csc,
+	.enable_accelerated_mode = dce110_enable_accelerated_mode,
+	.enable_timing_synchronization = dcn10_enable_timing_synchronization,
+	.enable_per_frame_crtc_position_reset = dcn10_enable_per_frame_crtc_position_reset,
+	.update_info_frame = dcn31_update_info_frame,
+	.send_immediate_sdp_message = dcn10_send_immediate_sdp_message,
+	.enable_stream = dcn20_enable_stream,
+	.disable_stream = dce110_disable_stream,
+	.unblank_stream = dcn20_unblank_stream,
+	.blank_stream = dce110_blank_stream,
+	.enable_audio_stream = dce110_enable_audio_stream,
+	.disable_audio_stream = dce110_disable_audio_stream,
+	.disable_plane = dcn20_disable_plane,
+	.pipe_control_lock = dcn20_pipe_control_lock,
+	.interdependent_update_lock = dcn10_lock_all_pipes,
+	.cursor_lock = dcn10_cursor_lock,
+	.prepare_bandwidth = dcn20_prepare_bandwidth,
+	.optimize_bandwidth = dcn20_optimize_bandwidth,
+	.update_bandwidth = dcn20_update_bandwidth,
+	.set_drr = dcn10_set_drr,
+	.get_position = dcn10_get_position,
+	.set_static_screen_control = dcn10_set_static_screen_control,
+	.setup_stereo = dcn10_setup_stereo,
+	.set_avmute = dcn30_set_avmute,
+	.log_hw_state = dcn10_log_hw_state,
+	.get_hw_state = dcn10_get_hw_state,
+	.clear_status_bits = dcn10_clear_status_bits,
+	.wait_for_mpcc_disconnect = dcn10_wait_for_mpcc_disconnect,
+	.edp_backlight_control = dce110_edp_backlight_control,
+	.edp_power_control = dce110_edp_power_control,
+	.edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready,
+	.edp_wait_for_T12 = dce110_edp_wait_for_T12,
+	.set_cursor_position = dcn10_set_cursor_position,
+	.set_cursor_attribute = dcn10_set_cursor_attribute,
+	.set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level,
+	.setup_periodic_interrupt = dcn10_setup_periodic_interrupt,
+	.set_clock = dcn10_set_clock,
+	.get_clock = dcn10_get_clock,
+	.program_triplebuffer = dcn20_program_triple_buffer,
+	.enable_writeback = dcn30_enable_writeback,
+	.disable_writeback = dcn30_disable_writeback,
+	.update_writeback = dcn30_update_writeback,
+	.mmhubbub_warmup = dcn30_mmhubbub_warmup,
+	.dmdata_status_done = dcn20_dmdata_status_done,
+	.program_dmdata_engine = dcn30_program_dmdata_engine,
+	.set_dmdata_attributes = dcn20_set_dmdata_attributes,
+	.init_sys_ctx = dcn20_init_sys_ctx,
+	.init_vm_ctx = dcn20_init_vm_ctx,
+	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
+	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
+	.calc_vupdate_position = dcn10_calc_vupdate_position,
+	.apply_idle_power_optimizations = dcn32_apply_idle_power_optimizations,
+	.does_plane_fit_in_mall = dcn30_does_plane_fit_in_mall,
+	.set_backlight_level = dcn21_set_backlight_level,
+	.set_abm_immediate_disable = dcn21_set_abm_immediate_disable,
+	.hardware_release = dcn30_hardware_release,
+	.set_pipe = dcn21_set_pipe,
+	.set_disp_pattern_generator = dcn30_set_disp_pattern_generator,
+	.get_dcc_en_bits = dcn10_get_dcc_en_bits,
+	.commit_subvp_config = dcn32_commit_subvp_config,
+	.update_visual_confirm_color = dcn20_update_visual_confirm_color,
+};
+
+static const struct hwseq_private_funcs dcn32_private_funcs = {
+	.init_pipes = dcn10_init_pipes,
+	.update_plane_addr = dcn20_update_plane_addr,
+	.plane_atomic_disconnect = dcn10_plane_atomic_disconnect,
+	.update_mpcc = dcn20_update_mpcc,
+	.set_input_transfer_func = dcn30_set_input_transfer_func,
+	.set_output_transfer_func = dcn32_set_output_transfer_func,
+	.power_down = dce110_power_down,
+	.enable_display_power_gating = dcn10_dummy_display_power_gating,
+	.blank_pixel_data = dcn20_blank_pixel_data,
+	.reset_hw_ctx_wrap = dcn20_reset_hw_ctx_wrap,
+	.enable_stream_timing = dcn20_enable_stream_timing,
+	.edp_backlight_control = dce110_edp_backlight_control,
+	.disable_stream_gating = dcn20_disable_stream_gating,
+	.enable_stream_gating = dcn20_enable_stream_gating,
+	.setup_vupdate_interrupt = dcn20_setup_vupdate_interrupt,
+	.did_underflow_occur = dcn10_did_underflow_occur,
+	.init_blank = dcn20_init_blank,
+	.disable_vga = dcn20_disable_vga,
+	.bios_golden_init = dcn10_bios_golden_init,
+	.plane_atomic_disable = dcn20_plane_atomic_disable,
+	.plane_atomic_power_down = dcn10_plane_atomic_power_down,
+	.enable_power_gating_plane = dcn32_enable_power_gating_plane,
+	.hubp_pg_control = dcn32_hubp_pg_control,
+	.program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
+	.update_odm = dcn32_update_odm,
+	.dsc_pg_control = dcn32_dsc_pg_control,
+	.set_hdr_multiplier = dcn10_set_hdr_multiplier,
+	.verify_allow_pstate_change_high = dcn10_verify_allow_pstate_change_high,
+	.wait_for_blank_complete = dcn20_wait_for_blank_complete,
+	.dccg_init = dcn20_dccg_init,
+	.set_blend_lut = dcn30_set_blend_lut,
+	.set_shaper_3dlut = dcn20_set_shaper_3dlut,
+	.program_mall_pipe_config = dcn32_program_mall_pipe_config,
+	.subvp_update_force_pstate = dcn32_subvp_update_force_pstate,
+	.update_mall_sel = dcn32_update_mall_sel,
+};
+
+void dcn32_hw_sequencer_init_functions(struct dc *dc)
+{
+	dc->hwss = dcn32_funcs;
+	dc->hwseq->funcs = dcn32_private_funcs;
+
+	if (IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
+		dc->hwss.init_hw = dcn20_fpga_init_hw;
+		dc->hwseq->funcs.init_pipes = NULL;
+	}
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.h
new file mode 100644
index 000000000000..89a591eb2c23
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_DCN32_INIT_H__
+#define __DC_DCN32_INIT_H__
+
+struct dc;
+
+void dcn32_hw_sequencer_init_functions(struct dc *dc);
+
+#endif /* __DC_DCN32_INIT_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.c
new file mode 100644
index 000000000000..adf93cc8359c
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.c
@@ -0,0 +1,239 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "reg_helper.h"
+#include "resource.h"
+#include "mcif_wb.h"
+#include "dcn32_mmhubbub.h"
+
+
+#define REG(reg)\
+	mcif_wb30->mcif_wb_regs->reg
+
+#define CTX \
+	mcif_wb30->base.ctx
+
+#undef FN
+#define FN(reg_name, field_name) \
+	mcif_wb30->mcif_wb_shift->field_name, mcif_wb30->mcif_wb_mask->field_name
+
+#define MCIF_ADDR(addr) (((unsigned long long)addr & 0xffffffffff) + 0xFE) >> 8
+#define MCIF_ADDR_HIGH(addr) (unsigned long long)addr >> 40
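+
+/* MCIF_ADDR programs the low 40 bits of a buffer address in 256-byte units;
+ * MCIF_ADDR_HIGH programs the bits above bit 39.
+ */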
+
+/* wbif programming guide:
+ * 1. set up wbif parameter:
+ *    unsigned long long   luma_address[4];       //4 frame buffer
+ *    unsigned long long   chroma_address[4];
+ *    unsigned int	   luma_pitch;
+ *    unsigned int	   chroma_pitch;
+ *    unsigned int         warmup_pitch=0x10;     //256B align, the page size is 4KB when it is 0x10
+ *    unsigned int	   slice_lines;           //slice size
+ *    unsigned int         time_per_pixel;        // time per pixel, in ns
+ *    unsigned int         arbitration_slice;     // 0: 2048 bytes 1: 4096 bytes 2: 8192 Bytes
+ *    unsigned int         max_scaled_time;       // used for QOS generation
+ *    unsigned int         swlock=0x0;
+ *    unsigned int         cli_watermark[4];      //4 group urgent watermark
+ *    unsigned int         pstate_watermark[4];   //4 group pstate watermark
+ *    unsigned int         sw_int_en;             // Software interrupt enable, frame end and overflow
+ *    unsigned int         sw_slice_int_en;       // slice end interrupt enable
+ *    unsigned int         sw_overrun_int_en;     // overrun error interrupt enable
+ *    unsigned int         vce_int_en;            // VCE interrupt enable, frame end and overflow
+ *    unsigned int         vce_slice_int_en;      // VCE slice end interrupt enable, frame end and overflow
+ *
+ * 2. configure wbif register
+ *    a. call mmhubbub_config_wbif()
+ *
+ * 3. Enable wbif
+ *    call set_wbif_bufmgr_enable();
+ *
+ * 4. wbif_dump_status(), option, for debug purpose
+ *    the bufmgr status can show the progress of write back, can be used for debug purpose
+ */
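+
+/* The corresponding entry points in this file are exposed through
+ * dcn32_mmhubbub_funcs below: warmup_mcif, config_mcif_buf, config_mcif_arb
+ * and enable_mcif.
+ */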
+
+static void mmhubbub32_warmup_mcif(struct mcif_wb *mcif_wb,
+		struct mcif_warmup_params *params)
+{
+	struct dcn30_mmhubbub *mcif_wb30 = TO_DCN30_MMHUBBUB(mcif_wb);
+	union large_integer start_address_shift = {.quad_part = params->start_address.quad_part >> 5};
+
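+	/* The warmup base address, region size and address increment are all
+	 * programmed in 32-byte units, hence the >> 5 shifts below.
+	 */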
+	/* Set base address and region size for warmup */
+	REG_SET(MMHUBBUB_WARMUP_BASE_ADDR_HIGH, 0, MMHUBBUB_WARMUP_BASE_ADDR_HIGH, start_address_shift.high_part);
+	REG_SET(MMHUBBUB_WARMUP_BASE_ADDR_LOW, 0, MMHUBBUB_WARMUP_BASE_ADDR_LOW, start_address_shift.low_part);
+	REG_SET(MMHUBBUB_WARMUP_ADDR_REGION, 0, MMHUBBUB_WARMUP_ADDR_REGION, params->region_size >> 5);
+//	REG_SET(MMHUBBUB_WARMUP_P_VMID, 0, MMHUBBUB_WARMUP_P_VMID, params->p_vmid);
+
+	/* Set address increment and enable warmup */
+	REG_SET_3(MMHUBBUB_WARMUP_CONTROL_STATUS, 0, MMHUBBUB_WARMUP_EN, true,
+			MMHUBBUB_WARMUP_SW_INT_EN, true,
+			MMHUBBUB_WARMUP_INC_ADDR, params->address_increment >> 5);
+
+	/* Wait for an interrupt to signal warmup is completed */
+	REG_WAIT(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_SW_INT_STATUS, 1, 20, 100);
+
+	/* Acknowledge interrupt */
+	REG_UPDATE(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_SW_INT_ACK, 1);
+
+	/* Disable warmup */
+	REG_UPDATE(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_EN, false);
+}
+
+void mmhubbub32_config_mcif_buf(struct mcif_wb *mcif_wb,
+		struct mcif_buf_params *params,
+		unsigned int dest_height)
+{
+	struct dcn30_mmhubbub *mcif_wb30 = TO_DCN30_MMHUBBUB(mcif_wb);
+
+	/* buffer address for packing mode or Luma in planar mode */
+	REG_UPDATE(MCIF_WB_BUF_1_ADDR_Y, MCIF_WB_BUF_1_ADDR_Y, MCIF_ADDR(params->luma_address[0]));
+	REG_UPDATE(MCIF_WB_BUF_1_ADDR_Y_HIGH, MCIF_WB_BUF_1_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[0]));
+
+	/* buffer address for Chroma in planar mode (unused in packing mode) */
+	REG_UPDATE(MCIF_WB_BUF_1_ADDR_C, MCIF_WB_BUF_1_ADDR_C, MCIF_ADDR(params->chroma_address[0]));
+	REG_UPDATE(MCIF_WB_BUF_1_ADDR_C_HIGH, MCIF_WB_BUF_1_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[0]));
+
+	/* buffer address for packing mode or Luma in planar mode */
+	REG_UPDATE(MCIF_WB_BUF_2_ADDR_Y, MCIF_WB_BUF_2_ADDR_Y, MCIF_ADDR(params->luma_address[1]));
+	REG_UPDATE(MCIF_WB_BUF_2_ADDR_Y_HIGH, MCIF_WB_BUF_2_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[1]));
+
+	/* buffer address for Chroma in planar mode (unused in packing mode) */
+	REG_UPDATE(MCIF_WB_BUF_2_ADDR_C, MCIF_WB_BUF_2_ADDR_C, MCIF_ADDR(params->chroma_address[1]));
+	REG_UPDATE(MCIF_WB_BUF_2_ADDR_C_HIGH, MCIF_WB_BUF_2_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[1]));
+
+	/* buffer address for packing mode or Luma in planar mode */
+	REG_UPDATE(MCIF_WB_BUF_3_ADDR_Y, MCIF_WB_BUF_3_ADDR_Y, MCIF_ADDR(params->luma_address[2]));
+	REG_UPDATE(MCIF_WB_BUF_3_ADDR_Y_HIGH, MCIF_WB_BUF_3_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[2]));
+
+	/* buffer address for Chroma in planar mode (unused in packing mode) */
+	REG_UPDATE(MCIF_WB_BUF_3_ADDR_C, MCIF_WB_BUF_3_ADDR_C, MCIF_ADDR(params->chroma_address[2]));
+	REG_UPDATE(MCIF_WB_BUF_3_ADDR_C_HIGH, MCIF_WB_BUF_3_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[2]));
+
+	/* buffer address for packing mode or Luma in planar mode */
+	REG_UPDATE(MCIF_WB_BUF_4_ADDR_Y, MCIF_WB_BUF_4_ADDR_Y, MCIF_ADDR(params->luma_address[3]));
+	REG_UPDATE(MCIF_WB_BUF_4_ADDR_Y_HIGH, MCIF_WB_BUF_4_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[3]));
+
+	/* buffer address for Chroma in planar mode (unused in packing mode) */
+	REG_UPDATE(MCIF_WB_BUF_4_ADDR_C, MCIF_WB_BUF_4_ADDR_C, MCIF_ADDR(params->chroma_address[3]));
+	REG_UPDATE(MCIF_WB_BUF_4_ADDR_C_HIGH, MCIF_WB_BUF_4_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[3]));
+
+	/* setup luma & chroma size
+	 * should be large enough to contain a whole frame of luma data;
+	 * the programmed value is frame buffer size bits [27:8], 256-byte aligned
+	 */
+	REG_UPDATE(MCIF_WB_BUF_LUMA_SIZE, MCIF_WB_BUF_LUMA_SIZE, (params->luma_pitch>>8) * dest_height);
+	REG_UPDATE(MCIF_WB_BUF_CHROMA_SIZE, MCIF_WB_BUF_CHROMA_SIZE, (params->chroma_pitch>>8) * dest_height);
+
+	/* enable address fence */
+	REG_UPDATE(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUF_ADDR_FENCE_EN, 1);
+
+	/* setup pitch, the programmed value is [15:8], 256B align */
+	REG_UPDATE_2(MCIF_WB_BUF_PITCH, MCIF_WB_BUF_LUMA_PITCH, params->luma_pitch >> 8,
+			MCIF_WB_BUF_CHROMA_PITCH, params->chroma_pitch >> 8);
+}
+
+static void mmhubbub32_config_mcif_arb(struct mcif_wb *mcif_wb,
+		struct mcif_arb_params *params)
+{
+	struct dcn30_mmhubbub *mcif_wb30 = TO_DCN30_MMHUBBUB(mcif_wb);
+
+	/* Programmed by the video driver based on the CRTC timing (for DWB) */
+	REG_UPDATE(MCIF_WB_ARBITRATION_CONTROL, MCIF_WB_TIME_PER_PIXEL, params->time_per_pixel);
+
+	/* Programming dwb watermark */
+	/* Watermark to generate urgent in MCIF_WB_CLI, value is determined by MCIF_WB_CLI_WATERMARK_MASK. */
+	/* Program in ns. A formula will be provided in the pseudo code to calculate the value. */
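+	/* MCIF_WB_CLI_WATERMARK_MASK selects which of the four watermark sets
+	 * (A through D) the following MCIF_WB_CLI_WATERMARK write programs.
+	 */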
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK_MASK, 0x0);
+	/* urgent_watermarkA */
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK,  params->cli_watermark[0]);
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK_MASK, 0x1);
+	/* urgent_watermarkB */
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK,  params->cli_watermark[1]);
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK_MASK, 0x2);
+	/* urgent_watermarkC */
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK,  params->cli_watermark[2]);
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK_MASK, 0x3);
+	/* urgent_watermarkD */
+	REG_UPDATE(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK,  params->cli_watermark[3]);
+
+	/* Programming nb pstate watermark */
+	/* nbp_state_change_watermarkA */
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, NB_PSTATE_CHANGE_WATERMARK_MASK, 0x0);
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK,
+			NB_PSTATE_CHANGE_REFRESH_WATERMARK, params->pstate_watermark[0]);
+	/* nbp_state_change_watermarkB */
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, NB_PSTATE_CHANGE_WATERMARK_MASK, 0x1);
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK,
+			NB_PSTATE_CHANGE_REFRESH_WATERMARK, params->pstate_watermark[1]);
+	/* nbp_state_change_watermarkC */
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, NB_PSTATE_CHANGE_WATERMARK_MASK, 0x2);
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK,
+			NB_PSTATE_CHANGE_REFRESH_WATERMARK, params->pstate_watermark[2]);
+	/* nbp_state_change_watermarkD */
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, NB_PSTATE_CHANGE_WATERMARK_MASK, 0x3);
+	REG_UPDATE(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK,
+			NB_PSTATE_CHANGE_REFRESH_WATERMARK, params->pstate_watermark[3]);
+
+	/* dram_speed_change_duration - register removed */
+	//REG_UPDATE(MCIF_WB_DRAM_SPEED_CHANGE_DURATION_VBI,
+	//		MCIF_WB_DRAM_SPEED_CHANGE_DURATION_VBI, params->dram_speed_change_duration);
+
+	/* max_scaled_time */
+	REG_UPDATE(MULTI_LEVEL_QOS_CTRL, MAX_SCALED_TIME_TO_URGENT, params->max_scaled_time);
+
+	/* slice_lines */
+	REG_UPDATE(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_SLICE_SIZE, params->slice_lines-1);
+
+	/* Set arbitration unit for Luma/Chroma */
+	/* arb_unit=2 should be chosen for more efficiency */
+	/* Arbitration size, 0: 2048 bytes 1: 4096 bytes 2: 8192 Bytes */
+	REG_UPDATE(MCIF_WB_ARBITRATION_CONTROL, MCIF_WB_CLIENT_ARBITRATION_SLICE,  params->arbitration_slice);
+}
+
+const struct mcif_wb_funcs dcn32_mmhubbub_funcs = {
+	.warmup_mcif		= mmhubbub32_warmup_mcif,
+	.enable_mcif		= mmhubbub2_enable_mcif,
+	.disable_mcif		= mmhubbub2_disable_mcif,
+	.config_mcif_buf	= mmhubbub32_config_mcif_buf,
+	.config_mcif_arb	= mmhubbub32_config_mcif_arb,
+	.config_mcif_irq	= mmhubbub2_config_mcif_irq,
+	.dump_frame		= mcifwb2_dump_frame,
+};
+
+void dcn32_mmhubbub_construct(struct dcn30_mmhubbub *mcif_wb30,
+		struct dc_context *ctx,
+		const struct dcn30_mmhubbub_registers *mcif_wb_regs,
+		const struct dcn30_mmhubbub_shift *mcif_wb_shift,
+		const struct dcn30_mmhubbub_mask *mcif_wb_mask,
+		int inst)
+{
+	mcif_wb30->base.ctx = ctx;
+
+	mcif_wb30->base.inst = inst;
+	mcif_wb30->base.funcs = &dcn32_mmhubbub_funcs;
+
+	mcif_wb30->mcif_wb_regs = mcif_wb_regs;
+	mcif_wb30->mcif_wb_shift = mcif_wb_shift;
+	mcif_wb30->mcif_wb_mask = mcif_wb_mask;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.h
new file mode 100644
index 000000000000..22355051f5f7
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mmhubbub.h
@@ -0,0 +1,225 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_MCIF_WB_DCN32_H__
+#define __DC_MCIF_WB_DCN32_H__
+
+#include "dcn20/dcn20_mmhubbub.h"
+#include "dcn30/dcn30_mmhubbub.h"
+
+#define MCIF_WB_COMMON_REG_LIST_DCN32(inst) \
+	SRI2(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUFMGR_STATUS, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_PITCH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_STATUS, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_STATUS2, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_STATUS, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_STATUS2, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_STATUS, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_STATUS2, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_STATUS, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_STATUS2, MCIF_WB, inst),\
+	SRI2(MCIF_WB_ARBITRATION_CONTROL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_SCLK_CHANGE, MCIF_WB, inst),\
+	SRI2(MCIF_WB_TEST_DEBUG_INDEX, MCIF_WB, inst),\
+	SRI2(MCIF_WB_TEST_DEBUG_DATA, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_ADDR_Y, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_ADDR_C, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_ADDR_Y, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_ADDR_C, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_ADDR_Y, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_ADDR_C, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_ADDR_Y, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_ADDR_C, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, MMHUBBUB, inst),\
+	SRI2(MCIF_WB_NB_PSTATE_CONTROL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_WATERMARK, MMHUBBUB, inst),\
+	SRI2(MCIF_WB_CLOCK_GATER_CONTROL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_SELF_REFRESH_CONTROL, MCIF_WB, inst),\
+	SRI2(MULTI_LEVEL_QOS_CTRL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_SECURITY_LEVEL, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_LUMA_SIZE, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_CHROMA_SIZE, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_ADDR_Y_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_ADDR_C_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_ADDR_Y_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_ADDR_C_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_ADDR_Y_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_ADDR_C_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_ADDR_Y_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_ADDR_C_HIGH, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_1_RESOLUTION, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_2_RESOLUTION, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_3_RESOLUTION, MCIF_WB, inst),\
+	SRI2(MCIF_WB_BUF_4_RESOLUTION, MCIF_WB, inst),\
+	SRI2(MMHUBBUB_MEM_PWR_CNTL, MMHUBBUB, inst),\
+	SRI2(MMHUBBUB_WARMUP_ADDR_REGION, MMHUBBUB, inst),\
+	SRI2(MMHUBBUB_WARMUP_BASE_ADDR_HIGH, MMHUBBUB, inst),\
+	SRI2(MMHUBBUB_WARMUP_BASE_ADDR_LOW, MMHUBBUB, inst),\
+	SRI2(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB, inst)
+
+
+#define MCIF_WB_COMMON_MASK_SH_LIST_DCN32(mask_sh) \
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUFMGR_ENABLE, mask_sh),\
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUFMGR_SW_INT_EN, mask_sh),\
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUFMGR_SW_INT_ACK, mask_sh),\
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUFMGR_SW_SLICE_INT_EN, mask_sh),\
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUFMGR_SW_OVERRUN_INT_EN, mask_sh),\
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUFMGR_SW_LOCK, mask_sh),\
+	SF(MCIF_WB_BUFMGR_SW_CONTROL, MCIF_WB_BUF_ADDR_FENCE_EN, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_VCE_INT_STATUS, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_SW_INT_STATUS, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_SW_OVERRUN_INT_STATUS, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_CUR_BUF, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_BUFTAG, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_CUR_LINE_L, mask_sh),\
+	SF(MCIF_WB_BUFMGR_STATUS, MCIF_WB_BUFMGR_NEXT_BUF, mask_sh),\
+	SF(MCIF_WB_BUF_PITCH, MCIF_WB_BUF_LUMA_PITCH, mask_sh),\
+	SF(MCIF_WB_BUF_PITCH, MCIF_WB_BUF_CHROMA_PITCH, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_ACTIVE, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_SW_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_VCE_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_OVERFLOW, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_DISABLE, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_MODE, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_BUFTAG, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_NXT_BUF, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS, MCIF_WB_BUF_1_CUR_LINE_L, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS2, MCIF_WB_BUF_1_NEW_CONTENT, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS2, MCIF_WB_BUF_1_COLOR_DEPTH, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS2, MCIF_WB_BUF_1_TMZ_BLACK_PIXEL, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS2, MCIF_WB_BUF_1_TMZ, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS2, MCIF_WB_BUF_1_Y_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_1_STATUS2, MCIF_WB_BUF_1_C_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_ACTIVE, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_SW_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_VCE_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_OVERFLOW, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_DISABLE, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_MODE, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_BUFTAG, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_NXT_BUF, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS, MCIF_WB_BUF_2_CUR_LINE_L, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS2, MCIF_WB_BUF_2_NEW_CONTENT, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS2, MCIF_WB_BUF_2_COLOR_DEPTH, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS2, MCIF_WB_BUF_2_TMZ_BLACK_PIXEL, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS2, MCIF_WB_BUF_2_TMZ, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS2, MCIF_WB_BUF_2_Y_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_2_STATUS2, MCIF_WB_BUF_2_C_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_ACTIVE, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_SW_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_VCE_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_OVERFLOW, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_DISABLE, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_MODE, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_BUFTAG, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_NXT_BUF, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS, MCIF_WB_BUF_3_CUR_LINE_L, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS2, MCIF_WB_BUF_3_NEW_CONTENT, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS2, MCIF_WB_BUF_3_COLOR_DEPTH, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS2, MCIF_WB_BUF_3_TMZ_BLACK_PIXEL, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS2, MCIF_WB_BUF_3_TMZ, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS2, MCIF_WB_BUF_3_Y_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_3_STATUS2, MCIF_WB_BUF_3_C_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_ACTIVE, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_SW_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_VCE_LOCKED, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_OVERFLOW, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_DISABLE, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_MODE, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_BUFTAG, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_NXT_BUF, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS, MCIF_WB_BUF_4_CUR_LINE_L, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS2, MCIF_WB_BUF_4_NEW_CONTENT, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS2, MCIF_WB_BUF_4_COLOR_DEPTH, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS2, MCIF_WB_BUF_4_TMZ_BLACK_PIXEL, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS2, MCIF_WB_BUF_4_TMZ, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS2, MCIF_WB_BUF_4_Y_OVERRUN, mask_sh),\
+	SF(MCIF_WB_BUF_4_STATUS2, MCIF_WB_BUF_4_C_OVERRUN, mask_sh),\
+	SF(MCIF_WB_ARBITRATION_CONTROL, MCIF_WB_CLIENT_ARBITRATION_SLICE, mask_sh),\
+	SF(MCIF_WB_ARBITRATION_CONTROL, MCIF_WB_TIME_PER_PIXEL, mask_sh),\
+	SF(MCIF_WB_SCLK_CHANGE, WM_CHANGE_ACK_FORCE_ON, mask_sh),\
+	SF(MCIF_WB_TEST_DEBUG_INDEX, MCIF_WB_TEST_DEBUG_INDEX, mask_sh),\
+	SF(MCIF_WB_TEST_DEBUG_DATA, MCIF_WB_TEST_DEBUG_DATA, mask_sh),\
+	SF(MCIF_WB_BUF_1_ADDR_Y, MCIF_WB_BUF_1_ADDR_Y, mask_sh),\
+	SF(MCIF_WB_BUF_1_ADDR_C, MCIF_WB_BUF_1_ADDR_C, mask_sh),\
+	SF(MCIF_WB_BUF_2_ADDR_Y, MCIF_WB_BUF_2_ADDR_Y, mask_sh),\
+	SF(MCIF_WB_BUF_2_ADDR_C, MCIF_WB_BUF_2_ADDR_C, mask_sh),\
+	SF(MCIF_WB_BUF_3_ADDR_Y, MCIF_WB_BUF_3_ADDR_Y, mask_sh),\
+	SF(MCIF_WB_BUF_3_ADDR_C, MCIF_WB_BUF_3_ADDR_C, mask_sh),\
+	SF(MCIF_WB_BUF_4_ADDR_Y, MCIF_WB_BUF_4_ADDR_Y, mask_sh),\
+	SF(MCIF_WB_BUF_4_ADDR_C, MCIF_WB_BUF_4_ADDR_C, mask_sh),\
+	SF(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_VCE_LOCK_IGNORE, mask_sh),\
+	SF(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_VCE_INT_EN, mask_sh),\
+	SF(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_VCE_INT_ACK, mask_sh),\
+	SF(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_VCE_SLICE_INT_EN, mask_sh),\
+	SF(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_VCE_LOCK, mask_sh),\
+	SF(MCIF_WB_BUFMGR_VCE_CONTROL, MCIF_WB_BUFMGR_SLICE_SIZE, mask_sh),\
+	SF(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, NB_PSTATE_CHANGE_REFRESH_WATERMARK, mask_sh),\
+	SF(MCIF_WB_NB_PSTATE_LATENCY_WATERMARK, NB_PSTATE_CHANGE_WATERMARK_MASK, mask_sh),\
+	SF(MCIF_WB_NB_PSTATE_CONTROL, NB_PSTATE_CHANGE_FORCE_ON, mask_sh),\
+	SF(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK, mask_sh),\
+	SF(MCIF_WB_WATERMARK, MCIF_WB_CLI_WATERMARK_MASK, mask_sh),\
+	SF(MCIF_WB_CLOCK_GATER_CONTROL, MCIF_WB_CLI_CLOCK_GATER_OVERRIDE, mask_sh),\
+	SF(MCIF_WB_SELF_REFRESH_CONTROL, PERFRAME_SELF_REFRESH, mask_sh),\
+	SF(MULTI_LEVEL_QOS_CTRL, MAX_SCALED_TIME_TO_URGENT, mask_sh),\
+	SF(MCIF_WB_SECURITY_LEVEL, MCIF_WB_SECURITY_LEVEL, mask_sh),\
+	SF(MCIF_WB_BUF_LUMA_SIZE, MCIF_WB_BUF_LUMA_SIZE, mask_sh),\
+	SF(MCIF_WB_BUF_CHROMA_SIZE, MCIF_WB_BUF_CHROMA_SIZE, mask_sh),\
+	SF(MCIF_WB_BUF_1_ADDR_Y_HIGH, MCIF_WB_BUF_1_ADDR_Y_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_1_ADDR_C_HIGH, MCIF_WB_BUF_1_ADDR_C_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_2_ADDR_Y_HIGH, MCIF_WB_BUF_2_ADDR_Y_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_2_ADDR_C_HIGH, MCIF_WB_BUF_2_ADDR_C_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_3_ADDR_Y_HIGH, MCIF_WB_BUF_3_ADDR_Y_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_3_ADDR_C_HIGH, MCIF_WB_BUF_3_ADDR_C_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_4_ADDR_Y_HIGH, MCIF_WB_BUF_4_ADDR_Y_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_4_ADDR_C_HIGH, MCIF_WB_BUF_4_ADDR_C_HIGH, mask_sh),\
+	SF(MCIF_WB_BUF_1_RESOLUTION, MCIF_WB_BUF_1_RESOLUTION_WIDTH, mask_sh),\
+	SF(MCIF_WB_BUF_1_RESOLUTION, MCIF_WB_BUF_1_RESOLUTION_HEIGHT, mask_sh),\
+	SF(MCIF_WB_BUF_2_RESOLUTION, MCIF_WB_BUF_2_RESOLUTION_WIDTH, mask_sh),\
+	SF(MCIF_WB_BUF_2_RESOLUTION, MCIF_WB_BUF_2_RESOLUTION_HEIGHT, mask_sh),\
+	SF(MCIF_WB_BUF_3_RESOLUTION, MCIF_WB_BUF_3_RESOLUTION_WIDTH, mask_sh),\
+	SF(MCIF_WB_BUF_3_RESOLUTION, MCIF_WB_BUF_3_RESOLUTION_HEIGHT, mask_sh),\
+	SF(MCIF_WB_BUF_4_RESOLUTION, MCIF_WB_BUF_4_RESOLUTION_WIDTH, mask_sh),\
+	SF(MCIF_WB_BUF_4_RESOLUTION, MCIF_WB_BUF_4_RESOLUTION_HEIGHT, mask_sh),\
+	SF(MMHUBBUB_WARMUP_ADDR_REGION, MMHUBBUB_WARMUP_ADDR_REGION, mask_sh),\
+	SF(MMHUBBUB_WARMUP_BASE_ADDR_HIGH, MMHUBBUB_WARMUP_BASE_ADDR_HIGH, mask_sh),\
+	SF(MMHUBBUB_WARMUP_BASE_ADDR_LOW, MMHUBBUB_WARMUP_BASE_ADDR_LOW, mask_sh),\
+	SF(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_EN, mask_sh),\
+	SF(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_SW_INT_EN, mask_sh),\
+	SF(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_SW_INT_STATUS, mask_sh),\
+	SF(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_SW_INT_ACK, mask_sh),\
+	SF(MMHUBBUB_WARMUP_CONTROL_STATUS, MMHUBBUB_WARMUP_INC_ADDR, mask_sh)
+
+
+void dcn32_mmhubbub_construct(struct dcn30_mmhubbub *mcif_wb30,
+	struct dc_context *ctx,
+	const struct dcn30_mmhubbub_registers *mcif_wb_regs,
+	const struct dcn30_mmhubbub_shift *mcif_wb_shift,
+	const struct dcn30_mmhubbub_mask *mcif_wb_mask,
+	int inst);
+
+#endif //__DC_MCIF_WB_DCN32_H__
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
new file mode 100644
index 000000000000..a308f33d3d0d
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c
@@ -0,0 +1,810 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "reg_helper.h"
+#include "dcn30/dcn30_mpc.h"
+#include "dcn30/dcn30_cm_common.h"
+#include "dcn30/dcn30_mpc.h"
+#include "basics/conversion.h"
+#include "dcn10/dcn10_cm_common.h"
+#include "dc.h"
+
+#define REG(reg)\
+	mpc30->mpc_regs->reg
+
+#define CTX \
+	mpc30->base.ctx
+
+#undef FN
+#define FN(reg_name, field_name) \
+	mpc30->mpc_shift->field_name, mpc30->mpc_mask->field_name
+
+
+static void mpc32_mpc_init(struct mpc *mpc)
+{
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+	int mpcc_id;
+
+	mpc1_mpc_init(mpc);
+
+	if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc) {
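+		/* Low power mode 3 is the deepest setting for the MCM
+		 * shaper/3DLUT/1DLUT and OGAM memories (cf. the OPTC memory power
+		 * programming in dcn32_init_hw, where mode 3 means shutdown when
+		 * unassigned).
+		 */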
+		if (mpc30->mpc_mask->MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE && mpc30->mpc_mask->MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE) {
+			for (mpcc_id = 0; mpcc_id < mpc30->num_mpcc; mpcc_id++) {
+				REG_UPDATE(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE, 3);
+				REG_UPDATE(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE, 3);
+				REG_UPDATE(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_1DLUT_MEM_LOW_PWR_MODE, 3);
+			}
+		}
+		if (mpc30->mpc_mask->MPCC_OGAM_MEM_LOW_PWR_MODE) {
+			for (mpcc_id = 0; mpcc_id < mpc30->num_mpcc; mpcc_id++)
+				REG_UPDATE(MPCC_MEM_PWR_CTRL[mpcc_id], MPCC_OGAM_MEM_LOW_PWR_MODE, 3);
+		}
+	}
+}
+
+
+static enum dc_lut_mode mpc32_get_shaper_current(struct mpc *mpc, uint32_t mpcc_id)
+{
+	enum dc_lut_mode mode;
+	uint32_t state_mode;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	REG_GET(MPCC_MCM_SHAPER_CONTROL[mpcc_id],
+			MPCC_MCM_SHAPER_MODE_CURRENT, &state_mode);
+
+	switch (state_mode) {
+	case 0:
+		mode = LUT_BYPASS;
+		break;
+	case 1:
+		mode = LUT_RAM_A;
+		break;
+	case 2:
+		mode = LUT_RAM_B;
+		break;
+	default:
+		mode = LUT_BYPASS;
+		break;
+	}
+
+	return mode;
+}
+
+
+static void mpc32_configure_shaper_lut(
+		struct mpc *mpc,
+		bool is_ram_a,
+		uint32_t mpcc_id)
+{
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
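+	/* Enable shaper LUT writes, select RAM A or B, and reset the write index */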
+	REG_UPDATE(MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK[mpcc_id],
+			MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK, 7);
+	REG_UPDATE(MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK[mpcc_id],
+			MPCC_MCM_SHAPER_LUT_WRITE_SEL, is_ram_a ? 0 : 1);
+	REG_SET(MPCC_MCM_SHAPER_LUT_INDEX[mpcc_id], 0, MPCC_MCM_SHAPER_LUT_INDEX, 0);
+}
+
+
+static void mpc32_program_shaper_luta_settings(
+		struct mpc *mpc,
+		const struct pwl_params *params,
+		uint32_t mpcc_id)
+{
+	const struct gamma_curve *curve;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	REG_SET_2(MPCC_MCM_SHAPER_RAMA_START_CNTL_B[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].blue.custom_float_x,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMA_START_CNTL_G[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].green.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMA_START_CNTL_R[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].red.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+
+	REG_SET_2(MPCC_MCM_SHAPER_RAMA_END_CNTL_B[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].blue.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].blue.custom_float_y);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMA_END_CNTL_G[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].green.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].green.custom_float_y);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMA_END_CNTL_R[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].red.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].red.custom_float_y);
+
+	curve = params->arr_curve_points;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_0_1[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_2_3[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_4_5[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_6_7[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_8_9[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_10_11[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_12_13[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_14_15[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_16_17[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_18_19[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_20_21[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_22_23[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_24_25[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_26_27[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_28_29[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_30_31[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMA_REGION_32_33[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+}
+
+
+static void mpc32_program_shaper_lutb_settings(
+		struct mpc *mpc,
+		const struct pwl_params *params,
+		uint32_t mpcc_id)
+{
+	const struct gamma_curve *curve;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_B[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].blue.custom_float_x,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_G[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].green.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMB_START_CNTL_R[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, params->corner_points[0].red.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, 0);
+
+	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_B[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].blue.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].blue.custom_float_y);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_G[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].green.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].green.custom_float_y);
+	REG_SET_2(MPCC_MCM_SHAPER_RAMB_END_CNTL_R[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, params->corner_points[1].red.custom_float_x,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, params->corner_points[1].red.custom_float_y);
+
+	curve = params->arr_curve_points;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_0_1[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_2_3[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_4_5[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_6_7[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_8_9[mpcc_id], 0,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+		MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_10_11[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_12_13[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_14_15[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_16_17[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_18_19[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_20_21[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_22_23[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_24_25[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_26_27[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_28_29[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_30_31[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+
+	curve += 2;
+	REG_SET_4(MPCC_MCM_SHAPER_RAMB_REGION_32_33[mpcc_id], 0,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, curve[0].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, curve[0].segments_num,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, curve[1].offset,
+			MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, curve[1].segments_num);
+}
+
+
+static void mpc32_program_shaper_lut(
+		struct mpc *mpc,
+		const struct pwl_result_data *rgb,
+		uint32_t num,
+		uint32_t mpcc_id)
+{
+	uint32_t i, red, green, blue;
+	uint32_t  red_delta, green_delta, blue_delta;
+	uint32_t  red_value, green_value, blue_value;
+
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	for (i = 0 ; i < num; i++) {
+
+		red   = rgb[i].red_reg;
+		green = rgb[i].green_reg;
+		blue  = rgb[i].blue_reg;
+
+		red_delta   = rgb[i].delta_red_reg;
+		green_delta = rgb[i].delta_green_reg;
+		blue_delta  = rgb[i].delta_blue_reg;
+
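+		/* Each shaper LUT word packs the 14-bit base value in the
+		 * low bits and its 10-bit delta in bits [23:14]. */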
+		red_value   = ((red_delta   & 0x3ff) << 14) | (red   & 0x3fff);
+		green_value = ((green_delta & 0x3ff) << 14) | (green & 0x3fff);
+		blue_value  = ((blue_delta  & 0x3ff) << 14) | (blue  & 0x3fff);
+
+		REG_SET(MPCC_MCM_SHAPER_LUT_DATA[mpcc_id], 0, MPCC_MCM_SHAPER_LUT_DATA, red_value);
+		REG_SET(MPCC_MCM_SHAPER_LUT_DATA[mpcc_id], 0, MPCC_MCM_SHAPER_LUT_DATA, green_value);
+		REG_SET(MPCC_MCM_SHAPER_LUT_DATA[mpcc_id], 0, MPCC_MCM_SHAPER_LUT_DATA, blue_value);
+	}
+
+}
+
+
+static void mpc32_power_on_shaper_3dlut(
+		struct mpc *mpc,
+		uint32_t mpcc_id,
+		bool power_on)
+{
+	uint32_t power_status_shaper = 2;
+	uint32_t power_status_3dlut  = 2;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+	int max_retries = 10;
+
+	REG_SET(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], 0,
+			MPCC_MCM_3DLUT_MEM_PWR_DIS, power_on == true ? 1:0);
+	/* wait for memory to fully power up */
+	if (power_on && mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc) {
+		REG_WAIT(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_SHAPER_MEM_PWR_STATE, 0, 1, max_retries);
+		REG_WAIT(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_3DLUT_MEM_PWR_STATE, 0, 1, max_retries);
+	}
+
+	/* Read status is not mandatory; it is just for debugging. */
+	REG_GET(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_SHAPER_MEM_PWR_STATE, &power_status_shaper);
+	REG_GET(MPCC_MCM_MEM_PWR_CTRL[mpcc_id], MPCC_MCM_3DLUT_MEM_PWR_STATE, &power_status_3dlut);
+
+	if (power_status_shaper != 0 && power_on == true)
+		BREAK_TO_DEBUGGER();
+
+	if (power_status_3dlut != 0 && power_on == true)
+		BREAK_TO_DEBUGGER();
+}
+
+
+bool mpc32_program_shaper(
+		struct mpc *mpc,
+		const struct pwl_params *params,
+		uint32_t mpcc_id)
+{
+	enum dc_lut_mode current_mode;
+	enum dc_lut_mode next_mode;
+
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	if (params == NULL) {
+		REG_SET(MPCC_MCM_SHAPER_CONTROL[mpcc_id], 0, MPCC_MCM_SHAPER_LUT_MODE, 0);
+		return false;
+	}
+
+	if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc)
+		mpc32_power_on_shaper_3dlut(mpc, mpcc_id, true);
+
+	current_mode = mpc32_get_shaper_current(mpc, mpcc_id);
+
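+	/* Program the RAM bank that is not currently active so the new
+	 * shaper curve only takes effect on the final mode switch below. */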
+	if (current_mode == LUT_BYPASS || current_mode == LUT_RAM_A)
+		next_mode = LUT_RAM_B;
+	else
+		next_mode = LUT_RAM_A;
+
+	mpc32_configure_shaper_lut(mpc, next_mode == LUT_RAM_A ? true:false, mpcc_id);
+
+	if (next_mode == LUT_RAM_A)
+		mpc32_program_shaper_luta_settings(mpc, params, mpcc_id);
+	else
+		mpc32_program_shaper_lutb_settings(mpc, params, mpcc_id);
+
+	mpc32_program_shaper_lut(
+			mpc, params->rgb_resulted, params->hw_points_num, mpcc_id);
+
+	REG_SET(MPCC_MCM_SHAPER_CONTROL[mpcc_id], 0, MPCC_MCM_SHAPER_LUT_MODE, next_mode == LUT_RAM_A ? 1:2);
+	mpc32_power_on_shaper_3dlut(mpc, mpcc_id, false);
+
+	return true;
+}
+
+
+static enum dc_lut_mode get3dlut_config(
+			struct mpc *mpc,
+			bool *is_17x17x17,
+			bool *is_12bits_color_channel,
+			int mpcc_id)
+{
+	uint32_t i_mode, i_enable_10bits, lut_size;
+	enum dc_lut_mode mode;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	REG_GET(MPCC_MCM_3DLUT_MODE[mpcc_id],
+			MPCC_MCM_3DLUT_MODE_CURRENT,  &i_mode);
+
+	REG_GET(MPCC_MCM_3DLUT_READ_WRITE_CONTROL[mpcc_id],
+			MPCC_MCM_3DLUT_30BIT_EN, &i_enable_10bits);
+
+	switch (i_mode) {
+	case 0:
+		mode = LUT_BYPASS;
+		break;
+	case 1:
+		mode = LUT_RAM_A;
+		break;
+	case 2:
+		mode = LUT_RAM_B;
+		break;
+	default:
+		mode = LUT_BYPASS;
+		break;
+	}
+	if (i_enable_10bits > 0)
+		*is_12bits_color_channel = false;
+	else
+		*is_12bits_color_channel = true;
+
+	REG_GET(MPCC_MCM_3DLUT_MODE[mpcc_id], MPCC_MCM_3DLUT_SIZE, &lut_size);
+
+	if (lut_size == 0)
+		*is_17x17x17 = true;
+	else
+		*is_17x17x17 = false;
+
+	return mode;
+}
+
+
+static void mpc32_select_3dlut_ram(
+		struct mpc *mpc,
+		enum dc_lut_mode mode,
+		bool is_color_channel_12bits,
+		uint32_t mpcc_id)
+{
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	REG_UPDATE_2(MPCC_MCM_3DLUT_READ_WRITE_CONTROL[mpcc_id],
+		MPCC_MCM_3DLUT_RAM_SEL, mode == LUT_RAM_A ? 0 : 1,
+		MPCC_MCM_3DLUT_30BIT_EN, is_color_channel_12bits == true ? 0:1);
+}
+
+
+static void mpc32_select_3dlut_ram_mask(
+		struct mpc *mpc,
+		uint32_t ram_selection_mask,
+		uint32_t mpcc_id)
+{
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	REG_UPDATE(MPCC_MCM_3DLUT_READ_WRITE_CONTROL[mpcc_id], MPCC_MCM_3DLUT_WRITE_EN_MASK,
+			ram_selection_mask);
+	REG_SET(MPCC_MCM_3DLUT_INDEX[mpcc_id], 0, MPCC_MCM_3DLUT_INDEX, 0);
+}
+
+
+static void mpc32_set3dlut_ram12(
+		struct mpc *mpc,
+		const struct dc_rgb *lut,
+		uint32_t entries,
+		uint32_t mpcc_id)
+{
+	uint32_t i, red, green, blue, red1, green1, blue1;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
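+	/* The 3DLUT data register takes two components per write, so LUT
+	 * entries are consumed in pairs; the shift by 4 presumably aligns
+	 * the 12-bit components within the register fields. */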
+	for (i = 0 ; i < entries; i += 2) {
+		red   = lut[i].red<<4;
+		green = lut[i].green<<4;
+		blue  = lut[i].blue<<4;
+		red1   = lut[i+1].red<<4;
+		green1 = lut[i+1].green<<4;
+		blue1  = lut[i+1].blue<<4;
+
+		REG_SET_2(MPCC_MCM_3DLUT_DATA[mpcc_id], 0,
+				MPCC_MCM_3DLUT_DATA0, red,
+				MPCC_MCM_3DLUT_DATA1, red1);
+
+		REG_SET_2(MPCC_MCM_3DLUT_DATA[mpcc_id], 0,
+				MPCC_MCM_3DLUT_DATA0, green,
+				MPCC_MCM_3DLUT_DATA1, green1);
+
+		REG_SET_2(MPCC_MCM_3DLUT_DATA[mpcc_id], 0,
+				MPCC_MCM_3DLUT_DATA0, blue,
+				MPCC_MCM_3DLUT_DATA1, blue1);
+	}
+}
+
+
+static void mpc32_set3dlut_ram10(
+		struct mpc *mpc,
+		const struct dc_rgb *lut,
+		uint32_t entries,
+		uint32_t mpcc_id)
+{
+	uint32_t i, red, green, blue, value;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	for (i = 0; i < entries; i++) {
+		red   = lut[i].red;
+		green = lut[i].green;
+		blue  = lut[i].blue;
+		//should we shift red 22bit and green 12?
+		value = (red<<20) | (green<<10) | blue;
+
+		REG_SET(MPCC_MCM_3DLUT_DATA_30BIT[mpcc_id], 0, MPCC_MCM_3DLUT_DATA_30BIT, value);
+	}
+
+}
+
+
+static void mpc32_set_3dlut_mode(
+		struct mpc *mpc,
+		enum dc_lut_mode mode,
+		bool is_color_channel_12bits,
+		bool is_lut_size17x17x17,
+		uint32_t mpcc_id)
+{
+	uint32_t lut_mode;
+	struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc);
+
+	if (mode == LUT_BYPASS)
+		lut_mode = 0;
+	else if (mode == LUT_RAM_A)
+		lut_mode = 1;
+	else
+		lut_mode = 2;
+
+	REG_UPDATE_2(MPCC_MCM_3DLUT_MODE[mpcc_id],
+			MPCC_MCM_3DLUT_MODE, lut_mode,
+			MPCC_MCM_3DLUT_SIZE, is_lut_size17x17x17 == true ? 0 : 1);
+}
+
+
+bool mpc32_program_3dlut(
+		struct mpc *mpc,
+		const struct tetrahedral_params *params,
+		int mpcc_id)
+{
+	enum dc_lut_mode mode;
+	bool is_17x17x17;
+	bool is_12bits_color_channel;
+	const struct dc_rgb *lut0;
+	const struct dc_rgb *lut1;
+	const struct dc_rgb *lut2;
+	const struct dc_rgb *lut3;
+	int lut_size0;
+	int lut_size;
+
+	if (params == NULL) {
+		mpc32_set_3dlut_mode(mpc, LUT_BYPASS, false, false, mpcc_id);
+		return false;
+	}
+	mpc32_power_on_shaper_3dlut(mpc, mpcc_id, true);
+
+	mode = get3dlut_config(mpc, &is_17x17x17, &is_12bits_color_channel, mpcc_id);
+
+	if (mode == LUT_BYPASS || mode == LUT_RAM_B)
+		mode = LUT_RAM_A;
+	else
+		mode = LUT_RAM_B;
+
+	is_17x17x17 = !params->use_tetrahedral_9;
+	is_12bits_color_channel = params->use_12bits;
+	if (is_17x17x17) {
+		lut0 = params->tetrahedral_17.lut0;
+		lut1 = params->tetrahedral_17.lut1;
+		lut2 = params->tetrahedral_17.lut2;
+		lut3 = params->tetrahedral_17.lut3;
+		lut_size0 = sizeof(params->tetrahedral_17.lut0)/
+					sizeof(params->tetrahedral_17.lut0[0]);
+		lut_size  = sizeof(params->tetrahedral_17.lut1)/
+					sizeof(params->tetrahedral_17.lut1[0]);
+	} else {
+		lut0 = params->tetrahedral_9.lut0;
+		lut1 = params->tetrahedral_9.lut1;
+		lut2 = params->tetrahedral_9.lut2;
+		lut3 = params->tetrahedral_9.lut3;
+		lut_size0 = sizeof(params->tetrahedral_9.lut0)/
+				sizeof(params->tetrahedral_9.lut0[0]);
+		lut_size  = sizeof(params->tetrahedral_9.lut1)/
+				sizeof(params->tetrahedral_9.lut1[0]);
+	}
+
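+	/* Each write-enable mask used below (0x1/0x2/0x4/0x8) selects one of
+	 * the four sub-tables (lut0..lut3) for loading. */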
+	mpc32_select_3dlut_ram(mpc, mode,
+				is_12bits_color_channel, mpcc_id);
+	mpc32_select_3dlut_ram_mask(mpc, 0x1, mpcc_id);
+	if (is_12bits_color_channel)
+		mpc32_set3dlut_ram12(mpc, lut0, lut_size0, mpcc_id);
+	else
+		mpc32_set3dlut_ram10(mpc, lut0, lut_size0, mpcc_id);
+
+	mpc32_select_3dlut_ram_mask(mpc, 0x2, mpcc_id);
+	if (is_12bits_color_channel)
+		mpc32_set3dlut_ram12(mpc, lut1, lut_size, mpcc_id);
+	else
+		mpc32_set3dlut_ram10(mpc, lut1, lut_size, mpcc_id);
+
+	mpc32_select_3dlut_ram_mask(mpc, 0x4, mpcc_id);
+	if (is_12bits_color_channel)
+		mpc32_set3dlut_ram12(mpc, lut2, lut_size, mpcc_id);
+	else
+		mpc32_set3dlut_ram10(mpc, lut2, lut_size, mpcc_id);
+
+	mpc32_select_3dlut_ram_mask(mpc, 0x8, mpcc_id);
+	if (is_12bits_color_channel)
+		mpc32_set3dlut_ram12(mpc, lut3, lut_size, mpcc_id);
+	else
+		mpc32_set3dlut_ram10(mpc, lut3, lut_size, mpcc_id);
+
+	mpc32_set_3dlut_mode(mpc, mode, is_12bits_color_channel,
+					is_17x17x17, mpcc_id);
+
+	if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc)
+		mpc32_power_on_shaper_3dlut(mpc, mpcc_id, false);
+
+	return true;
+}
+
+const struct mpc_funcs dcn32_mpc_funcs = {
+	.read_mpcc_state = mpc1_read_mpcc_state,
+	.insert_plane = mpc1_insert_plane,
+	.remove_mpcc = mpc1_remove_mpcc,
+	.mpc_init = mpc32_mpc_init,
+	.mpc_init_single_inst = mpc1_mpc_init_single_inst,
+	.update_blending = mpc2_update_blending,
+	.cursor_lock = mpc1_cursor_lock,
+	.get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp,
+	.wait_for_idle = mpc2_assert_idle_mpcc,
+	.assert_mpcc_idle_before_connect = mpc2_assert_mpcc_idle_before_connect,
+	.init_mpcc_list_from_hw = mpc1_init_mpcc_list_from_hw,
+	.set_denorm =  mpc3_set_denorm,
+	.set_denorm_clamp = mpc3_set_denorm_clamp,
+	.set_output_csc = mpc3_set_output_csc,
+	.set_ocsc_default = mpc3_set_ocsc_default,
+	.set_output_gamma = mpc3_set_output_gamma,
+	.insert_plane_to_secondary = NULL,
+	.remove_mpcc_from_secondary =  NULL,
+	.set_dwb_mux = mpc3_set_dwb_mux,
+	.disable_dwb_mux = mpc3_disable_dwb_mux,
+	.is_dwb_idle = mpc3_is_dwb_idle,
+	.set_out_rate_control = mpc3_set_out_rate_control,
+	.set_gamut_remap = mpc3_set_gamut_remap,
+	.program_shaper = mpc32_program_shaper,
+	.program_3dlut = mpc32_program_3dlut,
+	.acquire_rmu = NULL,
+	.release_rmu = NULL,
+	.power_on_mpc_mem_pwr = mpc3_power_on_ogam_lut,
+	.get_mpc_out_mux = mpc1_get_mpc_out_mux,
+	.set_bg_color = mpc1_set_bg_color,
+};
+
+
+void dcn32_mpc_construct(struct dcn30_mpc *mpc30,
+	struct dc_context *ctx,
+	const struct dcn30_mpc_registers *mpc_regs,
+	const struct dcn30_mpc_shift *mpc_shift,
+	const struct dcn30_mpc_mask *mpc_mask,
+	int num_mpcc,
+	int num_rmu)
+{
+	int i;
+
+	mpc30->base.ctx = ctx;
+
+	mpc30->base.funcs = &dcn32_mpc_funcs;
+
+	mpc30->mpc_regs = mpc_regs;
+	mpc30->mpc_shift = mpc_shift;
+	mpc30->mpc_mask = mpc_mask;
+
+	mpc30->mpcc_in_use_mask = 0;
+	mpc30->num_mpcc = num_mpcc;
+	mpc30->num_rmu = num_rmu;
+
+	for (i = 0; i < MAX_MPCC; i++)
+		mpc3_init_mpcc(&mpc30->base.mpcc_array[i], i);
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h
new file mode 100644
index 000000000000..d4be3c89ec7b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.h
@@ -0,0 +1,213 @@
+/* Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_MPCC_DCN32_H__
+#define __DC_MPCC_DCN32_H__
+
+#include "dcn20/dcn20_mpc.h"
+#include "dcn30/dcn30_mpc.h"
+
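+/* Per-instance register list for the MPCC color management (MCM) block;
+ * SRII() resolves each register for the given MPCC_MCM instance. */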
+#define MPC_MCM_REG_LIST_DCN32(inst) \
+	SRII(MPCC_MCM_SHAPER_CONTROL, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_OFFSET_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_OFFSET_G, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_OFFSET_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_SCALE_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_SCALE_G_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_LUT_INDEX, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_LUT_DATA, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_START_CNTL_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_START_CNTL_G, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_START_CNTL_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_END_CNTL_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_END_CNTL_G, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_END_CNTL_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_0_1, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_2_3, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_4_5, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_6_7, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_8_9, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_10_11, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_12_13, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_14_15, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_16_17, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_18_19, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_20_21, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_22_23, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_24_25, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_26_27, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_28_29, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_30_31, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMA_REGION_32_33, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_START_CNTL_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_START_CNTL_G, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_START_CNTL_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_END_CNTL_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_END_CNTL_G, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_END_CNTL_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_0_1, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_2_3, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_4_5, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_6_7, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_8_9, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_10_11, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_12_13, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_14_15, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_16_17, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_18_19, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_20_21, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_22_23, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_24_25, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_26_27, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_28_29, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_30_31, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_SHAPER_RAMB_REGION_32_33, MPCC_MCM, inst), \
+	SRII(MPCC_MCM_3DLUT_MODE, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_INDEX, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_DATA, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_DATA_30BIT, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_READ_WRITE_CONTROL, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_OUT_NORM_FACTOR, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_OUT_OFFSET_R, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_OUT_OFFSET_G, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_3DLUT_OUT_OFFSET_B, MPCC_MCM, inst),\
+	SRII(MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM, inst)
+
+
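+/* Shift/mask list: each SF() entry records the bit shift and mask of a
+ * register field so the REG_* helpers in dcn32_mpc.c can access it by name. */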
+#define MPC_COMMON_MASK_SH_LIST_DCN32(mask_sh) \
+	MPC_COMMON_MASK_SH_LIST_DCN1_0(mask_sh),\
+	SF(MPCC0_MPCC_CONTROL, MPCC_BG_BPC, mask_sh),\
+	SF(MPCC0_MPCC_CONTROL, MPCC_BOT_GAIN_MODE, mask_sh),\
+	SF(MPCC0_MPCC_TOP_GAIN, MPCC_TOP_GAIN, mask_sh),\
+	SF(MPCC0_MPCC_BOT_GAIN_INSIDE, MPCC_BOT_GAIN_INSIDE, mask_sh),\
+	SF(MPCC0_MPCC_BOT_GAIN_OUTSIDE, MPCC_BOT_GAIN_OUTSIDE, mask_sh),\
+	SF(MPC_OUT0_CSC_MODE, MPC_OCSC_MODE, mask_sh),\
+	SF(MPC_OUT0_CSC_C11_C12_A, MPC_OCSC_C11_A, mask_sh),\
+	SF(MPC_OUT0_CSC_C11_C12_A, MPC_OCSC_C12_A, mask_sh),\
+	SF(MPCC0_MPCC_STATUS, MPCC_DISABLED, mask_sh),\
+	SF(MPCC0_MPCC_MEM_PWR_CTRL, MPCC_OGAM_MEM_PWR_FORCE, mask_sh),\
+	SF(MPCC0_MPCC_MEM_PWR_CTRL, MPCC_OGAM_MEM_PWR_DIS, mask_sh),\
+	SF(MPCC0_MPCC_MEM_PWR_CTRL, MPCC_OGAM_MEM_LOW_PWR_MODE, mask_sh),\
+	SF(MPCC0_MPCC_MEM_PWR_CTRL, MPCC_OGAM_MEM_PWR_STATE, mask_sh),\
+	SF(MPC_OUT0_DENORM_CONTROL, MPC_OUT_DENORM_MODE, mask_sh),\
+	SF(MPC_OUT0_DENORM_CONTROL, MPC_OUT_DENORM_CLAMP_MAX_R_CR, mask_sh),\
+	SF(MPC_OUT0_DENORM_CONTROL, MPC_OUT_DENORM_CLAMP_MIN_R_CR, mask_sh),\
+	SF(MPC_OUT0_DENORM_CLAMP_G_Y, MPC_OUT_DENORM_CLAMP_MAX_G_Y, mask_sh),\
+	SF(MPC_OUT0_DENORM_CLAMP_G_Y, MPC_OUT_DENORM_CLAMP_MIN_G_Y, mask_sh),\
+	SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MAX_B_CB, mask_sh),\
+	SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MIN_B_CB, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_GAMUT_REMAP_MODE, MPCC_GAMUT_REMAP_MODE, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_GAMUT_REMAP_MODE, MPCC_GAMUT_REMAP_MODE_CURRENT, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_GAMUT_REMAP_COEF_FORMAT, MPCC_GAMUT_REMAP_COEF_FORMAT, mask_sh),\
+	SF(MPCC_OGAM0_MPC_GAMUT_REMAP_C11_C12_A, MPCC_GAMUT_REMAP_C11_A, mask_sh),\
+	SF(MPCC_OGAM0_MPC_GAMUT_REMAP_C11_C12_A, MPCC_GAMUT_REMAP_C12_A, mask_sh),\
+	SF(MPC_DWB0_MUX, MPC_DWB0_MUX, mask_sh),\
+	SF(MPC_DWB0_MUX, MPC_DWB0_MUX_STATUS, mask_sh),\
+	SF(MPC_OUT0_MUX, MPC_OUT_RATE_CONTROL, mask_sh),\
+	SF(MPC_OUT0_MUX, MPC_OUT_RATE_CONTROL_DISABLE, mask_sh),\
+	SF(MPC_OUT0_MUX, MPC_OUT_FLOW_CONTROL_MODE, mask_sh),\
+	SF(MPC_OUT0_MUX, MPC_OUT_FLOW_CONTROL_COUNT, mask_sh), \
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_REGION_0_1, MPCC_OGAM_RAMA_EXP_REGION0_LUT_OFFSET, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_REGION_0_1, MPCC_OGAM_RAMA_EXP_REGION0_NUM_SEGMENTS, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_REGION_0_1, MPCC_OGAM_RAMA_EXP_REGION1_LUT_OFFSET, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_REGION_0_1, MPCC_OGAM_RAMA_EXP_REGION1_NUM_SEGMENTS, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_END_CNTL2_B, MPCC_OGAM_RAMA_EXP_REGION_END_SLOPE_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_END_CNTL2_B, MPCC_OGAM_RAMA_EXP_REGION_END_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_END_CNTL1_B, MPCC_OGAM_RAMA_EXP_REGION_END_BASE_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_START_SLOPE_CNTL_B, MPCC_OGAM_RAMA_EXP_REGION_START_SLOPE_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_START_BASE_CNTL_B, MPCC_OGAM_RAMA_EXP_REGION_START_BASE_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_START_CNTL_B, MPCC_OGAM_RAMA_EXP_REGION_START_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_START_CNTL_B, MPCC_OGAM_RAMA_EXP_REGION_START_SEGMENT_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_OFFSET_B, MPCC_OGAM_RAMA_OFFSET_B, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_OFFSET_G, MPCC_OGAM_RAMA_OFFSET_G, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_RAMA_OFFSET_R, MPCC_OGAM_RAMA_OFFSET_R, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_INDEX, MPCC_OGAM_LUT_INDEX, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_CONTROL, MPCC_OGAM_MODE, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_CONTROL, MPCC_OGAM_SELECT, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_CONTROL, MPCC_OGAM_PWL_DISABLE, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_CONTROL, MPCC_OGAM_MODE_CURRENT, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_CONTROL, MPCC_OGAM_SELECT_CURRENT, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_CONTROL, MPCC_OGAM_LUT_WRITE_COLOR_MASK, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_CONTROL, MPCC_OGAM_LUT_READ_COLOR_SEL, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_CONTROL, MPCC_OGAM_LUT_READ_DBG, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_CONTROL, MPCC_OGAM_LUT_HOST_SEL, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_CONTROL, MPCC_OGAM_LUT_CONFIG_MODE, mask_sh),\
+	SF(MPCC_OGAM0_MPCC_OGAM_LUT_DATA, MPCC_OGAM_LUT_DATA, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_MODE, MPCC_MCM_3DLUT_MODE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_MODE, MPCC_MCM_3DLUT_SIZE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_MODE, MPCC_MCM_3DLUT_MODE_CURRENT, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_READ_WRITE_CONTROL, MPCC_MCM_3DLUT_WRITE_EN_MASK, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_READ_WRITE_CONTROL, MPCC_MCM_3DLUT_RAM_SEL, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_READ_WRITE_CONTROL, MPCC_MCM_3DLUT_30BIT_EN, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_READ_WRITE_CONTROL, MPCC_MCM_3DLUT_READ_SEL, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_INDEX, MPCC_MCM_3DLUT_INDEX, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_DATA, MPCC_MCM_3DLUT_DATA0, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_DATA, MPCC_MCM_3DLUT_DATA1, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_3DLUT_DATA_30BIT, MPCC_MCM_3DLUT_DATA_30BIT, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_CONTROL, MPCC_MCM_SHAPER_LUT_MODE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_CONTROL, MPCC_MCM_SHAPER_MODE_CURRENT, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_OFFSET_R, MPCC_MCM_SHAPER_OFFSET_R, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_OFFSET_G, MPCC_MCM_SHAPER_OFFSET_G, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_OFFSET_B, MPCC_MCM_SHAPER_OFFSET_B, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_SCALE_R, MPCC_MCM_SHAPER_SCALE_R, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_SCALE_G_B, MPCC_MCM_SHAPER_SCALE_G, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_SCALE_G_B, MPCC_MCM_SHAPER_SCALE_B, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_LUT_INDEX, MPCC_MCM_SHAPER_LUT_INDEX, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_LUT_DATA, MPCC_MCM_SHAPER_LUT_DATA, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK, MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK, MPCC_MCM_SHAPER_LUT_WRITE_SEL, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_START_CNTL_B, MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_START_CNTL_B, MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_END_CNTL_B, MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_END_CNTL_B, MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_REGION_0_1, MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_REGION_0_1, MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_REGION_0_1, MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_RAMA_REGION_0_1, MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_SHAPER_MEM_PWR_FORCE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_SHAPER_MEM_PWR_DIS, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_3DLUT_MEM_PWR_FORCE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_3DLUT_MEM_PWR_DIS, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_1DLUT_MEM_PWR_FORCE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_1DLUT_MEM_PWR_DIS, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_1DLUT_MEM_LOW_PWR_MODE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_SHAPER_MEM_PWR_STATE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_3DLUT_MEM_PWR_STATE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_MEM_PWR_CTRL, MPCC_MCM_1DLUT_MEM_PWR_STATE, mask_sh),\
+	SF(MPCC_MCM0_MPCC_MCM_SHAPER_CONTROL, MPCC_MCM_SHAPER_MODE_CURRENT, mask_sh),\
+	SF(CUR_VUPDATE_LOCK_SET0, CUR_VUPDATE_LOCK_SET, mask_sh)
+
+
+void dcn32_mpc_construct(struct dcn30_mpc *mpc30,
+	struct dc_context *ctx,
+	const struct dcn30_mpc_registers *mpc_regs,
+	const struct dcn30_mpc_shift *mpc_shift,
+	const struct dcn30_mpc_mask *mpc_mask,
+	int num_mpcc,
+	int num_rmu);
+
+#endif		//__DC_MPCC_DCN32_H__
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
new file mode 100644
index 000000000000..f1ed25e972f2
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
@@ -0,0 +1,236 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dcn32_optc.h"
+
+#include "dcn30/dcn30_optc.h"
+#include "reg_helper.h"
+#include "dc.h"
+#include "dcn_calc_math.h"
+
+#define REG(reg)\
+	optc1->tg_regs->reg
+
+#define CTX \
+	optc1->base.ctx
+
+#undef FN
+#define FN(reg_name, field_name) \
+	optc1->tg_shift->field_name, optc1->tg_mask->field_name
+
+static void optc32_set_odm_combine(struct timing_generator *optc, int *opp_id, int opp_cnt,
+		struct dc_crtc_timing *timing)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+	uint32_t memory_mask = 0;
+	int h_active = timing->h_addressable + timing->h_border_left + timing->h_border_right;
+	int mpcc_hactive = h_active / opp_cnt;
+	/* Each memory instance is 2048x(32x2) bits to support half line of 4096 */
+	int odm_mem_count = (h_active + 2047) / 2048;
+
+	/*
+	 * display <= 4k : 2 memories + 2 pipes
+	 * 4k < display <= 8k : 4 memories + 2 pipes
+	 * 8k < display <= 12k : 6 memories + 4 pipes
+	 */
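+	/*
+	 * Example: a 2-pipe 3840-wide timing gives odm_mem_count = 2, so the
+	 * else branch below grants one memory instance per OPP via the
+	 * opp_id-based shifts.
+	 */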
+	if (opp_cnt == 4) {
+		if (odm_mem_count <= 2)
+			memory_mask = 0x3;
+		else if (odm_mem_count <= 4)
+			memory_mask = 0xf;
+		else
+			memory_mask = 0x3f;
+	} else {
+		if (odm_mem_count <= 2)
+			memory_mask = 0x1 << (opp_id[0] * 2) | 0x1 << (opp_id[1] * 2);
+		else if (odm_mem_count <= 4)
+			memory_mask = 0x3 << (opp_id[0] * 2) | 0x3 << (opp_id[1] * 2);
+		else
+			memory_mask = 0x77;
+	}
+
+	REG_SET(OPTC_MEMORY_CONFIG, 0,
+		OPTC_MEM_SEL, memory_mask);
+
+	if (opp_cnt == 2) {
+		REG_SET_3(OPTC_DATA_SOURCE_SELECT, 0,
+				OPTC_NUM_OF_INPUT_SEGMENT, 1,
+				OPTC_SEG0_SRC_SEL, opp_id[0],
+				OPTC_SEG1_SRC_SEL, opp_id[1]);
+	} else if (opp_cnt == 4) {
+		REG_SET_5(OPTC_DATA_SOURCE_SELECT, 0,
+				OPTC_NUM_OF_INPUT_SEGMENT, 3,
+				OPTC_SEG0_SRC_SEL, opp_id[0],
+				OPTC_SEG1_SRC_SEL, opp_id[1],
+				OPTC_SEG2_SRC_SEL, opp_id[2],
+				OPTC_SEG3_SRC_SEL, opp_id[3]);
+	}
+
+	REG_UPDATE(OPTC_WIDTH_CONTROL,
+			OPTC_SEGMENT_WIDTH, mpcc_hactive);
+
+	REG_SET(OTG_H_TIMING_CNTL, 0, OTG_H_TIMING_DIV_MODE, opp_cnt - 1);
+	optc1->opp_count = opp_cnt;
+}
+
+/*
+ * Enable CRTC - call the ASIC Control Object to enable the timing generator.
+ */
+static bool optc32_enable_crtc(struct timing_generator *optc)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+
+	/* OPP instance for OTG: 1-to-1 mapping by default; ODM will adjust it */
+	REG_UPDATE(OPTC_DATA_SOURCE_SELECT,
+			OPTC_SEG0_SRC_SEL, optc->inst);
+
+	/* VTG enable first is for HW workaround */
+	REG_UPDATE(CONTROL,
+			VTG0_ENABLE, 1);
+
+	REG_SEQ_START();
+
+	/* Enable CRTC */
+	REG_UPDATE_2(OTG_CONTROL,
+			OTG_DISABLE_POINT_CNTL, 2,
+			OTG_MASTER_EN, 1);
+
+	REG_SEQ_SUBMIT();
+	REG_SEQ_WAIT_DONE();
+
+	return true;
+}
+
+/* disable_crtc */
+static bool optc32_disable_crtc(struct timing_generator *optc)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+
+	/* disable otg request until end of the first line
+	 * in the vertical blank region
+	 */
+	REG_UPDATE(OTG_CONTROL,
+			OTG_MASTER_EN, 0);
+
+	REG_UPDATE(CONTROL,
+			VTG0_ENABLE, 0);
+
+	/* CRTC disabled, so disable clock. */
+	REG_WAIT(OTG_CLOCK_CONTROL,
+			OTG_BUSY, 0,
+			1, 100000);
+
+	return true;
+}
+
+void optc32_phantom_crtc_post_enable(struct timing_generator *optc)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+
+	/* Disable immediately. */
+	REG_UPDATE_2(OTG_CONTROL, OTG_DISABLE_POINT_CNTL, 0, OTG_MASTER_EN, 0);
+
+	/* CRTC disabled, so disable clock. */
+	REG_WAIT(OTG_CLOCK_CONTROL, OTG_BUSY, 0, 1, 100000);
+}
+
+
+static struct timing_generator_funcs dcn32_tg_funcs = {
+		.validate_timing = optc1_validate_timing,
+		.program_timing = optc1_program_timing,
+		.setup_vertical_interrupt0 = optc1_setup_vertical_interrupt0,
+		.setup_vertical_interrupt1 = optc1_setup_vertical_interrupt1,
+		.setup_vertical_interrupt2 = optc1_setup_vertical_interrupt2,
+		.program_global_sync = optc1_program_global_sync,
+		.enable_crtc = optc32_enable_crtc,
+		.disable_crtc = optc32_disable_crtc,
+		.phantom_crtc_post_enable = optc32_phantom_crtc_post_enable,
+		/* used by enable_timing_synchronization. Not needed for FPGA */
+		.is_counter_moving = optc1_is_counter_moving,
+		.get_position = optc1_get_position,
+		.get_frame_count = optc1_get_vblank_counter,
+		.get_scanoutpos = optc1_get_crtc_scanoutpos,
+		.get_otg_active_size = optc1_get_otg_active_size,
+		.set_early_control = optc1_set_early_control,
+		/* used by enable_timing_synchronization. Not needed for FPGA */
+		.wait_for_state = optc1_wait_for_state,
+		.set_blank_color = optc3_program_blank_color,
+		.did_triggered_reset_occur = optc1_did_triggered_reset_occur,
+		.triplebuffer_lock = optc3_triplebuffer_lock,
+		.triplebuffer_unlock = optc2_triplebuffer_unlock,
+		.enable_reset_trigger = optc1_enable_reset_trigger,
+		.enable_crtc_reset = optc1_enable_crtc_reset,
+		.disable_reset_trigger = optc1_disable_reset_trigger,
+		.lock = optc3_lock,
+		.unlock = optc1_unlock,
+		.lock_doublebuffer_enable = optc3_lock_doublebuffer_enable,
+		.lock_doublebuffer_disable = optc3_lock_doublebuffer_disable,
+		.enable_optc_clock = optc1_enable_optc_clock,
+		.set_vrr_m_const = optc3_set_vrr_m_const,
+		.set_drr = optc1_set_drr,
+		.get_last_used_drr_vtotal = optc2_get_last_used_drr_vtotal,
+		.set_vtotal_min_max = optc1_set_vtotal_min_max,
+		.set_static_screen_control = optc1_set_static_screen_control,
+		.program_stereo = optc1_program_stereo,
+		.is_stereo_left_eye = optc1_is_stereo_left_eye,
+		.tg_init = optc3_tg_init,
+		.is_tg_enabled = optc1_is_tg_enabled,
+		.is_optc_underflow_occurred = optc1_is_optc_underflow_occurred,
+		.clear_optc_underflow = optc1_clear_optc_underflow,
+		.setup_global_swap_lock = NULL,
+		.get_crc = optc1_get_crc,
+		.configure_crc = optc1_configure_crc,
+		.set_dsc_config = optc3_set_dsc_config,
+		.get_dsc_status = optc2_get_dsc_status,
+		.set_dwb_source = NULL,
+		.set_odm_bypass = optc3_set_odm_bypass,
+		.set_odm_combine = optc32_set_odm_combine,
+		.get_optc_source = optc2_get_optc_source,
+		.set_out_mux = optc3_set_out_mux,
+		.set_drr_trigger_window = optc3_set_drr_trigger_window,
+		.set_vtotal_change_limit = optc3_set_vtotal_change_limit,
+		.set_gsl = optc2_set_gsl,
+		.set_gsl_source_select = optc2_set_gsl_source_select,
+		.set_vtg_params = optc1_set_vtg_params,
+		.program_manual_trigger = optc2_program_manual_trigger,
+		.setup_manual_trigger = optc2_setup_manual_trigger,
+		.get_hw_timing = optc1_get_hw_timing,
+};
+
+void dcn32_timing_generator_init(struct optc *optc1)
+{
+	optc1->base.funcs = &dcn32_tg_funcs;
+
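+	/* The OTG_H/V_TOTAL field masks double as the largest programmable
+	 * values, so the maximum totals are derived directly from them. */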
+	optc1->max_h_total = optc1->tg_mask->OTG_H_TOTAL + 1;
+	optc1->max_v_total = optc1->tg_mask->OTG_V_TOTAL + 1;
+
+	optc1->min_h_blank = 32;
+	optc1->min_v_blank = 3;
+	optc1->min_v_blank_interlace = 5;
+	optc1->min_h_sync_width = 4;
+	optc1->min_v_sync_width = 1;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
new file mode 100644
index 000000000000..e07b317ed3f4
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
@@ -0,0 +1,253 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_OPTC_DCN32_H__
+#define __DC_OPTC_DCN32_H__
+
+#include "dcn10/dcn10_optc.h"
+
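+/* DCN3.2 OTG/OPTC per-instance register list; SRI()/SR() resolve each
+ * register for the given OTG/ODM/VTG instance. */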
+#define OPTC_COMMON_REG_LIST_DCN3_2(inst) \
+	SRI(OTG_VSTARTUP_PARAM, OTG, inst),\
+	SRI(OTG_VUPDATE_PARAM, OTG, inst),\
+	SRI(OTG_VREADY_PARAM, OTG, inst),\
+	SRI(OTG_MASTER_UPDATE_LOCK, OTG, inst),\
+	SRI(OTG_GLOBAL_CONTROL0, OTG, inst),\
+	SRI(OTG_GLOBAL_CONTROL1, OTG, inst),\
+	SRI(OTG_GLOBAL_CONTROL2, OTG, inst),\
+	SRI(OTG_GLOBAL_CONTROL4, OTG, inst),\
+	SRI(OTG_DOUBLE_BUFFER_CONTROL, OTG, inst),\
+	SRI(OTG_H_TOTAL, OTG, inst),\
+	SRI(OTG_H_BLANK_START_END, OTG, inst),\
+	SRI(OTG_H_SYNC_A, OTG, inst),\
+	SRI(OTG_H_SYNC_A_CNTL, OTG, inst),\
+	SRI(OTG_H_TIMING_CNTL, OTG, inst),\
+	SRI(OTG_V_TOTAL, OTG, inst),\
+	SRI(OTG_V_BLANK_START_END, OTG, inst),\
+	SRI(OTG_V_SYNC_A, OTG, inst),\
+	SRI(OTG_V_SYNC_A_CNTL, OTG, inst),\
+	SRI(OTG_CONTROL, OTG, inst),\
+	SRI(OTG_STEREO_CONTROL, OTG, inst),\
+	SRI(OTG_3D_STRUCTURE_CONTROL, OTG, inst),\
+	SRI(OTG_STEREO_STATUS, OTG, inst),\
+	SRI(OTG_V_TOTAL_MAX, OTG, inst),\
+	SRI(OTG_V_TOTAL_MIN, OTG, inst),\
+	SRI(OTG_V_TOTAL_CONTROL, OTG, inst),\
+	SRI(OTG_TRIGA_CNTL, OTG, inst),\
+	SRI(OTG_FORCE_COUNT_NOW_CNTL, OTG, inst),\
+	SRI(OTG_STATIC_SCREEN_CONTROL, OTG, inst),\
+	SRI(OTG_STATUS_FRAME_COUNT, OTG, inst),\
+	SRI(OTG_STATUS, OTG, inst),\
+	SRI(OTG_STATUS_POSITION, OTG, inst),\
+	SRI(OTG_NOM_VERT_POSITION, OTG, inst),\
+	SRI(OTG_M_CONST_DTO0, OTG, inst),\
+	SRI(OTG_M_CONST_DTO1, OTG, inst),\
+	SRI(OTG_CLOCK_CONTROL, OTG, inst),\
+	SRI(OTG_VERTICAL_INTERRUPT0_CONTROL, OTG, inst),\
+	SRI(OTG_VERTICAL_INTERRUPT0_POSITION, OTG, inst),\
+	SRI(OTG_VERTICAL_INTERRUPT1_CONTROL, OTG, inst),\
+	SRI(OTG_VERTICAL_INTERRUPT1_POSITION, OTG, inst),\
+	SRI(OTG_VERTICAL_INTERRUPT2_CONTROL, OTG, inst),\
+	SRI(OTG_VERTICAL_INTERRUPT2_POSITION, OTG, inst),\
+	SRI(OPTC_INPUT_CLOCK_CONTROL, ODM, inst),\
+	SRI(OPTC_DATA_SOURCE_SELECT, ODM, inst),\
+	SRI(OPTC_INPUT_GLOBAL_CONTROL, ODM, inst),\
+	SRI(CONTROL, VTG, inst),\
+	SRI(OTG_VERT_SYNC_CONTROL, OTG, inst),\
+	SRI(OTG_GSL_CONTROL, OTG, inst),\
+	SRI(OTG_CRC_CNTL, OTG, inst),\
+	SRI(OTG_CRC0_DATA_RG, OTG, inst),\
+	SRI(OTG_CRC0_DATA_B, OTG, inst),\
+	SRI(OTG_CRC0_WINDOWA_X_CONTROL, OTG, inst),\
+	SRI(OTG_CRC0_WINDOWA_Y_CONTROL, OTG, inst),\
+	SRI(OTG_CRC0_WINDOWB_X_CONTROL, OTG, inst),\
+	SRI(OTG_CRC0_WINDOWB_Y_CONTROL, OTG, inst),\
+	SR(GSL_SOURCE_SELECT),\
+	SRI(OTG_TRIGA_MANUAL_TRIG, OTG, inst),\
+	SRI(OTG_GLOBAL_CONTROL1, OTG, inst),\
+	SRI(OTG_GLOBAL_CONTROL2, OTG, inst),\
+	SRI(OTG_GSL_WINDOW_X, OTG, inst),\
+	SRI(OTG_GSL_WINDOW_Y, OTG, inst),\
+	SRI(OTG_VUPDATE_KEEPOUT, OTG, inst),\
+	SRI(OTG_DSC_START_POSITION, OTG, inst),\
+	SRI(OTG_DRR_TRIGGER_WINDOW, OTG, inst),\
+	SRI(OTG_DRR_V_TOTAL_CHANGE, OTG, inst),\
+	SRI(OPTC_DATA_FORMAT_CONTROL, ODM, inst),\
+	SRI(OPTC_BYTES_PER_PIXEL, ODM, inst),\
+	SRI(OPTC_WIDTH_CONTROL, ODM, inst),\
+	SRI(OPTC_MEMORY_CONFIG, ODM, inst),\
+	SRI(OTG_DRR_CONTROL, OTG, inst)
+
+#define OPTC_COMMON_MASK_SH_LIST_DCN3_2(mask_sh)\
+	SF(OTG0_OTG_VSTARTUP_PARAM, VSTARTUP_START, mask_sh),\
+	SF(OTG0_OTG_VUPDATE_PARAM, VUPDATE_OFFSET, mask_sh),\
+	SF(OTG0_OTG_VUPDATE_PARAM, VUPDATE_WIDTH, mask_sh),\
+	SF(OTG0_OTG_VREADY_PARAM, VREADY_OFFSET, mask_sh),\
+	SF(OTG0_OTG_MASTER_UPDATE_LOCK, OTG_MASTER_UPDATE_LOCK, mask_sh),\
+	SF(OTG0_OTG_MASTER_UPDATE_LOCK, UPDATE_LOCK_STATUS, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL0, MASTER_UPDATE_LOCK_DB_START_X, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL0, MASTER_UPDATE_LOCK_DB_END_X, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL0, MASTER_UPDATE_LOCK_DB_EN, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL1, MASTER_UPDATE_LOCK_DB_START_Y, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL1, MASTER_UPDATE_LOCK_DB_END_Y, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL2, OTG_MASTER_UPDATE_LOCK_SEL, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL4, DIG_UPDATE_POSITION_X, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL4, DIG_UPDATE_POSITION_Y, mask_sh),\
+	SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_UPDATE_PENDING, mask_sh),\
+	SF(OTG0_OTG_H_TOTAL, OTG_H_TOTAL, mask_sh),\
+	SF(OTG0_OTG_H_BLANK_START_END, OTG_H_BLANK_START, mask_sh),\
+	SF(OTG0_OTG_H_BLANK_START_END, OTG_H_BLANK_END, mask_sh),\
+	SF(OTG0_OTG_H_SYNC_A, OTG_H_SYNC_A_START, mask_sh),\
+	SF(OTG0_OTG_H_SYNC_A, OTG_H_SYNC_A_END, mask_sh),\
+	SF(OTG0_OTG_H_SYNC_A_CNTL, OTG_H_SYNC_A_POL, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL, OTG_V_TOTAL, mask_sh),\
+	SF(OTG0_OTG_V_BLANK_START_END, OTG_V_BLANK_START, mask_sh),\
+	SF(OTG0_OTG_V_BLANK_START_END, OTG_V_BLANK_END, mask_sh),\
+	SF(OTG0_OTG_V_SYNC_A, OTG_V_SYNC_A_START, mask_sh),\
+	SF(OTG0_OTG_V_SYNC_A, OTG_V_SYNC_A_END, mask_sh),\
+	SF(OTG0_OTG_V_SYNC_A_CNTL, OTG_V_SYNC_A_POL, mask_sh),\
+	SF(OTG0_OTG_V_SYNC_A_CNTL, OTG_V_SYNC_MODE, mask_sh),\
+	SF(OTG0_OTG_CONTROL, OTG_MASTER_EN, mask_sh),\
+	SF(OTG0_OTG_CONTROL, OTG_START_POINT_CNTL, mask_sh),\
+	SF(OTG0_OTG_CONTROL, OTG_DISABLE_POINT_CNTL, mask_sh),\
+	SF(OTG0_OTG_CONTROL, OTG_FIELD_NUMBER_CNTL, mask_sh),\
+	SF(OTG0_OTG_CONTROL, OTG_OUT_MUX, mask_sh),\
+	SF(OTG0_OTG_STEREO_CONTROL, OTG_STEREO_EN, mask_sh),\
+	SF(OTG0_OTG_STEREO_CONTROL, OTG_STEREO_SYNC_OUTPUT_LINE_NUM, mask_sh),\
+	SF(OTG0_OTG_STEREO_CONTROL, OTG_STEREO_SYNC_OUTPUT_POLARITY, mask_sh),\
+	SF(OTG0_OTG_STEREO_CONTROL, OTG_STEREO_EYE_FLAG_POLARITY, mask_sh),\
+	SF(OTG0_OTG_STEREO_CONTROL, OTG_DISABLE_STEREOSYNC_OUTPUT_FOR_DP, mask_sh),\
+	SF(OTG0_OTG_STEREO_STATUS, OTG_STEREO_CURRENT_EYE, mask_sh),\
+	SF(OTG0_OTG_3D_STRUCTURE_CONTROL, OTG_3D_STRUCTURE_EN, mask_sh),\
+	SF(OTG0_OTG_3D_STRUCTURE_CONTROL, OTG_3D_STRUCTURE_V_UPDATE_MODE, mask_sh),\
+	SF(OTG0_OTG_3D_STRUCTURE_CONTROL, OTG_3D_STRUCTURE_STEREO_SEL_OVR, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_MAX, OTG_V_TOTAL_MAX, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_MIN, OTG_V_TOTAL_MIN, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_V_TOTAL_MIN_SEL, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_V_TOTAL_MAX_SEL, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_FORCE_LOCK_ON_EVENT, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_SET_V_TOTAL_MIN_MASK, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_VTOTAL_MID_REPLACING_MIN_EN, mask_sh),\
+	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_VTOTAL_MID_REPLACING_MAX_EN, mask_sh),\
+	SF(OTG0_OTG_FORCE_COUNT_NOW_CNTL, OTG_FORCE_COUNT_NOW_CLEAR, mask_sh),\
+	SF(OTG0_OTG_FORCE_COUNT_NOW_CNTL, OTG_FORCE_COUNT_NOW_MODE, mask_sh),\
+	SF(OTG0_OTG_FORCE_COUNT_NOW_CNTL, OTG_FORCE_COUNT_NOW_OCCURRED, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_SOURCE_SELECT, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_SOURCE_PIPE_SELECT, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_RISING_EDGE_DETECT_CNTL, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_FALLING_EDGE_DETECT_CNTL, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_POLARITY_SELECT, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_FREQUENCY_SELECT, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_DELAY, mask_sh),\
+	SF(OTG0_OTG_TRIGA_CNTL, OTG_TRIGA_CLEAR, mask_sh),\
+	SF(OTG0_OTG_STATIC_SCREEN_CONTROL, OTG_STATIC_SCREEN_EVENT_MASK, mask_sh),\
+	SF(OTG0_OTG_STATIC_SCREEN_CONTROL, OTG_STATIC_SCREEN_FRAME_COUNT, mask_sh),\
+	SF(OTG0_OTG_STATUS_FRAME_COUNT, OTG_FRAME_COUNT, mask_sh),\
+	SF(OTG0_OTG_STATUS, OTG_V_BLANK, mask_sh),\
+	SF(OTG0_OTG_STATUS, OTG_V_ACTIVE_DISP, mask_sh),\
+	SF(OTG0_OTG_STATUS_POSITION, OTG_HORZ_COUNT, mask_sh),\
+	SF(OTG0_OTG_STATUS_POSITION, OTG_VERT_COUNT, mask_sh),\
+	SF(OTG0_OTG_NOM_VERT_POSITION, OTG_VERT_COUNT_NOM, mask_sh),\
+	SF(OTG0_OTG_M_CONST_DTO0, OTG_M_CONST_DTO_PHASE, mask_sh),\
+	SF(OTG0_OTG_M_CONST_DTO1, OTG_M_CONST_DTO_MODULO, mask_sh),\
+	SF(OTG0_OTG_CLOCK_CONTROL, OTG_BUSY, mask_sh),\
+	SF(OTG0_OTG_CLOCK_CONTROL, OTG_CLOCK_EN, mask_sh),\
+	SF(OTG0_OTG_CLOCK_CONTROL, OTG_CLOCK_ON, mask_sh),\
+	SF(OTG0_OTG_CLOCK_CONTROL, OTG_CLOCK_GATE_DIS, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT0_CONTROL, OTG_VERTICAL_INTERRUPT0_INT_ENABLE, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT0_POSITION, OTG_VERTICAL_INTERRUPT0_LINE_START, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT0_POSITION, OTG_VERTICAL_INTERRUPT0_LINE_END, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT1_CONTROL, OTG_VERTICAL_INTERRUPT1_INT_ENABLE, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT1_POSITION, OTG_VERTICAL_INTERRUPT1_LINE_START, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT2_CONTROL, OTG_VERTICAL_INTERRUPT2_INT_ENABLE, mask_sh),\
+	SF(OTG0_OTG_VERTICAL_INTERRUPT2_POSITION, OTG_VERTICAL_INTERRUPT2_LINE_START, mask_sh),\
+	SF(ODM0_OPTC_INPUT_CLOCK_CONTROL, OPTC_INPUT_CLK_EN, mask_sh),\
+	SF(ODM0_OPTC_INPUT_CLOCK_CONTROL, OPTC_INPUT_CLK_ON, mask_sh),\
+	SF(ODM0_OPTC_INPUT_CLOCK_CONTROL, OPTC_INPUT_CLK_GATE_DIS, mask_sh),\
+	SF(ODM0_OPTC_INPUT_GLOBAL_CONTROL, OPTC_UNDERFLOW_OCCURRED_STATUS, mask_sh),\
+	SF(ODM0_OPTC_INPUT_GLOBAL_CONTROL, OPTC_UNDERFLOW_CLEAR, mask_sh),\
+	SF(VTG0_CONTROL, VTG0_ENABLE, mask_sh),\
+	SF(VTG0_CONTROL, VTG0_FP2, mask_sh),\
+	SF(VTG0_CONTROL, VTG0_VCOUNT_INIT, mask_sh),\
+	SF(OTG0_OTG_VERT_SYNC_CONTROL, OTG_FORCE_VSYNC_NEXT_LINE_OCCURRED, mask_sh),\
+	SF(OTG0_OTG_VERT_SYNC_CONTROL, OTG_FORCE_VSYNC_NEXT_LINE_CLEAR, mask_sh),\
+	SF(OTG0_OTG_VERT_SYNC_CONTROL, OTG_AUTO_FORCE_VSYNC_MODE, mask_sh),\
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL0_EN, mask_sh),\
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL1_EN, mask_sh),\
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL2_EN, mask_sh),\
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL_MASTER_EN, mask_sh),\
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL_FORCE_DELAY, mask_sh),\
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL_CHECK_ALL_FIELDS, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL, OTG_CRC_CONT_EN, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL, OTG_CRC0_SELECT, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL, OTG_CRC_EN, mask_sh),\
+	SF(OTG0_OTG_CRC0_DATA_RG, CRC0_R_CR, mask_sh),\
+	SF(OTG0_OTG_CRC0_DATA_RG, CRC0_G_Y, mask_sh),\
+	SF(OTG0_OTG_CRC0_DATA_B, CRC0_B_CB, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWA_X_CONTROL, OTG_CRC0_WINDOWA_X_START, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWA_X_CONTROL, OTG_CRC0_WINDOWA_X_END, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWA_Y_CONTROL, OTG_CRC0_WINDOWA_Y_START, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWA_Y_CONTROL, OTG_CRC0_WINDOWA_Y_END, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWB_X_CONTROL, OTG_CRC0_WINDOWB_X_START, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWB_X_CONTROL, OTG_CRC0_WINDOWB_X_END, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWB_Y_CONTROL, OTG_CRC0_WINDOWB_Y_START, mask_sh),\
+	SF(OTG0_OTG_CRC0_WINDOWB_Y_CONTROL, OTG_CRC0_WINDOWB_Y_END, mask_sh),\
+	SF(OTG0_OTG_TRIGA_MANUAL_TRIG, OTG_TRIGA_MANUAL_TRIG, mask_sh),\
+	SF(GSL_SOURCE_SELECT, GSL0_READY_SOURCE_SEL, mask_sh),\
+	SF(GSL_SOURCE_SELECT, GSL1_READY_SOURCE_SEL, mask_sh),\
+	SF(GSL_SOURCE_SELECT, GSL2_READY_SOURCE_SEL, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL2, MANUAL_FLOW_CONTROL_SEL, mask_sh),\
+	SF(OTG0_OTG_GLOBAL_CONTROL2, GLOBAL_UPDATE_LOCK_EN, mask_sh),\
+	SF(OTG0_OTG_GSL_WINDOW_X, OTG_GSL_WINDOW_START_X, mask_sh),\
+	SF(OTG0_OTG_GSL_WINDOW_X, OTG_GSL_WINDOW_END_X, mask_sh), \
+	SF(OTG0_OTG_GSL_WINDOW_Y, OTG_GSL_WINDOW_START_Y, mask_sh),\
+	SF(OTG0_OTG_GSL_WINDOW_Y, OTG_GSL_WINDOW_END_Y, mask_sh),\
+	SF(OTG0_OTG_VUPDATE_KEEPOUT, OTG_MASTER_UPDATE_LOCK_VUPDATE_KEEPOUT_EN, mask_sh), \
+	SF(OTG0_OTG_VUPDATE_KEEPOUT, MASTER_UPDATE_LOCK_VUPDATE_KEEPOUT_START_OFFSET, mask_sh), \
+	SF(OTG0_OTG_VUPDATE_KEEPOUT, MASTER_UPDATE_LOCK_VUPDATE_KEEPOUT_END_OFFSET, mask_sh), \
+	SF(OTG0_OTG_GSL_CONTROL, OTG_GSL_MASTER_MODE, mask_sh), \
+	SF(OTG0_OTG_GSL_CONTROL, OTG_MASTER_UPDATE_LOCK_GSL_EN, mask_sh), \
+	SF(OTG0_OTG_DSC_START_POSITION, OTG_DSC_START_POSITION_X, mask_sh), \
+	SF(OTG0_OTG_DSC_START_POSITION, OTG_DSC_START_POSITION_LINE_NUM, mask_sh),\
+	SF(ODM0_OPTC_DATA_SOURCE_SELECT, OPTC_SEG0_SRC_SEL, mask_sh),\
+	SF(ODM0_OPTC_DATA_SOURCE_SELECT, OPTC_SEG1_SRC_SEL, mask_sh),\
+	SF(ODM0_OPTC_DATA_SOURCE_SELECT, OPTC_SEG2_SRC_SEL, mask_sh),\
+	SF(ODM0_OPTC_DATA_SOURCE_SELECT, OPTC_SEG3_SRC_SEL, mask_sh),\
+	SF(ODM0_OPTC_DATA_SOURCE_SELECT, OPTC_NUM_OF_INPUT_SEGMENT, mask_sh),\
+	SF(ODM0_OPTC_MEMORY_CONFIG, OPTC_MEM_SEL, mask_sh),\
+	SF(ODM0_OPTC_DATA_FORMAT_CONTROL, OPTC_DATA_FORMAT, mask_sh),\
+	SF(ODM0_OPTC_DATA_FORMAT_CONTROL, OPTC_DSC_MODE, mask_sh),\
+	SF(ODM0_OPTC_BYTES_PER_PIXEL, OPTC_DSC_BYTES_PER_PIXEL, mask_sh),\
+	SF(ODM0_OPTC_WIDTH_CONTROL, OPTC_DSC_SLICE_WIDTH, mask_sh),\
+	SF(ODM0_OPTC_WIDTH_CONTROL, OPTC_SEGMENT_WIDTH, mask_sh),\
+	SF(OTG0_OTG_DRR_TRIGGER_WINDOW, OTG_DRR_TRIGGER_WINDOW_START_X, mask_sh),\
+	SF(OTG0_OTG_DRR_TRIGGER_WINDOW, OTG_DRR_TRIGGER_WINDOW_END_X, mask_sh),\
+	SF(OTG0_OTG_DRR_V_TOTAL_CHANGE, OTG_DRR_V_TOTAL_CHANGE_LIMIT, mask_sh),\
+	SF(OTG0_OTG_H_TIMING_CNTL, OTG_H_TIMING_DIV_MODE, mask_sh),\
+	SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_MODE, mask_sh),\
+	SF(OTG0_OTG_DRR_CONTROL, OTG_V_TOTAL_LAST_USED_BY_DRR, mask_sh)
+
+void dcn32_timing_generator_init(struct optc *optc1);
+
+#endif /* __DC_OPTC_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
new file mode 100644
index 000000000000..17a287bc89ce
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -0,0 +1,3976 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "dc.h"
+
+#include "dcn32_init.h"
+
+#include "resource.h"
+#include "include/irq_service_interface.h"
+#include "dcn32_resource.h"
+
+#include "dcn20/dcn20_resource.h"
+#include "dcn30/dcn30_resource.h"
+
+#include "dcn10/dcn10_ipp.h"
+#include "dcn30/dcn30_hubbub.h"
+#include "dcn31/dcn31_hubbub.h"
+#include "dcn32/dcn32_hubbub.h"
+#include "dcn32/dcn32_mpc.h"
+#include "dcn32_hubp.h"
+#include "irq/dcn32/irq_service_dcn32.h"
+#include "dcn32/dcn32_dpp.h"
+#include "dcn32/dcn32_optc.h"
+#include "dcn20/dcn20_hwseq.h"
+#include "dcn30/dcn30_hwseq.h"
+#include "dce110/dce110_hw_sequencer.h"
+#include "dcn30/dcn30_opp.h"
+#include "dcn20/dcn20_dsc.h"
+#include "dcn30/dcn30_vpg.h"
+#include "dcn30/dcn30_afmt.h"
+#include "dcn30/dcn30_dio_stream_encoder.h"
+#include "dcn32/dcn32_dio_stream_encoder.h"
+#include "dcn31/dcn31_hpo_dp_stream_encoder.h"
+#include "dcn31/dcn31_hpo_dp_link_encoder.h"
+#include "dcn32/dcn32_hpo_dp_link_encoder.h"
+#include "dc_link_dp.h"
+#include "dcn31/dcn31_apg.h"
+#include "dcn31/dcn31_dio_link_encoder.h"
+#include "dcn32/dcn32_dio_link_encoder.h"
+#include "dce/dce_clock_source.h"
+#include "dce/dce_audio.h"
+#include "dce/dce_hwseq.h"
+#include "clk_mgr.h"
+#include "virtual/virtual_stream_encoder.h"
+#include "dml/display_mode_vba.h"
+#include "dcn32/dcn32_dccg.h"
+#include "dcn10/dcn10_resource.h"
+#include "dc_link_ddc.h"
+#include "dcn31/dcn31_panel_cntl.h"
+
+#include "dcn30/dcn30_dwb.h"
+#include "dcn32/dcn32_mmhubbub.h"
+
+#include "dcn/dcn_3_2_0_offset.h"
+#include "dcn/dcn_3_2_0_sh_mask.h"
+#include "dcn/nbio_4_3_0_offset.h"
+
+#include "reg_helper.h"
+#include "dce/dmub_abm.h"
+#include "dce/dmub_psr.h"
+#include "dce/dce_aux.h"
+#include "dce/dce_i2c.h"
+
+#include "dml/dcn30/display_mode_vba_30.h"
+#include "vm_helper.h"
+#include "dcn20/dcn20_vmid.h"
+
+#define DCN_BASE__INST0_SEG1                       0x000000C0
+#define DCN_BASE__INST0_SEG2                       0x000034C0
+#define DCN_BASE__INST0_SEG3                       0x00009000
+#define NBIO_BASE__INST0_SEG1                      0x00000014
+
+#define MAX_INSTANCE                                        6
+#define MAX_SEGMENT                                         6
+
+struct IP_BASE_INSTANCE {
+	unsigned int segment[MAX_SEGMENT];
+};
+
+struct IP_BASE {
+	struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
+};
+
+static const struct IP_BASE DCN_BASE = { { { { 0x00000012, 0x000000C0, 0x000034C0, 0x00009000, 0x02403C00, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } } } };
+
+#define DC_LOGGER_INIT(logger)
+
+#define DCN3_2_DEFAULT_DET_SIZE 256
+#define DCN3_2_MAX_DET_SIZE 1152
+#define DCN3_2_MIN_DET_SIZE 128
+#define DCN3_2_MIN_COMPBUF_SIZE_KB 128
+
+struct _vcs_dpi_ip_params_st dcn3_2_ip = {
+	.gpuvm_enable = 1,
+	.gpuvm_max_page_table_levels = 1,
+	.hostvm_enable = 0,
+	.rob_buffer_size_kbytes = 128,
+	.det_buffer_size_kbytes = DCN3_2_DEFAULT_DET_SIZE,
+	.config_return_buffer_size_in_kbytes = 1280,
+	.compressed_buffer_segment_size_in_kbytes = 64,
+	.meta_fifo_size_in_kentries = 22,
+	.zero_size_buffer_entries = 512,
+	.compbuf_reserved_space_64b = 256,
+	.compbuf_reserved_space_zs = 64,
+	.dpp_output_buffer_pixels = 2560,
+	.opp_output_buffer_lines = 1,
+	.pixel_chunk_size_kbytes = 8,
+	.alpha_pixel_chunk_size_kbytes = 4, // not appearing in spreadsheet, match c code from hw team
+	.min_pixel_chunk_size_bytes = 1024,
+	.dcc_meta_buffer_size_bytes = 6272,
+	.meta_chunk_size_kbytes = 2,
+	.min_meta_chunk_size_bytes = 256,
+	.writeback_chunk_size_kbytes = 8,
+	.ptoi_supported = false,
+	.num_dsc = 4,
+	.maximum_dsc_bits_per_component = 12,
+	.maximum_pixels_per_line_per_dsc_unit = 6016,
+	.dsc422_native_support = true,
+	.is_line_buffer_bpp_fixed = true,
+	.line_buffer_fixed_bpp = 57,
+	.line_buffer_size_bits = 1171920, //DPP doc, DCN3_2_DisplayMode_73.xlsm still shows as 986880 bits with 48 bpp
+	.max_line_buffer_lines = 32,
+	.writeback_interface_buffer_size_kbytes = 90,
+	.max_num_dpp = 4,
+	.max_num_otg = 4,
+	.max_num_hdmi_frl_outputs = 1,
+	.max_num_wb = 1,
+	.max_dchub_pscl_bw_pix_per_clk = 4,
+	.max_pscl_lb_bw_pix_per_clk = 2,
+	.max_lb_vscl_bw_pix_per_clk = 4,
+	.max_vscl_hscl_bw_pix_per_clk = 4,
+	.max_hscl_ratio = 6,
+	.max_vscl_ratio = 6,
+	.max_hscl_taps = 8,
+	.max_vscl_taps = 8,
+	.dpte_buffer_size_in_pte_reqs_luma = 64,
+	.dpte_buffer_size_in_pte_reqs_chroma = 34,
+	.dispclk_ramp_margin_percent = 1,
+	.max_inter_dcn_tile_repeaters = 8,
+	.cursor_buffer_size = 16,
+	.cursor_chunk_size = 2,
+	.writeback_line_buffer_buffer_size = 0,
+	.writeback_min_hscl_ratio = 1,
+	.writeback_min_vscl_ratio = 1,
+	.writeback_max_hscl_ratio = 1,
+	.writeback_max_vscl_ratio = 1,
+	.writeback_max_hscl_taps = 1,
+	.writeback_max_vscl_taps = 1,
+	.dppclk_delay_subtotal = 47,
+	.dppclk_delay_scl = 50,
+	.dppclk_delay_scl_lb_only = 16,
+	.dppclk_delay_cnvc_formatter = 28,
+	.dppclk_delay_cnvc_cursor = 6,
+	.dispclk_delay_subtotal = 125,
+	.dynamic_metadata_vm_enabled = false,
+	.odm_combine_4to1_supported = false,
+	.dcc_supported = true,
+	.max_num_dp2p0_outputs = 2,
+	.max_num_dp2p0_streams = 4,
+};
+
+struct _vcs_dpi_soc_bounding_box_st dcn3_2_soc = {
+	.clock_limits = {
+		{
+			.state = 0,
+			.dcfclk_mhz = 1564.0,
+			.fabricclk_mhz = 400.0,
+			.dispclk_mhz = 2150.0,
+			.dppclk_mhz = 2150.0,
+			.phyclk_mhz = 810.0,
+			.phyclk_d18_mhz = 667.0,
+			.phyclk_d32_mhz = 625.0,
+			.socclk_mhz = 1200.0,
+			.dscclk_mhz = 716.667,
+			.dram_speed_mts = 1600.0,
+			.dtbclk_mhz = 1564.0,
+		},
+	},
+	.num_states = 1,
+	.sr_exit_time_us = 5.20,
+	.sr_enter_plus_exit_time_us = 9.60,
+	.sr_exit_z8_time_us = 285.0,
+	.sr_enter_plus_exit_z8_time_us = 320,
+	.writeback_latency_us = 12.0,
+	.round_trip_ping_latency_dcfclk_cycles = 263,
+	.urgent_latency_pixel_data_only_us = 4.0,
+	.urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
+	.urgent_latency_vm_data_only_us = 4.0,
+	.fclk_change_latency_us = 20,
+	.usr_retraining_latency_us = 2,
+	.smn_latency_us = 2,
+	.mall_allocated_for_dcn_mbytes = 64,
+	.urgent_out_of_order_return_per_channel_pixel_only_bytes = 4096,
+	.urgent_out_of_order_return_per_channel_pixel_and_vm_bytes = 4096,
+	.urgent_out_of_order_return_per_channel_vm_only_bytes = 4096,
+	.pct_ideal_sdp_bw_after_urgent = 100.0,
+	.pct_ideal_fabric_bw_after_urgent = 67.0,
+	.pct_ideal_dram_sdp_bw_after_urgent_pixel_only = 20.0,
+	.pct_ideal_dram_sdp_bw_after_urgent_pixel_and_vm = 60.0, // N/A, for now keep as is until DML implemented
+	.pct_ideal_dram_sdp_bw_after_urgent_vm_only = 30.0, // N/A, for now keep as is until DML implemented
+	.pct_ideal_dram_bw_after_urgent_strobe = 67.0,
+	.max_avg_sdp_bw_use_normal_percent = 80.0,
+	.max_avg_fabric_bw_use_normal_percent = 60.0,
+	.max_avg_dram_bw_use_normal_strobe_percent = 50.0,
+	.max_avg_dram_bw_use_normal_percent = 15.0,
+	.num_chans = 8,
+	.dram_channel_width_bytes = 2,
+	.fabric_datapath_to_dcn_data_return_bytes = 64,
+	.return_bus_width_bytes = 64,
+	.downspread_percent = 0.38,
+	.dcn_downspread_percent = 0.5,
+	.dram_clock_change_latency_us = 400,
+	.dispclk_dppclk_vco_speed_mhz = 4300.0,
+	.do_urgent_latency_adjustment = true,
+	.urgent_latency_adjustment_fabric_clock_component_us = 1.0,
+	.urgent_latency_adjustment_fabric_clock_reference_mhz = 1000,
+};
+
+enum dcn32_clk_src_array_id {
+	DCN32_CLK_SRC_PLL0,
+	DCN32_CLK_SRC_PLL1,
+	DCN32_CLK_SRC_PLL2,
+	DCN32_CLK_SRC_PLL3,
+	DCN32_CLK_SRC_PLL4,
+	DCN32_CLK_SRC_TOTAL
+};
+
+/* begin *********************
+ * macros to expand register list macros defined in HW object header files
+ */
+
+/* DCN */
+/* TODO awful hack. fixup dcn20_dwb.h */
+#undef BASE_INNER
+#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+#define BASE(seg) BASE_INNER(seg)
+
+#define SR(reg_name)\
+		.reg_name = BASE(reg ## reg_name ## _BASE_IDX) +  \
+					reg ## reg_name
+
+#define SRI(reg_name, block, id)\
+	.reg_name = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRI2(reg_name, block, id)\
+	.reg_name = BASE(reg ## reg_name ## _BASE_IDX) + \
+					reg ## reg_name
+
+#define SRIR(var_name, reg_name, block, id)\
+	.var_name = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRII(reg_name, block, id)\
+	.reg_name[id] = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRII_MPC_RMU(reg_name, block, id)\
+	.RMU##_##reg_name[id] = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define SRII_DWB(reg_name, temp_name, block, id)\
+	.reg_name[id] = BASE(reg ## block ## id ## _ ## temp_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## temp_name
+
+#define DCCG_SRII(reg_name, block, id)\
+	.block ## _ ## reg_name[id] = BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					reg ## block ## id ## _ ## reg_name
+
+#define VUPDATE_SRII(reg_name, block, id)\
+	.reg_name[id] = BASE(reg ## reg_name ## _ ## block ## id ## _BASE_IDX) + \
+					reg ## reg_name ## _ ## block ## id
+
+/* NBIO */
+#define NBIO_BASE_INNER(seg) \
+	NBIO_BASE__INST0_SEG ## seg
+
+#define NBIO_BASE(seg) \
+	NBIO_BASE_INNER(seg)
+
+#define NBIO_SR(reg_name)\
+		.reg_name = NBIO_BASE(regBIF_BX0_ ## reg_name ## _BASE_IDX) + \
+					regBIF_BX0_ ## reg_name
+
+#define CTX ctx
+#define REG(reg_name) \
+	(DCN_BASE.instance[0].segment[reg ## reg_name ## _BASE_IDX] + reg ## reg_name)
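+
+/*
+ * Illustrative expansion of the scheme above (the segment index shown is an
+ * example; the real _BASE_IDX values come from the dcn_3_2_0_offset.h headers):
+ *
+ *   SRI(OTG_CONTROL, OTG, 0)
+ *     -> .OTG_CONTROL = BASE(regOTG0_OTG_CONTROL_BASE_IDX) + regOTG0_OTG_CONTROL
+ *     -> .OTG_CONTROL = DCN_BASE__INST0_SEG2 + regOTG0_OTG_CONTROL
+ *
+ * i.e. each register address is its segment base plus the per-register offset,
+ * so the generic register lists from the HW object headers can be reused
+ * across DCN generations.
+ */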
+
+static const struct bios_registers bios_regs = {
+		NBIO_SR(BIOS_SCRATCH_3),
+		NBIO_SR(BIOS_SCRATCH_6)
+};
+
+#define clk_src_regs(index, pllid)\
+[index] = {\
+	CS_COMMON_REG_LIST_DCN3_0(index, pllid),\
+}
+
+static const struct dce110_clk_src_regs clk_src_regs[] = {
+	clk_src_regs(0, A),
+	clk_src_regs(1, B),
+	clk_src_regs(2, C),
+	clk_src_regs(3, D)
+};
+
+static const struct dce110_clk_src_shift cs_shift = {
+		CS_COMMON_MASK_SH_LIST_DCN3_2(__SHIFT)
+};
+
+static const struct dce110_clk_src_mask cs_mask = {
+		CS_COMMON_MASK_SH_LIST_DCN3_2(_MASK)
+};
+
+#define abm_regs(id)\
+[id] = {\
+		ABM_DCN32_REG_LIST(id)\
+}
+
+static const struct dce_abm_registers abm_regs[] = {
+		abm_regs(0),
+		abm_regs(1),
+		abm_regs(2),
+		abm_regs(3),
+};
+
+static const struct dce_abm_shift abm_shift = {
+		ABM_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dce_abm_mask abm_mask = {
+		ABM_MASK_SH_LIST_DCN32(_MASK)
+};
+
+#define audio_regs(id)\
+[id] = {\
+		AUD_COMMON_REG_LIST(id)\
+}
+
+static const struct dce_audio_registers audio_regs[] = {
+	audio_regs(0),
+	audio_regs(1),
+	audio_regs(2),
+	audio_regs(3),
+	audio_regs(4)
+};
+
+#define DCE120_AUD_COMMON_MASK_SH_LIST(mask_sh)\
+		SF(AZF0ENDPOINT0_AZALIA_F0_CODEC_ENDPOINT_INDEX, AZALIA_ENDPOINT_REG_INDEX, mask_sh),\
+		SF(AZF0ENDPOINT0_AZALIA_F0_CODEC_ENDPOINT_DATA, AZALIA_ENDPOINT_REG_DATA, mask_sh),\
+		AUD_COMMON_MASK_SH_LIST_BASE(mask_sh)
+
+static const struct dce_audio_shift audio_shift = {
+		DCE120_AUD_COMMON_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_audio_mask audio_mask = {
+		DCE120_AUD_COMMON_MASK_SH_LIST(_MASK)
+};
+
+#define vpg_regs(id)\
+[id] = {\
+	VPG_DCN3_REG_LIST(id)\
+}
+
+static const struct dcn30_vpg_registers vpg_regs[] = {
+	vpg_regs(0),
+	vpg_regs(1),
+	vpg_regs(2),
+	vpg_regs(3),
+	vpg_regs(4),
+	vpg_regs(5),
+	vpg_regs(6),
+	vpg_regs(7),
+	vpg_regs(8),
+	vpg_regs(9),
+};
+
+static const struct dcn30_vpg_shift vpg_shift = {
+	DCN3_VPG_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn30_vpg_mask vpg_mask = {
+	DCN3_VPG_MASK_SH_LIST(_MASK)
+};
+
+#define afmt_regs(id)\
+[id] = {\
+	AFMT_DCN3_REG_LIST(id)\
+}
+
+static const struct dcn30_afmt_registers afmt_regs[] = {
+	afmt_regs(0),
+	afmt_regs(1),
+	afmt_regs(2),
+	afmt_regs(3),
+	afmt_regs(4),
+	afmt_regs(5)
+};
+
+static const struct dcn30_afmt_shift afmt_shift = {
+	DCN3_AFMT_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn30_afmt_mask afmt_mask = {
+	DCN3_AFMT_MASK_SH_LIST(_MASK)
+};
+
+#define apg_regs(id)\
+[id] = {\
+	APG_DCN31_REG_LIST(id)\
+}
+
+static const struct dcn31_apg_registers apg_regs[] = {
+	apg_regs(0),
+	apg_regs(1),
+	apg_regs(2),
+	apg_regs(3)
+};
+
+static const struct dcn31_apg_shift apg_shift = {
+	DCN31_APG_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn31_apg_mask apg_mask = {
+		DCN31_APG_MASK_SH_LIST(_MASK)
+};
+
+#define stream_enc_regs(id)\
+[id] = {\
+	SE_DCN32_REG_LIST(id)\
+}
+
+static const struct dcn10_stream_enc_registers stream_enc_regs[] = {
+	stream_enc_regs(0),
+	stream_enc_regs(1),
+	stream_enc_regs(2),
+	stream_enc_regs(3),
+	stream_enc_regs(4)
+};
+
+static const struct dcn10_stream_encoder_shift se_shift = {
+		SE_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn10_stream_encoder_mask se_mask = {
+		SE_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+
+#define aux_regs(id)\
+[id] = {\
+	DCN2_AUX_REG_LIST(id)\
+}
+
+static const struct dcn10_link_enc_aux_registers link_enc_aux_regs[] = {
+		aux_regs(0),
+		aux_regs(1),
+		aux_regs(2),
+		aux_regs(3),
+		aux_regs(4)
+};
+
+#define hpd_regs(id)\
+[id] = {\
+	HPD_REG_LIST(id)\
+}
+
+static const struct dcn10_link_enc_hpd_registers link_enc_hpd_regs[] = {
+		hpd_regs(0),
+		hpd_regs(1),
+		hpd_regs(2),
+		hpd_regs(3),
+		hpd_regs(4)
+};
+
+#define link_regs(id, phyid)\
+[id] = {\
+	LE_DCN31_REG_LIST(id), \
+	UNIPHY_DCN2_REG_LIST(phyid), \
+	/*DPCS_DCN31_REG_LIST(id),*/ \
+}
+
+static const struct dcn10_link_enc_registers link_enc_regs[] = {
+	link_regs(0, A),
+	link_regs(1, B),
+	link_regs(2, C),
+	link_regs(3, D),
+	link_regs(4, E)
+};
+
+static const struct dcn10_link_enc_shift le_shift = {
+	LINK_ENCODER_MASK_SH_LIST_DCN31(__SHIFT), \
+	//DPCS_DCN31_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn10_link_enc_mask le_mask = {
+	LINK_ENCODER_MASK_SH_LIST_DCN31(_MASK), \
+	//DPCS_DCN31_MASK_SH_LIST(_MASK)
+};
+
+#define hpo_dp_stream_encoder_reg_list(id)\
+[id] = {\
+	DCN3_1_HPO_DP_STREAM_ENC_REG_LIST(id)\
+}
+
+static const struct dcn31_hpo_dp_stream_encoder_registers hpo_dp_stream_enc_regs[] = {
+	hpo_dp_stream_encoder_reg_list(0),
+	hpo_dp_stream_encoder_reg_list(1),
+	hpo_dp_stream_encoder_reg_list(2),
+	hpo_dp_stream_encoder_reg_list(3),
+};
+
+static const struct dcn31_hpo_dp_stream_encoder_shift hpo_dp_se_shift = {
+	DCN3_1_HPO_DP_STREAM_ENC_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn31_hpo_dp_stream_encoder_mask hpo_dp_se_mask = {
+	DCN3_1_HPO_DP_STREAM_ENC_MASK_SH_LIST(_MASK)
+};
+
+
+#define hpo_dp_link_encoder_reg_list(id)\
+[id] = {\
+	DCN3_1_HPO_DP_LINK_ENC_REG_LIST(id),\
+	/*DCN3_1_RDPCSTX_REG_LIST(0),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(1),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(2),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(3),*/\
+	/*DCN3_1_RDPCSTX_REG_LIST(4)*/\
+}
+
+static const struct dcn31_hpo_dp_link_encoder_registers hpo_dp_link_enc_regs[] = {
+	hpo_dp_link_encoder_reg_list(0),
+	hpo_dp_link_encoder_reg_list(1),
+};
+
+static const struct dcn31_hpo_dp_link_encoder_shift hpo_dp_le_shift = {
+	DCN3_2_HPO_DP_LINK_ENC_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn31_hpo_dp_link_encoder_mask hpo_dp_le_mask = {
+	DCN3_2_HPO_DP_LINK_ENC_MASK_SH_LIST(_MASK)
+};
+
+#define dpp_regs(id)\
+[id] = {\
+	DPP_REG_LIST_DCN30_COMMON(id),\
+}
+
+static const struct dcn3_dpp_registers dpp_regs[] = {
+	dpp_regs(0),
+	dpp_regs(1),
+	dpp_regs(2),
+	dpp_regs(3)
+};
+
+static const struct dcn3_dpp_shift tf_shift = {
+		DPP_REG_LIST_SH_MASK_DCN30_COMMON(__SHIFT)
+};
+
+static const struct dcn3_dpp_mask tf_mask = {
+		DPP_REG_LIST_SH_MASK_DCN30_COMMON(_MASK)
+};
+
+
+#define opp_regs(id)\
+[id] = {\
+	OPP_REG_LIST_DCN30(id),\
+}
+
+static const struct dcn20_opp_registers opp_regs[] = {
+	opp_regs(0),
+	opp_regs(1),
+	opp_regs(2),
+	opp_regs(3)
+};
+
+static const struct dcn20_opp_shift opp_shift = {
+	OPP_MASK_SH_LIST_DCN20(__SHIFT)
+};
+
+static const struct dcn20_opp_mask opp_mask = {
+	OPP_MASK_SH_LIST_DCN20(_MASK)
+};
+
+#define aux_engine_regs(id)\
+[id] = {\
+	AUX_COMMON_REG_LIST0(id), \
+	.AUXN_IMPCAL = 0, \
+	.AUXP_IMPCAL = 0, \
+	.AUX_RESET_MASK = DP_AUX0_AUX_CONTROL__AUX_RESET_MASK, \
+}
+
+static const struct dce110_aux_registers aux_engine_regs[] = {
+		aux_engine_regs(0),
+		aux_engine_regs(1),
+		aux_engine_regs(2),
+		aux_engine_regs(3),
+		aux_engine_regs(4)
+};
+
+static const struct dce110_aux_registers_shift aux_shift = {
+	DCN_AUX_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce110_aux_registers_mask aux_mask = {
+	DCN_AUX_MASK_SH_LIST(_MASK)
+};
+
+
+#define dwbc_regs_dcn3(id)\
+[id] = {\
+	DWBC_COMMON_REG_LIST_DCN30(id),\
+}
+
+static const struct dcn30_dwbc_registers dwbc30_regs[] = {
+	dwbc_regs_dcn3(0),
+};
+
+static const struct dcn30_dwbc_shift dwbc30_shift = {
+	DWBC_COMMON_MASK_SH_LIST_DCN30(__SHIFT)
+};
+
+static const struct dcn30_dwbc_mask dwbc30_mask = {
+	DWBC_COMMON_MASK_SH_LIST_DCN30(_MASK)
+};
+
+#define mcif_wb_regs_dcn3(id)\
+[id] = {\
+	MCIF_WB_COMMON_REG_LIST_DCN32(id),\
+}
+
+static const struct dcn30_mmhubbub_registers mcif_wb30_regs[] = {
+	mcif_wb_regs_dcn3(0)
+};
+
+static const struct dcn30_mmhubbub_shift mcif_wb30_shift = {
+	MCIF_WB_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn30_mmhubbub_mask mcif_wb30_mask = {
+	MCIF_WB_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+#define dsc_regsDCN20(id)\
+[id] = {\
+	DSC_REG_LIST_DCN20(id)\
+}
+
+static const struct dcn20_dsc_registers dsc_regs[] = {
+	dsc_regsDCN20(0),
+	dsc_regsDCN20(1),
+	dsc_regsDCN20(2),
+	dsc_regsDCN20(3)
+};
+
+static const struct dcn20_dsc_shift dsc_shift = {
+	DSC_REG_LIST_SH_MASK_DCN20(__SHIFT)
+};
+
+static const struct dcn20_dsc_mask dsc_mask = {
+	DSC_REG_LIST_SH_MASK_DCN20(_MASK)
+};
+
+static const struct dcn30_mpc_registers mpc_regs = {
+		MPC_REG_LIST_DCN3_0(0),
+		MPC_REG_LIST_DCN3_0(1),
+		MPC_REG_LIST_DCN3_0(2),
+		MPC_REG_LIST_DCN3_0(3),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(0),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(1),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(2),
+		MPC_OUT_MUX_REG_LIST_DCN3_0(3),
+		MPC_MCM_REG_LIST_DCN32(0),
+		MPC_MCM_REG_LIST_DCN32(1),
+		MPC_MCM_REG_LIST_DCN32(2),
+		MPC_MCM_REG_LIST_DCN32(3),
+		MPC_DWB_MUX_REG_LIST_DCN3_0(0),
+};
+
+static const struct dcn30_mpc_shift mpc_shift = {
+	MPC_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn30_mpc_mask mpc_mask = {
+	MPC_COMMON_MASK_SH_LIST_DCN32(_MASK)
+};
+
+#define optc_regs(id)\
+[id] = {OPTC_COMMON_REG_LIST_DCN3_2(id)}
+
+//#ifdef DIAGS_BUILD
+//static struct dcn_optc_registers optc_regs[] = {
+//#else
+static const struct dcn_optc_registers optc_regs[] = {
+//#endif
+	optc_regs(0),
+	optc_regs(1),
+	optc_regs(2),
+	optc_regs(3)
+};
+
+static const struct dcn_optc_shift optc_shift = {
+	OPTC_COMMON_MASK_SH_LIST_DCN3_2(__SHIFT)
+};
+
+static const struct dcn_optc_mask optc_mask = {
+	OPTC_COMMON_MASK_SH_LIST_DCN3_2(_MASK)
+};
+
+#define hubp_regs(id)\
+[id] = {\
+	HUBP_REG_LIST_DCN32(id)\
+}
+
+static const struct dcn_hubp2_registers hubp_regs[] = {
+		hubp_regs(0),
+		hubp_regs(1),
+		hubp_regs(2),
+		hubp_regs(3)
+};
+
+
+static const struct dcn_hubp2_shift hubp_shift = {
+		HUBP_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn_hubp2_mask hubp_mask = {
+		HUBP_MASK_SH_LIST_DCN32(_MASK)
+};
+static const struct dcn_hubbub_registers hubbub_reg = {
+		HUBBUB_REG_LIST_DCN32(0)
+};
+
+static const struct dcn_hubbub_shift hubbub_shift = {
+		HUBBUB_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dcn_hubbub_mask hubbub_mask = {
+		HUBBUB_MASK_SH_LIST_DCN32(_MASK)
+};
+
+static const struct dccg_registers dccg_regs = {
+		DCCG_REG_LIST_DCN32()
+};
+
+static const struct dccg_shift dccg_shift = {
+		DCCG_MASK_SH_LIST_DCN32(__SHIFT)
+};
+
+static const struct dccg_mask dccg_mask = {
+		DCCG_MASK_SH_LIST_DCN32(_MASK)
+};
+
+
+#define SRII2(reg_name_pre, reg_name_post, id)\
+	.reg_name_pre ## _ ##  reg_name_post[id] = BASE(reg ## reg_name_pre \
+			## id ## _ ## reg_name_post ## _BASE_IDX) + \
+			reg ## reg_name_pre ## id ## _ ## reg_name_post
+
+
+#define HWSEQ_DCN32_REG_LIST()\
+	SR(DCHUBBUB_GLOBAL_TIMER_CNTL), \
+	SR(DIO_MEM_PWR_CTRL), \
+	SR(ODM_MEM_PWR_CTRL3), \
+	SR(MMHUBBUB_MEM_PWR_CNTL), \
+	SR(DCCG_GATE_DISABLE_CNTL), \
+	SR(DCCG_GATE_DISABLE_CNTL2), \
+	SR(DCFCLK_CNTL),\
+	SR(DC_MEM_GLOBAL_PWR_REQ_CNTL), \
+	SRII(PIXEL_RATE_CNTL, OTG, 0), \
+	SRII(PIXEL_RATE_CNTL, OTG, 1),\
+	SRII(PIXEL_RATE_CNTL, OTG, 2),\
+	SRII(PIXEL_RATE_CNTL, OTG, 3),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 0),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 1),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 2),\
+	SRII(PHYPLL_PIXEL_RATE_CNTL, OTG, 3),\
+	SR(MICROSECOND_TIME_BASE_DIV), \
+	SR(MILLISECOND_TIME_BASE_DIV), \
+	SR(DISPCLK_FREQ_CHANGE_CNTL), \
+	SR(RBBMIF_TIMEOUT_DIS), \
+	SR(RBBMIF_TIMEOUT_DIS_2), \
+	SR(DCHUBBUB_CRC_CTRL), \
+	SR(DPP_TOP0_DPP_CRC_CTRL), \
+	SR(DPP_TOP0_DPP_CRC_VAL_B_A), \
+	SR(DPP_TOP0_DPP_CRC_VAL_R_G), \
+	SR(MPC_CRC_CTRL), \
+	SR(MPC_CRC_RESULT_GB), \
+	SR(MPC_CRC_RESULT_C), \
+	SR(MPC_CRC_RESULT_AR), \
+	SR(DOMAIN0_PG_CONFIG), \
+	SR(DOMAIN1_PG_CONFIG), \
+	SR(DOMAIN2_PG_CONFIG), \
+	SR(DOMAIN3_PG_CONFIG), \
+	SR(DOMAIN16_PG_CONFIG), \
+	SR(DOMAIN17_PG_CONFIG), \
+	SR(DOMAIN18_PG_CONFIG), \
+	SR(DOMAIN19_PG_CONFIG), \
+	SR(DOMAIN0_PG_STATUS), \
+	SR(DOMAIN1_PG_STATUS), \
+	SR(DOMAIN2_PG_STATUS), \
+	SR(DOMAIN3_PG_STATUS), \
+	SR(DOMAIN16_PG_STATUS), \
+	SR(DOMAIN17_PG_STATUS), \
+	SR(DOMAIN18_PG_STATUS), \
+	SR(DOMAIN19_PG_STATUS), \
+	SR(D1VGA_CONTROL), \
+	SR(D2VGA_CONTROL), \
+	SR(D3VGA_CONTROL), \
+	SR(D4VGA_CONTROL), \
+	SR(D5VGA_CONTROL), \
+	SR(D6VGA_CONTROL), \
+	SR(DC_IP_REQUEST_CNTL), \
+	SR(AZALIA_AUDIO_DTO), \
+	SR(AZALIA_CONTROLLER_CLOCK_GATING)
+
+static const struct dce_hwseq_registers hwseq_reg = {
+		HWSEQ_DCN32_REG_LIST()
+};
+
+#define HWSEQ_DCN32_MASK_SH_LIST(mask_sh)\
+	HWSEQ_DCN_MASK_SH_LIST(mask_sh), \
+	HWS_SF(, DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_REFDIV, mask_sh), \
+	HWS_SF(, DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN0_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN1_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN2_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN3_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN16_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN17_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN18_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, mask_sh), \
+	HWS_SF(, DOMAIN19_PG_CONFIG, DOMAIN_POWER_GATE, mask_sh), \
+	HWS_SF(, DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN16_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN17_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN18_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DOMAIN19_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, mask_sh), \
+	HWS_SF(, DC_IP_REQUEST_CNTL, IP_REQUEST_EN, mask_sh), \
+	HWS_SF(, AZALIA_AUDIO_DTO, AZALIA_AUDIO_DTO_MODULE, mask_sh), \
+	HWS_SF(, HPO_TOP_CLOCK_CONTROL, HPO_HDMISTREAMCLK_G_GATE_DIS, mask_sh), \
+	HWS_SF(, ODM_MEM_PWR_CTRL3, ODM_MEM_UNASSIGNED_PWR_MODE, mask_sh), \
+	HWS_SF(, ODM_MEM_PWR_CTRL3, ODM_MEM_VBLANK_PWR_MODE, mask_sh), \
+	HWS_SF(, MMHUBBUB_MEM_PWR_CNTL, VGA_MEM_PWR_FORCE, mask_sh)
+
+static const struct dce_hwseq_shift hwseq_shift = {
+		HWSEQ_DCN32_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_hwseq_mask hwseq_mask = {
+		HWSEQ_DCN32_MASK_SH_LIST(_MASK)
+};
+#define vmid_regs(id)\
+[id] = {\
+		DCN20_VMID_REG_LIST(id)\
+}
+
+static const struct dcn_vmid_registers vmid_regs[] = {
+	vmid_regs(0),
+	vmid_regs(1),
+	vmid_regs(2),
+	vmid_regs(3),
+	vmid_regs(4),
+	vmid_regs(5),
+	vmid_regs(6),
+	vmid_regs(7),
+	vmid_regs(8),
+	vmid_regs(9),
+	vmid_regs(10),
+	vmid_regs(11),
+	vmid_regs(12),
+	vmid_regs(13),
+	vmid_regs(14),
+	vmid_regs(15)
+};
+
+static const struct dcn20_vmid_shift vmid_shifts = {
+		DCN20_VMID_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dcn20_vmid_mask vmid_masks = {
+		DCN20_VMID_MASK_SH_LIST(_MASK)
+};
+
+static const struct resource_caps res_cap_dcn32 = {
+	.num_timing_generator = 4,
+	.num_opp = 4,
+	.num_video_plane = 4,
+	.num_audio = 5,
+	.num_stream_encoder = 5,
+	.num_hpo_dp_stream_encoder = 4,
+	.num_hpo_dp_link_encoder = 2,
+	.num_pll = 5,
+	.num_dwb = 1,
+	.num_ddc = 5,
+	.num_vmid = 16,
+	.num_mpc_3dlut = 4,
+	.num_dsc = 4,
+};
+
+static const struct dc_plane_cap plane_cap = {
+	.type = DC_PLANE_TYPE_DCN_UNIVERSAL,
+	.blends_with_above = true,
+	.blends_with_below = true,
+	.per_pixel_alpha = true,
+
+	.pixel_format_support = {
+			.argb8888 = true,
+			.nv12 = true,
+			.fp16 = true,
+			.p010 = true,
+			.ayuv = false,
+	},
+
+	.max_upscale_factor = {
+			.argb8888 = 16000,
+			.nv12 = 16000,
+			.fp16 = 16000
+	},
+
+	// 6:1 downscaling ratio: 1000/6 = 166.666
+	.max_downscale_factor = {
+			.argb8888 = 167,
+			.nv12 = 167,
+			.fp16 = 167
+	},
+	64,
+	64
+};
+
+static const struct dc_debug_options debug_defaults_drv = {
+	.disable_dmcu = true,
+	.force_abm_enable = false,
+	.timing_trace = false,
+	.clock_trace = true,
+	.disable_pplib_clock_request = false,
+	.pipe_split_policy = MPC_SPLIT_DYNAMIC,
+	.force_single_disp_pipe_split = false,
+	.disable_dcc = DCC_ENABLE,
+	.vsr_support = true,
+	.performance_trace = false,
+	.max_downscale_src_width = 7680, /* up to 8K */
+	.disable_pplib_wm_range = false,
+	.scl_reset_length10 = true,
+	.sanity_checks = false,
+	.underflow_assert_delay_us = 0xFFFFFFFF,
+	.dwb_fi_phase = -1, // -1 = disable,
+	.dmub_command_table = true,
+	.enable_mem_low_power = {
+		.bits = {
+			.vga = false,
+			.i2c = false,
+			.dmcu = false, // Previously known to cause a hang on S3 cycles if enabled
+			.dscl = false,
+			.cm = false,
+			.mpc = false,
+			.optc = false,
+		}
+	},
+	.use_max_lb = true,
+	.force_disable_subvp = true
+};
+
+static const struct dc_debug_options debug_defaults_diags = {
+	.disable_dmcu = true,
+	.force_abm_enable = false,
+	.timing_trace = true,
+	.clock_trace = true,
+	.disable_dpp_power_gate = true,
+	.disable_hubp_power_gate = true,
+	.disable_dsc_power_gate = true,
+	.disable_clock_gate = true,
+	.disable_pplib_clock_request = true,
+	.disable_pplib_wm_range = true,
+	.disable_stutter = false,
+	.scl_reset_length10 = true,
+	.dwb_fi_phase = -1, // -1 = disable
+	.dmub_command_table = true,
+	.enable_tri_buf = true,
+	.use_max_lb = true,
+	.force_disable_subvp = true
+};
+
+static struct dce_aux *dcn32_aux_engine_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct aux_engine_dce110 *aux_engine =
+		kzalloc(sizeof(struct aux_engine_dce110), GFP_KERNEL);
+
+	if (!aux_engine)
+		return NULL;
+
+	dce110_aux_engine_construct(aux_engine, ctx, inst,
+				    SW_AUX_TIMEOUT_PERIOD_MULTIPLIER * AUX_TIMEOUT_PERIOD,
+				    &aux_engine_regs[inst],
+					&aux_mask,
+					&aux_shift,
+					ctx->dc->caps.extended_aux_timeout_support);
+
+	return &aux_engine->base;
+}
+#define i2c_inst_regs(id) { I2C_HW_ENGINE_COMMON_REG_LIST_DCN30(id) }
+
+static const struct dce_i2c_registers i2c_hw_regs[] = {
+		i2c_inst_regs(1),
+		i2c_inst_regs(2),
+		i2c_inst_regs(3),
+		i2c_inst_regs(4),
+		i2c_inst_regs(5),
+};
+
+static const struct dce_i2c_shift i2c_shifts = {
+		I2C_COMMON_MASK_SH_LIST_DCN30(__SHIFT)
+};
+
+static const struct dce_i2c_mask i2c_masks = {
+		I2C_COMMON_MASK_SH_LIST_DCN30(_MASK)
+};
+
+static struct dce_i2c_hw *dcn32_i2c_hw_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dce_i2c_hw *dce_i2c_hw =
+		kzalloc(sizeof(struct dce_i2c_hw), GFP_KERNEL);
+
+	if (!dce_i2c_hw)
+		return NULL;
+
+	dcn2_i2c_hw_construct(dce_i2c_hw, ctx, inst,
+				    &i2c_hw_regs[inst], &i2c_shifts, &i2c_masks);
+
+	return dce_i2c_hw;
+}
+
+static struct clock_source *dcn32_clock_source_create(
+		struct dc_context *ctx,
+		struct dc_bios *bios,
+		enum clock_source_id id,
+		const struct dce110_clk_src_regs *regs,
+		bool dp_clk_src)
+{
+	struct dce110_clk_src *clk_src =
+		kzalloc(sizeof(struct dce110_clk_src), GFP_KERNEL);
+
+	if (!clk_src)
+		return NULL;
+
+	if (dcn3_clk_src_construct(clk_src, ctx, bios, id,
+			regs, &cs_shift, &cs_mask)) {
+		clk_src->base.dp_clk_src = dp_clk_src;
+		return &clk_src->base;
+	}
+
+	BREAK_TO_DEBUGGER();
+	return NULL;
+}
+
+static struct hubbub *dcn32_hubbub_create(struct dc_context *ctx)
+{
+	int i;
+
+	struct dcn20_hubbub *hubbub2 = kzalloc(sizeof(struct dcn20_hubbub),
+					  GFP_KERNEL);
+
+	if (!hubbub2)
+		return NULL;
+
+	hubbub32_construct(hubbub2, ctx,
+			&hubbub_reg,
+			&hubbub_shift,
+			&hubbub_mask,
+			ctx->dc->dml.ip.det_buffer_size_kbytes,
+			ctx->dc->dml.ip.pixel_chunk_size_kbytes,
+			ctx->dc->dml.ip.config_return_buffer_size_in_kbytes);
+
+
+	for (i = 0; i < res_cap_dcn32.num_vmid; i++) {
+		struct dcn20_vmid *vmid = &hubbub2->vmid[i];
+
+		vmid->ctx = ctx;
+
+		vmid->regs = &vmid_regs[i];
+		vmid->shifts = &vmid_shifts;
+		vmid->masks = &vmid_masks;
+	}
+
+	return &hubbub2->base;
+}
+
+static struct hubp *dcn32_hubp_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn20_hubp *hubp2 =
+		kzalloc(sizeof(struct dcn20_hubp), GFP_KERNEL);
+
+	if (!hubp2)
+		return NULL;
+
+	if (hubp32_construct(hubp2, ctx, inst,
+			&hubp_regs[inst], &hubp_shift, &hubp_mask))
+		return &hubp2->base;
+
+	BREAK_TO_DEBUGGER();
+	kfree(hubp2);
+	return NULL;
+}
+
+static void dcn32_dpp_destroy(struct dpp **dpp)
+{
+	kfree(TO_DCN30_DPP(*dpp));
+	*dpp = NULL;
+}
+
+static struct dpp *dcn32_dpp_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn3_dpp *dpp3 =
+		kzalloc(sizeof(struct dcn3_dpp), GFP_KERNEL);
+
+	if (!dpp3)
+		return NULL;
+
+	if (dpp32_construct(dpp3, ctx, inst,
+			&dpp_regs[inst], &tf_shift, &tf_mask))
+		return &dpp3->base;
+
+	BREAK_TO_DEBUGGER();
+	kfree(dpp3);
+	return NULL;
+}
+
+static struct mpc *dcn32_mpc_create(
+		struct dc_context *ctx,
+		int num_mpcc,
+		int num_rmu)
+{
+	struct dcn30_mpc *mpc30 = kzalloc(sizeof(struct dcn30_mpc),
+					  GFP_KERNEL);
+
+	if (!mpc30)
+		return NULL;
+
+	dcn32_mpc_construct(mpc30, ctx,
+			&mpc_regs,
+			&mpc_shift,
+			&mpc_mask,
+			num_mpcc,
+			num_rmu);
+
+	return &mpc30->base;
+}
+
+static struct output_pixel_processor *dcn32_opp_create(
+	struct dc_context *ctx, uint32_t inst)
+{
+	struct dcn20_opp *opp2 =
+		kzalloc(sizeof(struct dcn20_opp), GFP_KERNEL);
+
+	if (!opp2) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	}
+
+	dcn20_opp_construct(opp2, ctx, inst,
+			&opp_regs[inst], &opp_shift, &opp_mask);
+	return &opp2->base;
+}
+
+
+static struct timing_generator *dcn32_timing_generator_create(
+		struct dc_context *ctx,
+		uint32_t instance)
+{
+	struct optc *tgn10 =
+		kzalloc(sizeof(struct optc), GFP_KERNEL);
+
+	if (!tgn10)
+		return NULL;
+
+	tgn10->base.inst = instance;
+	tgn10->base.ctx = ctx;
+
+	tgn10->tg_regs = &optc_regs[instance];
+	tgn10->tg_shift = &optc_shift;
+	tgn10->tg_mask = &optc_mask;
+
+	dcn32_timing_generator_init(tgn10);
+
+	return &tgn10->base;
+}
+
+static const struct encoder_feature_support link_enc_feature = {
+		.max_hdmi_deep_color = COLOR_DEPTH_121212,
+		.max_hdmi_pixel_clock = 600000,
+		.hdmi_ycbcr420_supported = true,
+		.dp_ycbcr420_supported = true,
+		.fec_supported = true,
+		.flags.bits.IS_HBR2_CAPABLE = true,
+		.flags.bits.IS_HBR3_CAPABLE = true,
+		.flags.bits.IS_TPS3_CAPABLE = true,
+		.flags.bits.IS_TPS4_CAPABLE = true
+};
+
+static struct link_encoder *dcn32_link_encoder_create(
+	const struct encoder_init_data *enc_init_data)
+{
+	struct dcn20_link_encoder *enc20 =
+		kzalloc(sizeof(struct dcn20_link_encoder), GFP_KERNEL);
+
+	if (!enc20)
+		return NULL;
+
+	dcn32_link_encoder_construct(enc20,
+			enc_init_data,
+			&link_enc_feature,
+			&link_enc_regs[enc_init_data->transmitter],
+			&link_enc_aux_regs[enc_init_data->channel - 1],
+			&link_enc_hpd_regs[enc_init_data->hpd_source],
+			&le_shift,
+			&le_mask);
+
+	return &enc20->enc10.base;
+}
+
+struct panel_cntl *dcn32_panel_cntl_create(const struct panel_cntl_init_data *init_data)
+{
+	struct dcn31_panel_cntl *panel_cntl =
+		kzalloc(sizeof(struct dcn31_panel_cntl), GFP_KERNEL);
+
+	if (!panel_cntl)
+		return NULL;
+
+	dcn31_panel_cntl_construct(panel_cntl, init_data);
+
+	return &panel_cntl->base;
+}
+
+static void read_dce_straps(
+	struct dc_context *ctx,
+	struct resource_straps *straps)
+{
+	generic_reg_get(ctx, regDC_PINSTRAPS + BASE(regDC_PINSTRAPS_BASE_IDX),
+		FN(DC_PINSTRAPS, DC_PINSTRAPS_AUDIO), &straps->dc_pinstraps_audio);
+
+}
+
+static struct audio *dcn32_create_audio(
+		struct dc_context *ctx, unsigned int inst)
+{
+	return dce_audio_create(ctx, inst,
+			&audio_regs[inst], &audio_shift, &audio_mask);
+}
+
+static struct vpg *dcn32_vpg_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn30_vpg *vpg3 = kzalloc(sizeof(struct dcn30_vpg), GFP_KERNEL);
+
+	if (!vpg3)
+		return NULL;
+
+	vpg3_construct(vpg3, ctx, inst,
+			&vpg_regs[inst],
+			&vpg_shift,
+			&vpg_mask);
+
+	return &vpg3->base;
+}
+
+static struct afmt *dcn32_afmt_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn30_afmt *afmt3 = kzalloc(sizeof(struct dcn30_afmt), GFP_KERNEL);
+
+	if (!afmt3)
+		return NULL;
+
+	afmt3_construct(afmt3, ctx, inst,
+			&afmt_regs[inst],
+			&afmt_shift,
+			&afmt_mask);
+
+	return &afmt3->base;
+}
+
+static struct apg *dcn31_apg_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dcn31_apg *apg31 = kzalloc(sizeof(struct dcn31_apg), GFP_KERNEL);
+
+	if (!apg31)
+		return NULL;
+
+	apg31_construct(apg31, ctx, inst,
+			&apg_regs[inst],
+			&apg_shift,
+			&apg_mask);
+
+	return &apg31->base;
+}
+
+static struct stream_encoder *dcn32_stream_encoder_create(
+	enum engine_id eng_id,
+	struct dc_context *ctx)
+{
+	struct dcn10_stream_encoder *enc1;
+	struct vpg *vpg;
+	struct afmt *afmt;
+	int vpg_inst;
+	int afmt_inst;
+
+	/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
+	if (eng_id <= ENGINE_ID_DIGF) {
+		vpg_inst = eng_id;
+		afmt_inst = eng_id;
+	} else
+		return NULL;
+
+	enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
+	vpg = dcn32_vpg_create(ctx, vpg_inst);
+	afmt = dcn32_afmt_create(ctx, afmt_inst);
+
+	if (!enc1 || !vpg || !afmt) {
+		kfree(enc1);
+		kfree(vpg);
+		kfree(afmt);
+		return NULL;
+	}
+
+	dcn32_dio_stream_encoder_construct(enc1, ctx, ctx->dc_bios,
+					eng_id, vpg, afmt,
+					&stream_enc_regs[eng_id],
+					&se_shift, &se_mask);
+
+	return &enc1->base;
+}
+
+static struct hpo_dp_stream_encoder *dcn32_hpo_dp_stream_encoder_create(
+	enum engine_id eng_id,
+	struct dc_context *ctx)
+{
+	struct dcn31_hpo_dp_stream_encoder *hpo_dp_enc31;
+	struct vpg *vpg;
+	struct apg *apg;
+	uint32_t hpo_dp_inst;
+	uint32_t vpg_inst;
+	uint32_t apg_inst;
+
+	ASSERT((eng_id >= ENGINE_ID_HPO_DP_0) && (eng_id <= ENGINE_ID_HPO_DP_3));
+	hpo_dp_inst = eng_id - ENGINE_ID_HPO_DP_0;
+
+	/* Mapping of VPG register blocks to HPO DP block instance:
+	 * VPG[6] -> HPO_DP[0]
+	 * VPG[7] -> HPO_DP[1]
+	 * VPG[8] -> HPO_DP[2]
+	 * VPG[9] -> HPO_DP[3]
+	 */
+	vpg_inst = hpo_dp_inst + 6;
+
+	/* Mapping of APG register blocks to HPO DP block instance:
+	 * APG[0] -> HPO_DP[0]
+	 * APG[1] -> HPO_DP[1]
+	 * APG[2] -> HPO_DP[2]
+	 * APG[3] -> HPO_DP[3]
+	 */
+	apg_inst = hpo_dp_inst;
+
+	/* allocate HPO stream encoder and create VPG sub-block */
+	hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_stream_encoder), GFP_KERNEL);
+	vpg = dcn32_vpg_create(ctx, vpg_inst);
+	apg = dcn31_apg_create(ctx, apg_inst);
+
+	if (!hpo_dp_enc31 || !vpg || !apg) {
+		kfree(hpo_dp_enc31);
+		kfree(vpg);
+		kfree(apg);
+		return NULL;
+	}
+
+	dcn31_hpo_dp_stream_encoder_construct(hpo_dp_enc31, ctx, ctx->dc_bios,
+					hpo_dp_inst, eng_id, vpg, apg,
+					&hpo_dp_stream_enc_regs[hpo_dp_inst],
+					&hpo_dp_se_shift, &hpo_dp_se_mask);
+
+	return &hpo_dp_enc31->base;
+}
+
+static struct hpo_dp_link_encoder *dcn32_hpo_dp_link_encoder_create(
+	uint8_t inst,
+	struct dc_context *ctx)
+{
+	struct dcn31_hpo_dp_link_encoder *hpo_dp_enc31;
+
+	/* allocate HPO link encoder */
+	hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+	if (!hpo_dp_enc31)
+		return NULL;
+
+	hpo_dp_link_encoder32_construct(hpo_dp_enc31, ctx, inst,
+					&hpo_dp_link_enc_regs[inst],
+					&hpo_dp_le_shift, &hpo_dp_le_mask);
+
+	return &hpo_dp_enc31->base;
+}
+
+static struct dce_hwseq *dcn32_hwseq_create(
+	struct dc_context *ctx)
+{
+	struct dce_hwseq *hws = kzalloc(sizeof(struct dce_hwseq), GFP_KERNEL);
+
+	if (hws) {
+		hws->ctx = ctx;
+		hws->regs = &hwseq_reg;
+		hws->shifts = &hwseq_shift;
+		hws->masks = &hwseq_mask;
+	}
+	return hws;
+}
+static const struct resource_create_funcs res_create_funcs = {
+	.read_dce_straps = read_dce_straps,
+	.create_audio = dcn32_create_audio,
+	.create_stream_encoder = dcn32_stream_encoder_create,
+	.create_hpo_dp_stream_encoder = dcn32_hpo_dp_stream_encoder_create,
+	.create_hpo_dp_link_encoder = dcn32_hpo_dp_link_encoder_create,
+	.create_hwseq = dcn32_hwseq_create,
+};
+
+static const struct resource_create_funcs res_create_maximus_funcs = {
+	.read_dce_straps = NULL,
+	.create_audio = NULL,
+	.create_stream_encoder = NULL,
+	.create_hpo_dp_stream_encoder = dcn32_hpo_dp_stream_encoder_create,
+	.create_hpo_dp_link_encoder = dcn32_hpo_dp_link_encoder_create,
+	.create_hwseq = dcn32_hwseq_create,
+};
+
+static void dcn32_resource_destruct(struct dcn32_resource_pool *pool)
+{
+	unsigned int i;
+
+	for (i = 0; i < pool->base.stream_enc_count; i++) {
+		if (pool->base.stream_enc[i] != NULL) {
+			if (pool->base.stream_enc[i]->vpg != NULL) {
+				kfree(DCN30_VPG_FROM_VPG(pool->base.stream_enc[i]->vpg));
+				pool->base.stream_enc[i]->vpg = NULL;
+			}
+			if (pool->base.stream_enc[i]->afmt != NULL) {
+				kfree(DCN30_AFMT_FROM_AFMT(pool->base.stream_enc[i]->afmt));
+				pool->base.stream_enc[i]->afmt = NULL;
+			}
+			kfree(DCN10STRENC_FROM_STRENC(pool->base.stream_enc[i]));
+			pool->base.stream_enc[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.hpo_dp_stream_enc_count; i++) {
+		if (pool->base.hpo_dp_stream_enc[i] != NULL) {
+			if (pool->base.hpo_dp_stream_enc[i]->vpg != NULL) {
+				kfree(DCN30_VPG_FROM_VPG(pool->base.hpo_dp_stream_enc[i]->vpg));
+				pool->base.hpo_dp_stream_enc[i]->vpg = NULL;
+			}
+			if (pool->base.hpo_dp_stream_enc[i]->apg != NULL) {
+				kfree(DCN31_APG_FROM_APG(pool->base.hpo_dp_stream_enc[i]->apg));
+				pool->base.hpo_dp_stream_enc[i]->apg = NULL;
+			}
+			kfree(DCN3_1_HPO_DP_STREAM_ENC_FROM_HPO_STREAM_ENC(pool->base.hpo_dp_stream_enc[i]));
+			pool->base.hpo_dp_stream_enc[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.hpo_dp_link_enc_count; i++) {
+		if (pool->base.hpo_dp_link_enc[i] != NULL) {
+			kfree(DCN3_1_HPO_DP_LINK_ENC_FROM_HPO_LINK_ENC(pool->base.hpo_dp_link_enc[i]));
+			pool->base.hpo_dp_link_enc[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_dsc; i++) {
+		if (pool->base.dscs[i] != NULL)
+			dcn20_dsc_destroy(&pool->base.dscs[i]);
+	}
+
+	if (pool->base.mpc != NULL) {
+		kfree(TO_DCN20_MPC(pool->base.mpc));
+		pool->base.mpc = NULL;
+	}
+	if (pool->base.hubbub != NULL) {
+		kfree(TO_DCN20_HUBBUB(pool->base.hubbub));
+		pool->base.hubbub = NULL;
+	}
+	for (i = 0; i < pool->base.pipe_count; i++) {
+		if (pool->base.dpps[i] != NULL)
+			dcn32_dpp_destroy(&pool->base.dpps[i]);
+
+		if (pool->base.ipps[i] != NULL)
+			pool->base.ipps[i]->funcs->ipp_destroy(&pool->base.ipps[i]);
+
+		if (pool->base.hubps[i] != NULL) {
+			kfree(TO_DCN20_HUBP(pool->base.hubps[i]));
+			pool->base.hubps[i] = NULL;
+		}
+
+		if (pool->base.irqs != NULL) {
+			dal_irq_service_destroy(&pool->base.irqs);
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
+		if (pool->base.engines[i] != NULL)
+			dce110_engine_destroy(&pool->base.engines[i]);
+		if (pool->base.hw_i2cs[i] != NULL) {
+			kfree(pool->base.hw_i2cs[i]);
+			pool->base.hw_i2cs[i] = NULL;
+		}
+		if (pool->base.sw_i2cs[i] != NULL) {
+			kfree(pool->base.sw_i2cs[i]);
+			pool->base.sw_i2cs[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_opp; i++) {
+		if (pool->base.opps[i] != NULL)
+			pool->base.opps[i]->funcs->opp_destroy(&pool->base.opps[i]);
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) {
+		if (pool->base.timing_generators[i] != NULL)	{
+			kfree(DCN10TG_FROM_TG(pool->base.timing_generators[i]));
+			pool->base.timing_generators[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_dwb; i++) {
+		if (pool->base.dwbc[i] != NULL) {
+			kfree(TO_DCN30_DWBC(pool->base.dwbc[i]));
+			pool->base.dwbc[i] = NULL;
+		}
+		if (pool->base.mcif_wb[i] != NULL) {
+			kfree(TO_DCN30_MMHUBBUB(pool->base.mcif_wb[i]));
+			pool->base.mcif_wb[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.audio_count; i++) {
+		if (pool->base.audios[i])
+			dce_aud_destroy(&pool->base.audios[i]);
+	}
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] != NULL) {
+			dcn20_clock_source_destroy(&pool->base.clock_sources[i]);
+			pool->base.clock_sources[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_mpc_3dlut; i++) {
+		if (pool->base.mpc_lut[i] != NULL) {
+			dc_3dlut_func_release(pool->base.mpc_lut[i]);
+			pool->base.mpc_lut[i] = NULL;
+		}
+		if (pool->base.mpc_shaper[i] != NULL) {
+			dc_transfer_func_release(pool->base.mpc_shaper[i]);
+			pool->base.mpc_shaper[i] = NULL;
+		}
+	}
+
+	if (pool->base.dp_clock_source != NULL) {
+		dcn20_clock_source_destroy(&pool->base.dp_clock_source);
+		pool->base.dp_clock_source = NULL;
+	}
+
+	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++) {
+		if (pool->base.multiple_abms[i] != NULL)
+			dce_abm_destroy(&pool->base.multiple_abms[i]);
+	}
+
+	if (pool->base.psr != NULL)
+		dmub_psr_destroy(&pool->base.psr);
+
+	if (pool->base.dccg != NULL)
+		dcn_dccg_destroy(&pool->base.dccg);
+
+	if (pool->base.oem_device != NULL)
+		dal_ddc_service_destroy(&pool->base.oem_device);
+}
+
+
+static bool dcn32_dwbc_create(struct dc_context *ctx, struct resource_pool *pool)
+{
+	int i;
+	uint32_t dwb_count = pool->res_cap->num_dwb;
+
+	for (i = 0; i < dwb_count; i++) {
+		struct dcn30_dwbc *dwbc30 = kzalloc(sizeof(struct dcn30_dwbc),
+						    GFP_KERNEL);
+
+		if (!dwbc30) {
+			dm_error("DC: failed to create dwbc30!\n");
+			return false;
+		}
+
+		dcn30_dwbc_construct(dwbc30, ctx,
+				&dwbc30_regs[i],
+				&dwbc30_shift,
+				&dwbc30_mask,
+				i);
+
+		pool->dwbc[i] = &dwbc30->base;
+	}
+	return true;
+}
+
+static bool dcn32_mmhubbub_create(struct dc_context *ctx, struct resource_pool *pool)
+{
+	int i;
+	uint32_t dwb_count = pool->res_cap->num_dwb;
+
+	for (i = 0; i < dwb_count; i++) {
+		struct dcn30_mmhubbub *mcif_wb30 = kzalloc(sizeof(struct dcn30_mmhubbub),
+						    GFP_KERNEL);
+
+		if (!mcif_wb30) {
+			dm_error("DC: failed to create mcif_wb30!\n");
+			return false;
+		}
+
+		dcn32_mmhubbub_construct(mcif_wb30, ctx,
+				&mcif_wb30_regs[i],
+				&mcif_wb30_shift,
+				&mcif_wb30_mask,
+				i);
+
+		pool->mcif_wb[i] = &mcif_wb30->base;
+	}
+	return true;
+}
+
+static struct display_stream_compressor *dcn32_dsc_create(
+	struct dc_context *ctx, uint32_t inst)
+{
+	struct dcn20_dsc *dsc =
+		kzalloc(sizeof(struct dcn20_dsc), GFP_KERNEL);
+
+	if (!dsc) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	}
+
+	dsc2_construct(dsc, ctx, inst, &dsc_regs[inst], &dsc_shift, &dsc_mask);
+	return &dsc->base;
+}
+
+static void dcn32_destroy_resource_pool(struct resource_pool **pool)
+{
+	struct dcn32_resource_pool *dcn32_pool = TO_DCN32_RES_POOL(*pool);
+
+	dcn32_resource_destruct(dcn32_pool);
+	kfree(dcn32_pool);
+	*pool = NULL;
+}
+
+bool dcn32_acquire_post_bldn_3dlut(
+		struct resource_context *res_ctx,
+		const struct resource_pool *pool,
+		int mpcc_id,
+		struct dc_3dlut **lut,
+		struct dc_transfer_func **shaper)
+{
+	bool ret = false;
+	union dc_3dlut_state *state;
+
+	ASSERT(*lut == NULL && *shaper == NULL);
+	*lut = NULL;
+	*shaper = NULL;
+
+	if (!res_ctx->is_mpc_3dlut_acquired[mpcc_id]) {
+		*lut = pool->mpc_lut[mpcc_id];
+		*shaper = pool->mpc_shaper[mpcc_id];
+		state = &pool->mpc_lut[mpcc_id]->state;
+		res_ctx->is_mpc_3dlut_acquired[mpcc_id] = true;
+		ret = true;
+	}
+	return ret;
+}
+
+bool dcn32_release_post_bldn_3dlut(
+		struct resource_context *res_ctx,
+		const struct resource_pool *pool,
+		struct dc_3dlut **lut,
+		struct dc_transfer_func **shaper)
+{
+	int i;
+	bool ret = false;
+
+	for (i = 0; i < pool->res_cap->num_mpc_3dlut; i++) {
+		if (pool->mpc_lut[i] == *lut && pool->mpc_shaper[i] == *shaper) {
+			res_ctx->is_mpc_3dlut_acquired[i] = false;
+			pool->mpc_lut[i]->state.raw = 0;
+			*lut = NULL;
+			*shaper = NULL;
+			ret = true;
+			break;
+		}
+	}
+	return ret;
+}
+
+/**
+ * dcn32_get_num_free_pipes - Calculate number of free pipes
+ * @dc: current dc state
+ * @context: new dc state
+ *
+ * This function assumes that a "used" pipe is a pipe that has
+ * both a stream and a plane assigned to it.
+ *
+ * Return: Number of free pipes available in the context
+ */
+static unsigned int dcn32_get_num_free_pipes(struct dc *dc, struct dc_state *context)
+{
+	unsigned int i;
+	unsigned int free_pipes = 0;
+	unsigned int num_pipes = 0;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		if (pipe->stream && pipe->plane_state && !pipe->top_pipe) {
+			while (pipe) {
+				num_pipes++;
+				pipe = pipe->bottom_pipe;
+			}
+		}
+	}
+
+	free_pipes = dc->res_pool->pipe_count - num_pipes;
+	return free_pipes;
+}
+
+/**
+ * dcn32_assign_subvp_pipe - Function to decide which pipe will use Sub-VP.
+ * @dc: current dc state
+ * @context: new dc state
+ * @index: dc pipe index for the pipe chosen to have phantom pipes assigned
+ *
+ * We enter this function if we are Sub-VP capable (i.e. enough pipes available)
+ * and regular P-State switching (i.e. VACTIVE/VBLANK) is not supported, or if
+ * we are forcing SubVP P-State switching on the current config.
+ *
+ * The number of pipes used for the chosen surface must be less than or equal to the
+ * number of free pipes available.
+ *
+ * In general we choose surfaces that have ActiveDRAMClockChangeLatencyMargin <= 0 first,
+ * then among those surfaces we choose the one with the smallest VBLANK time. We only consider
+ * surfaces with ActiveDRAMClockChangeLatencyMargin > 0 if we are forcing a Sub-VP config.
+ *
+ * Return: True if a valid pipe assignment was found for Sub-VP. Otherwise false.
+ */
+
+static bool dcn32_assign_subvp_pipe(struct dc *dc,
+		struct dc_state *context,
+		unsigned int *index)
+{
+	unsigned int i, pipe_idx;
+	unsigned int min_vblank_us = INT_MAX;
+	struct vba_vars_st *vba = &context->bw_ctx.dml.vba;
+	bool valid_assignment_found = false;
+	unsigned int free_pipes = dcn32_get_num_free_pipes(dc, context);
+
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		unsigned int num_pipes = 0;
+
+		if (!pipe->stream)
+			continue;
+
+		if (pipe->plane_state && !pipe->top_pipe &&
+				pipe->stream->mall_stream_config.type == SUBVP_NONE) {
+			while (pipe) {
+				num_pipes++;
+				pipe = pipe->bottom_pipe;
+			}
+
+			pipe = &context->res_ctx.pipe_ctx[i];
+			if (num_pipes <= free_pipes) {
+				struct dc_stream_state *stream = pipe->stream;
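+				/*
+				 * Illustrative numbers for the computation below (not taken
+				 * from a real mode): v_total = 2250, v_addressable = 2160,
+				 * h_total = 4400, pix_clk_100hz = 5940000 (594 MHz) gives
+				 * (90 * 4400 / 594000000.0) * 1000000 ~= 667 us of VBLANK.
+				 */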
+				unsigned int vblank_us = ((stream->timing.v_total - stream->timing.v_addressable) *
+							stream->timing.h_total /
+							(double)(stream->timing.pix_clk_100hz * 100)) * 1000000;
+				if (vba->ActiveDRAMClockChangeLatencyMargin[vba->pipe_plane[pipe_idx]] <= 0 &&
+						vblank_us < min_vblank_us) {
+					*index = i;
+					min_vblank_us = vblank_us;
+					valid_assignment_found = true;
+				} else if (vba->ActiveDRAMClockChangeLatencyMargin[vba->pipe_plane[pipe_idx]] > 0 &&
+						dc->debug.force_subvp_mclk_switch && !valid_assignment_found) {
+					// Handle case for forcing Sub-VP config. In this case we can assign
+					// phantom pipes to a surface that has active margin > 0.
+					*index = i;
+					valid_assignment_found = true;
+				}
+			}
+		}
+		pipe_idx++;
+	}
+	return valid_assignment_found;
+}
+
+/**
+ * dcn32_enough_pipes_for_subvp - Function to check if there are "enough" pipes for SubVP.
+ * @dc: current dc state
+ * @context: new dc state
+ *
+ * This function returns true if there are enough free pipes
+ * to create the required phantom pipes for any given stream
+ * (that does not already have a phantom pipe assigned).
+ *
+ * e.g. For a 2 stream config where the first stream uses one
+ * pipe and the second stream uses 2 pipes (i.e. pipe split),
+ * this function will return true because there is 1 remaining
+ * pipe which can be used as the phantom pipe for the non pipe
+ * split pipe.
+ *
+ * Return: True if there are enough free pipes to assign phantom pipes to at least one
+ *         stream that does not already have phantom pipes assigned. Otherwise false.
+ */
+static bool dcn32_enough_pipes_for_subvp(struct dc *dc, struct dc_state *context)
+{
+	unsigned int i, split_cnt, free_pipes;
+	unsigned int min_pipe_split = dc->res_pool->pipe_count + 1; // init as max number of pipes + 1
+	bool subvp_possible = false;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		// Find the minimum pipe split count for non SubVP pipes
+		if (pipe->stream && pipe->plane_state && !pipe->top_pipe &&
+				pipe->stream->mall_stream_config.type == SUBVP_NONE) {
+			split_cnt = 0;
+			while (pipe) {
+				split_cnt++;
+				pipe = pipe->bottom_pipe;
+			}
+
+			if (split_cnt < min_pipe_split)
+				min_pipe_split = split_cnt;
+		}
+	}
+
+	free_pipes = dcn32_get_num_free_pipes(dc, context);
+
+	// SubVP is only possible if at least one pipe is being used (i.e. free_pipes
+	// should not equal the pipe_count)
+	if (free_pipes >= min_pipe_split && free_pipes < dc->res_pool->pipe_count)
+		subvp_possible = true;
+
+	return subvp_possible;
+}
+
+static void dcn32_enable_phantom_plane(struct dc *dc,
+		struct dc_state *context,
+		struct dc_stream_state *phantom_stream,
+		unsigned int dc_pipe_idx)
+{
+	struct dc_plane_state *phantom_plane = NULL;
+	struct dc_plane_state *prev_phantom_plane = NULL;
+	struct pipe_ctx *curr_pipe = &context->res_ctx.pipe_ctx[dc_pipe_idx];
+
+	while (curr_pipe) {
+		if (curr_pipe->top_pipe && curr_pipe->top_pipe->plane_state == curr_pipe->plane_state)
+			phantom_plane = prev_phantom_plane;
+		else
+			phantom_plane = dc_create_plane_state(dc);
+
+		memcpy(&phantom_plane->address, &curr_pipe->plane_state->address, sizeof(phantom_plane->address));
+		memcpy(&phantom_plane->scaling_quality, &curr_pipe->plane_state->scaling_quality,
+				sizeof(phantom_plane->scaling_quality));
+		memcpy(&phantom_plane->src_rect, &curr_pipe->plane_state->src_rect, sizeof(phantom_plane->src_rect));
+		memcpy(&phantom_plane->dst_rect, &curr_pipe->plane_state->dst_rect, sizeof(phantom_plane->dst_rect));
+		memcpy(&phantom_plane->clip_rect, &curr_pipe->plane_state->clip_rect, sizeof(phantom_plane->clip_rect));
+		memcpy(&phantom_plane->plane_size, &curr_pipe->plane_state->plane_size,
+				sizeof(phantom_plane->plane_size));
+		memcpy(&phantom_plane->tiling_info, &curr_pipe->plane_state->tiling_info,
+				sizeof(phantom_plane->tiling_info));
+		memcpy(&phantom_plane->dcc, &curr_pipe->plane_state->dcc, sizeof(phantom_plane->dcc));
+		phantom_plane->format = curr_pipe->plane_state->format;
+		phantom_plane->rotation = curr_pipe->plane_state->rotation;
+		phantom_plane->visible = curr_pipe->plane_state->visible;
+
+		/* Shadow pipe has small viewport. */
+		phantom_plane->clip_rect.y = 0;
+		phantom_plane->clip_rect.height = phantom_stream->timing.v_addressable;
+
+		dc_add_plane_to_context(dc, phantom_stream, phantom_plane, context);
+
+		curr_pipe = curr_pipe->bottom_pipe;
+		prev_phantom_plane = phantom_plane;
+	}
+}
+
+/**
+ * ***************************************************************************************
+ * dcn32_set_phantom_stream_timing: Set timing params for the phantom stream
+ *
+ * Set timing params of the phantom stream based on calculated output from DML.
+ * This function first gets the DML pipe index using the DC pipe index, then
+ * calls into DML (get_subviewport_lines_needed_in_mall) to get the number of
+ * lines required for SubVP MCLK switching and assigns to the phantom stream
+ * accordingly.
+ *
+ * - The number of SubVP lines calculated in DML does not take into account
+ * FW processing delays and required pstate allow width, so we must include
+ * that separately.
+ *
+ * - Set phantom backporch = vstartup of main pipe
+ *
+ * @param [in] dc: current dc state
+ * @param [in] context: new dc state
+ * @param [in] ref_pipe: Main pipe for the phantom stream
+ * @param [in] pipes: DML pipe params
+ * @param [in] pipe_cnt: number of DML pipes
+ * @param [in] dc_pipe_idx: DC pipe index for the main pipe (i.e. ref_pipe)
+ *
+ * @return: void
+ *
+ * ***************************************************************************************
+ */
+static void dcn32_set_phantom_stream_timing(struct dc *dc,
+		struct dc_state *context,
+		struct pipe_ctx *ref_pipe,
+		struct dc_stream_state *phantom_stream,
+		display_e2e_pipe_params_st *pipes,
+		unsigned int pipe_cnt,
+		unsigned int dc_pipe_idx)
+{
+	unsigned int i, pipe_idx;
+	struct pipe_ctx *pipe;
+	uint32_t phantom_vactive, phantom_bp, pstate_width_fw_delay_lines;
+	unsigned int vlevel = context->bw_ctx.dml.vba.VoltageLevel;
+	unsigned int dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
+	unsigned int socclk = context->bw_ctx.dml.vba.SOCCLKPerState[vlevel];
+
+	// Find DML pipe index (pipe_idx) using dc_pipe_idx
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+		pipe = &context->res_ctx.pipe_ctx[i];
+
+		if (!pipe->stream)
+			continue;
+
+		if (i == dc_pipe_idx)
+			break;
+
+		pipe_idx++;
+	}
+
+	// Calculate lines required for pstate allow width and FW processing delays
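+	// i.e. (delay in seconds) * pixel clock in Hz / h_total = number of lines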
+	pstate_width_fw_delay_lines = ((double)(dc->caps.subvp_fw_processing_delay_us +
+			dc->caps.subvp_pstate_allow_width_us) / 1000000) *
+			(ref_pipe->stream->timing.pix_clk_100hz * 100) /
+			(double)ref_pipe->stream->timing.h_total;
+
+	// Update clks_cfg for calling into recalculate
+	pipes[0].clks_cfg.voltage = vlevel;
+	pipes[0].clks_cfg.dcfclk_mhz = dcfclk;
+	pipes[0].clks_cfg.socclk_mhz = socclk;
+
+	// DML calculation for MALL region doesn't take into account FW delay
+	// and required pstate allow width for multi-display cases
+	phantom_vactive = get_subviewport_lines_needed_in_mall(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx) +
+				pstate_width_fw_delay_lines;
+
+	// For backporch of phantom pipe, use vstartup of the main pipe
+	phantom_bp = get_vstartup(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
+
+	phantom_stream->dst.y = 0;
+	phantom_stream->dst.height = phantom_vactive;
+	phantom_stream->src.y = 0;
+	phantom_stream->src.height = phantom_vactive;
+
+	phantom_stream->timing.v_addressable = phantom_vactive;
+	phantom_stream->timing.v_front_porch = 1;
+	phantom_stream->timing.v_total = phantom_stream->timing.v_addressable +
+						phantom_stream->timing.v_front_porch +
+						phantom_stream->timing.v_sync_width +
+						phantom_bp;
+}
+
+static struct dc_stream_state *dcn32_enable_phantom_stream(struct dc *dc,
+		struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		unsigned int pipe_cnt,
+		unsigned int dc_pipe_idx)
+{
+	struct dc_stream_state *phantom_stream = NULL;
+	struct pipe_ctx *ref_pipe = &context->res_ctx.pipe_ctx[dc_pipe_idx];
+
+	phantom_stream = dc_create_stream_for_sink(ref_pipe->stream->sink);
+	phantom_stream->signal = SIGNAL_TYPE_VIRTUAL;
+	phantom_stream->dpms_off = true;
+	phantom_stream->mall_stream_config.type = SUBVP_PHANTOM;
+	phantom_stream->mall_stream_config.paired_stream = ref_pipe->stream;
+	ref_pipe->stream->mall_stream_config.type = SUBVP_MAIN;
+	ref_pipe->stream->mall_stream_config.paired_stream = phantom_stream;
+
+	/* stream has limited viewport and small timing */
+	memcpy(&phantom_stream->timing, &ref_pipe->stream->timing, sizeof(phantom_stream->timing));
+	memcpy(&phantom_stream->src, &ref_pipe->stream->src, sizeof(phantom_stream->src));
+	memcpy(&phantom_stream->dst, &ref_pipe->stream->dst, sizeof(phantom_stream->dst));
+	dcn32_set_phantom_stream_timing(dc, context, ref_pipe, phantom_stream, pipes, pipe_cnt, dc_pipe_idx);
+
+	dc_add_stream_to_ctx(dc, context, phantom_stream);
+	return phantom_stream;
+}
+
+void dcn32_remove_phantom_pipes(struct dc *dc, struct dc_state *context)
+{
+	int i;
+	bool removed_pipe = false;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		// build scaling params for phantom pipes
+		if (pipe->plane_state && pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_PHANTOM) {
+			dc_rem_all_planes_for_stream(dc, pipe->stream, context);
+			dc_remove_stream_from_ctx(dc, context, pipe->stream);
+			removed_pipe = true;
+		}
+
+		// Clear all phantom stream info
+		if (pipe->stream) {
+			pipe->stream->mall_stream_config.type = SUBVP_NONE;
+			pipe->stream->mall_stream_config.paired_stream = NULL;
+		}
+	}
+	if (removed_pipe)
+		dc->hwss.apply_ctx_to_hw(dc, context);
+}
+
+/* TODO: Input to this function should indicate which pipe indexes (or streams)
+ * require a phantom pipe / stream
+ */
+void dcn32_add_phantom_pipes(struct dc *dc, struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		unsigned int pipe_cnt,
+		unsigned int index)
+{
+	struct dc_stream_state *phantom_stream = NULL;
+	unsigned int i;
+
+	// The index of the DC pipe passed into this function is guaranteed to
+	// be a valid candidate for SubVP (i.e. it has a plane and a stream, doesn't
+	// already have phantom pipes assigned, etc.) by previous checks.
+	phantom_stream = dcn32_enable_phantom_stream(dc, context, pipes, pipe_cnt, index);
+	dcn32_enable_phantom_plane(dc, context, phantom_stream, index);
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		// Build scaling params for phantom pipes which were newly added.
+		// We determine which phantom pipes were added by comparing with
+		// the phantom stream.
+		if (pipe->plane_state && pipe->stream && pipe->stream == phantom_stream &&
+				pipe->stream->mall_stream_config.type == SUBVP_PHANTOM) {
+			pipe->stream->use_dynamic_meta = false;
+			pipe->plane_state->flip_immediate = false;
+			if (!resource_build_scaling_params(pipe)) {
+				// Log / remove phantom pipes since we failed to build scaling params
+			}
+		}
+	}
+}
+
+static bool dcn32_split_stream_for_mpc_or_odm(
+		const struct dc *dc,
+		struct resource_context *res_ctx,
+		struct pipe_ctx *pri_pipe,
+		struct pipe_ctx *sec_pipe,
+		bool odm)
+{
+	int pipe_idx = sec_pipe->pipe_idx;
+	const struct resource_pool *pool = dc->res_pool;
+
+	if (pri_pipe->plane_state) {
+		/* ODM + window MPO, where MPO window is on left half only */
+		if (pri_pipe->plane_state->clip_rect.x + pri_pipe->plane_state->clip_rect.width <=
+				pri_pipe->stream->src.x + pri_pipe->stream->src.width/2)
+			return true;
+
+		/* ODM + window MPO, where MPO window is on right half only */
+		if (pri_pipe->plane_state->clip_rect.x >= pri_pipe->stream->src.width/2)
+			return true;
+	}
+
+	*sec_pipe = *pri_pipe;
+
+	sec_pipe->pipe_idx = pipe_idx;
+	sec_pipe->plane_res.mi = pool->mis[pipe_idx];
+	sec_pipe->plane_res.hubp = pool->hubps[pipe_idx];
+	sec_pipe->plane_res.ipp = pool->ipps[pipe_idx];
+	sec_pipe->plane_res.xfm = pool->transforms[pipe_idx];
+	sec_pipe->plane_res.dpp = pool->dpps[pipe_idx];
+	sec_pipe->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+	sec_pipe->stream_res.dsc = NULL;
+	if (odm) {
+		if (pri_pipe->next_odm_pipe) {
+			ASSERT(pri_pipe->next_odm_pipe != sec_pipe);
+			sec_pipe->next_odm_pipe = pri_pipe->next_odm_pipe;
+			sec_pipe->next_odm_pipe->prev_odm_pipe = sec_pipe;
+		}
+		if (pri_pipe->top_pipe && pri_pipe->top_pipe->next_odm_pipe) {
+			pri_pipe->top_pipe->next_odm_pipe->bottom_pipe = sec_pipe;
+			sec_pipe->top_pipe = pri_pipe->top_pipe->next_odm_pipe;
+		}
+		if (pri_pipe->bottom_pipe && pri_pipe->bottom_pipe->next_odm_pipe) {
+			pri_pipe->bottom_pipe->next_odm_pipe->top_pipe = sec_pipe;
+			sec_pipe->bottom_pipe = pri_pipe->bottom_pipe->next_odm_pipe;
+		}
+		pri_pipe->next_odm_pipe = sec_pipe;
+		sec_pipe->prev_odm_pipe = pri_pipe;
+		ASSERT(sec_pipe->top_pipe == NULL);
+
+		if (!sec_pipe->top_pipe)
+			sec_pipe->stream_res.opp = pool->opps[pipe_idx];
+		else
+			sec_pipe->stream_res.opp = sec_pipe->top_pipe->stream_res.opp;
+		if (sec_pipe->stream->timing.flags.DSC == 1) {
+			dcn20_acquire_dsc(dc, res_ctx, &sec_pipe->stream_res.dsc, pipe_idx);
+			ASSERT(sec_pipe->stream_res.dsc);
+			if (sec_pipe->stream_res.dsc == NULL)
+				return false;
+		}
+	} else {
+		if (pri_pipe->bottom_pipe) {
+			ASSERT(pri_pipe->bottom_pipe != sec_pipe);
+			sec_pipe->bottom_pipe = pri_pipe->bottom_pipe;
+			sec_pipe->bottom_pipe->top_pipe = sec_pipe;
+		}
+		pri_pipe->bottom_pipe = sec_pipe;
+		sec_pipe->top_pipe = pri_pipe;
+
+		ASSERT(pri_pipe->plane_state);
+	}
+
+	return true;
+}
+
+static struct pipe_ctx *dcn32_find_split_pipe(
+		struct dc *dc,
+		struct dc_state *context,
+		int old_index)
+{
+	struct pipe_ctx *pipe = NULL;
+	int i;
+
+	if (old_index >= 0 && context->res_ctx.pipe_ctx[old_index].stream == NULL) {
+		pipe = &context->res_ctx.pipe_ctx[old_index];
+		pipe->pipe_idx = old_index;
+	}
+
+	if (!pipe)
+		for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
+			if (dc->current_state->res_ctx.pipe_ctx[i].top_pipe == NULL
+					&& dc->current_state->res_ctx.pipe_ctx[i].prev_odm_pipe == NULL) {
+				if (context->res_ctx.pipe_ctx[i].stream == NULL) {
+					pipe = &context->res_ctx.pipe_ctx[i];
+					pipe->pipe_idx = i;
+					break;
+				}
+			}
+		}
+
+	/*
+	 * May need to fix pipes getting tossed from 1 opp to another on flip
+	 * Add for debugging transient underflow during topology updates:
+	 * ASSERT(pipe);
+	 */
+	if (!pipe)
+		for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
+			if (context->res_ctx.pipe_ctx[i].stream == NULL) {
+				pipe = &context->res_ctx.pipe_ctx[i];
+				pipe->pipe_idx = i;
+				break;
+			}
+		}
+
+	return pipe;
+}
+
+
+/**
+ * ***************************************************************************************
+ * subvp_subvp_schedulable: Determine if SubVP + SubVP config is schedulable
+ *
+ * High level algorithm:
+ * 1. Find longest microschedule length (in us) between the two SubVP pipes
+ * 2. Check if the worst case overlap (VBLANK in middle of ACTIVE) for both
+ * pipes still allows for the maximum microschedule to fit in the active
+ * region for both pipes.
+ *
+ * @param [in] dc: current dc state
+ * @param [in] context: new dc state
+ *
+ * @return: bool - True if the SubVP + SubVP config is schedulable, false otherwise
+ *
+ * ***************************************************************************************
+ */
+static bool subvp_subvp_schedulable(struct dc *dc, struct dc_state *context)
+{
+	struct pipe_ctx *subvp_pipes[2];
+	struct dc_stream_state *phantom = NULL;
+	uint32_t microschedule_lines = 0;
+	uint32_t index = 0;
+	uint32_t i;
+	uint32_t max_microschedule_us = 0;
+	int32_t vactive1_us, vactive2_us, vblank1_us, vblank2_us;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		uint32_t time_us = 0;
+
+		/* Loop to calculate the maximum microschedule time between the two SubVP pipes,
+		 * and also to store the two main SubVP pipe pointers in subvp_pipes[2].
+		 */
+		if (pipe->stream && pipe->plane_state && !pipe->top_pipe &&
+				pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
+			phantom = pipe->stream->mall_stream_config.paired_stream;
+			microschedule_lines = (phantom->timing.v_total - phantom->timing.v_front_porch) +
+					phantom->timing.v_addressable;
+
+			// Round up when calculating microschedule time
+			time_us = ((microschedule_lines * phantom->timing.h_total +
+					phantom->timing.pix_clk_100hz * 100 - 1) /
+					(double)(phantom->timing.pix_clk_100hz * 100)) * 1000000 +
+						dc->caps.subvp_prefetch_end_to_mall_start_us +
+						dc->caps.subvp_fw_processing_delay_us;
+			if (time_us > max_microschedule_us)
+				max_microschedule_us = time_us;
+
+			subvp_pipes[index] = pipe;
+			index++;
+
+			// Maximum 2 SubVP pipes
+			if (index == 2)
+				break;
+		}
+	}
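+	// Convert VACTIVE and VBLANK of each SubVP display to microseconds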
+	vactive1_us = ((subvp_pipes[0]->stream->timing.v_addressable * subvp_pipes[0]->stream->timing.h_total) /
+			(double)(subvp_pipes[0]->stream->timing.pix_clk_100hz * 100)) * 1000000;
+	vactive2_us = ((subvp_pipes[1]->stream->timing.v_addressable * subvp_pipes[1]->stream->timing.h_total) /
+				(double)(subvp_pipes[1]->stream->timing.pix_clk_100hz * 100)) * 1000000;
+	vblank1_us = (((subvp_pipes[0]->stream->timing.v_total - subvp_pipes[0]->stream->timing.v_addressable) *
+			subvp_pipes[0]->stream->timing.h_total) /
+			(double)(subvp_pipes[0]->stream->timing.pix_clk_100hz * 100)) * 1000000;
+	vblank2_us = (((subvp_pipes[1]->stream->timing.v_total - subvp_pipes[1]->stream->timing.v_addressable) *
+			subvp_pipes[1]->stream->timing.h_total) /
+			(double)(subvp_pipes[1]->stream->timing.pix_clk_100hz * 100)) * 1000000;
+
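+	// Worst case: one display's VBLANK falls in the middle of the other
+	// display's VACTIVE, leaving half of (VACTIVE - other VBLANK) on either
+	// side for the microschedule to complete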
+	if ((vactive1_us - vblank2_us) / 2 > max_microschedule_us &&
+			(vactive2_us - vblank1_us) / 2 > max_microschedule_us)
+		return true;
+
+	return false;
+}
+
+/**
+ * ***************************************************************************************
+ * subvp_drr_schedulable: Determine if SubVP + DRR config is schedulable
+ *
+ * High level algorithm:
+ * 1. Get timing for SubVP pipe, phantom pipe, and DRR pipe
+ * 2. Determine the frame time for the DRR display when adding required margin for MCLK switching
+ * (the margin is equal to the MALL region + DRR margin (500us))
+ * 3. If (SubVP Active - Prefetch > Stretched DRR frame + max(MALL region, Stretched DRR frame))
+ * then report the configuration as supported
+ *
+ * @param [in] dc: current dc state
+ * @param [in] context: new dc state
+ * @param [in] drr_pipe: DRR pipe_ctx for the SubVP + DRR config
+ *
+ * @return: bool - True if the SubVP + DRR config is schedulable, false otherwise
+ *
+ * ***************************************************************************************
+ */
+static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context, struct pipe_ctx *drr_pipe)
+{
+	bool schedulable = false;
+	uint32_t i;
+	struct pipe_ctx *pipe = NULL;
+	struct dc_crtc_timing *main_timing = NULL;
+	struct dc_crtc_timing *phantom_timing = NULL;
+	struct dc_crtc_timing *drr_timing = NULL;
+	int16_t prefetch_us = 0;
+	int16_t mall_region_us = 0;
+	int16_t drr_frame_us = 0;	// nominal frame time
+	int16_t subvp_active_us = 0;
+	int16_t stretched_drr_us = 0;
+	int16_t drr_stretched_vblank_us = 0;
+	int16_t max_vblank_mallregion = 0;
+
+	// Find SubVP pipe
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		pipe = &context->res_ctx.pipe_ctx[i];
+
+		// We check for master pipe, but it shouldn't matter since we only need
+		// the pipe for timing info (stream should be same for any pipe splits)
+		if (!pipe->stream || !pipe->plane_state || pipe->top_pipe || pipe->prev_odm_pipe)
+			continue;
+
+		// Find the SubVP pipe
+		if (pipe->stream->mall_stream_config.type == SUBVP_MAIN)
+			break;
+	}
+
+	main_timing = &pipe->stream->timing;
+	phantom_timing = &pipe->stream->mall_stream_config.paired_stream->timing;
+	drr_timing = &drr_pipe->stream->timing;
+	prefetch_us = (phantom_timing->v_total - phantom_timing->v_front_porch) * phantom_timing->h_total /
+			(double)(phantom_timing->pix_clk_100hz * 100) * 1000000 +
+			dc->caps.subvp_prefetch_end_to_mall_start_us;
+	subvp_active_us = main_timing->v_addressable * main_timing->h_total /
+			(double)(main_timing->pix_clk_100hz * 100) * 1000000;
+	drr_frame_us = drr_timing->v_total * drr_timing->h_total /
+			(double)(drr_timing->pix_clk_100hz * 100) * 1000000;
+	// P-State allow width and FW delays are already included in phantom_timing->v_addressable
+	mall_region_us = phantom_timing->v_addressable * phantom_timing->h_total /
+			(double)(phantom_timing->pix_clk_100hz * 100) * 1000000;
+	stretched_drr_us = drr_frame_us + mall_region_us + SUBVP_DRR_MARGIN_US;
+	drr_stretched_vblank_us = (drr_timing->v_total - drr_timing->v_addressable) * drr_timing->h_total /
+			(double)(drr_timing->pix_clk_100hz * 100) * 1000000 + (stretched_drr_us - drr_frame_us);
+	max_vblank_mallregion = drr_stretched_vblank_us > mall_region_us ? drr_stretched_vblank_us : mall_region_us;
+
+	/* We consider SubVP + DRR schedulable if the stretched frame duration of the DRR display (i.e. the
+	 * highest refresh rate + margin that can support UCLK P-State switch) passes the static analysis
+	 * for VBLANK: (VACTIVE region of the SubVP pipe can fit the MALL prefetch, VBLANK frame time,
+	 * and the max of (VBLANK blanking time, MALL region)).
+	 */
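+	// stretched_drr_us must not exceed the frame time at the panel's minimum
+	// refresh rate, i.e. (1 / min_refresh_in_uhz) * 1000000 * 1000000 us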
+	if (stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
+			subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
+		schedulable = true;
+
+	return schedulable;
+}
+
+/**
+ * ***************************************************************************************
+ * subvp_vblank_schedulable: Determine if SubVP + VBLANK config is schedulable
+ *
+ * High level algorithm:
+ * 1. Get timing for SubVP pipe, phantom pipe, and VBLANK pipe
+ * 2. If (SubVP Active - Prefetch > Vblank Frame Time + max(MALL region, Vblank blanking time))
+ * then report the configuration as supported
+ * 3. If the VBLANK display is DRR, then take the DRR static schedulability path
+ *
+ * @param [in] dc: current dc state
+ * @param [in] context: new dc state
+ *
+ * @return: bool - True if the SubVP + VBLANK/DRR config is schedulable, false otherwise
+ *
+ * ***************************************************************************************
+ */
+static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
+{
+	struct pipe_ctx *pipe = NULL;
+	struct pipe_ctx *subvp_pipe = NULL;
+	bool found = false;
+	bool schedulable = false;
+	uint32_t i = 0;
+	uint8_t vblank_index = 0;
+	int16_t prefetch_us = 0;
+	int16_t mall_region_us = 0;
+	int16_t vblank_frame_us = 0;
+	int16_t subvp_active_us = 0;
+	int16_t vblank_blank_us = 0;
+	int16_t max_vblank_mallregion = 0;
+	struct dc_crtc_timing *main_timing = NULL;
+	struct dc_crtc_timing *phantom_timing = NULL;
+	struct dc_crtc_timing *vblank_timing = NULL;
+
+	/* For SubVP + VBLANK/DRR cases, we assume there can only be
+	 * a single VBLANK/DRR display. If DML outputs SubVP + VBLANK
+	 * is supported, it is either a single VBLANK case or two VBLANK
+	 * displays which are synchronized (in which case they have identical
+	 * timings).
+	 */
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		pipe = &context->res_ctx.pipe_ctx[i];
+
+		// We check for master pipe, but it shouldn't matter since we only need
+		// the pipe for timing info (stream should be same for any pipe splits)
+		if (!pipe->stream || !pipe->plane_state || pipe->top_pipe || pipe->prev_odm_pipe)
+			continue;
+
+		if (!found && pipe->stream->mall_stream_config.type == SUBVP_NONE) {
+			// Found pipe which is not SubVP or Phantom (i.e. the VBLANK pipe).
+			vblank_index = i;
+			found = true;
+		}
+
+		if (!subvp_pipe && pipe->stream->mall_stream_config.type == SUBVP_MAIN)
+			subvp_pipe = pipe;
+	}
+	// Use ignore_msa_timing_param flag to identify as DRR
+	if (found && pipe->stream->ignore_msa_timing_param) {
+		// SUBVP + DRR case
+		schedulable = subvp_drr_schedulable(dc, context, &context->res_ctx.pipe_ctx[vblank_index]);
+	} else if (found) {
+		main_timing = &subvp_pipe->stream->timing;
+		phantom_timing = &subvp_pipe->stream->mall_stream_config.paired_stream->timing;
+		vblank_timing = &context->res_ctx.pipe_ctx[vblank_index].stream->timing;
+		// Prefetch time is equal to VACTIVE + BP + VSYNC of the phantom pipe
+		// Also include the prefetch end to mallstart delay time
+		prefetch_us = (phantom_timing->v_total - phantom_timing->v_front_porch) * phantom_timing->h_total /
+				(double)(phantom_timing->pix_clk_100hz * 100) * 1000000 +
+				dc->caps.subvp_prefetch_end_to_mall_start_us;
+		// P-State allow width and FW delays are already included in phantom_timing->v_addressable
+		mall_region_us = phantom_timing->v_addressable * phantom_timing->h_total /
+				(double)(phantom_timing->pix_clk_100hz * 100) * 1000000;
+		vblank_frame_us = vblank_timing->v_total * vblank_timing->h_total /
+				(double)(vblank_timing->pix_clk_100hz * 100) * 1000000;
+		vblank_blank_us = (vblank_timing->v_total - vblank_timing->v_addressable) * vblank_timing->h_total /
+				(double)(vblank_timing->pix_clk_100hz * 100) * 1000000;
+		subvp_active_us = main_timing->v_addressable * main_timing->h_total /
+				(double)(main_timing->pix_clk_100hz * 100) * 1000000;
+		max_vblank_mallregion = vblank_blank_us > mall_region_us ? vblank_blank_us : mall_region_us;
+
+		// Schedulable if VACTIVE region of the SubVP pipe can fit the MALL prefetch, VBLANK frame time,
+		// and the max of (VBLANK blanking time, MALL region)
+		// TODO: Possibly add some margin (i.e. the below conditions should be [...] > X instead of [...] > 0)
+		if (subvp_active_us - prefetch_us - vblank_frame_us - max_vblank_mallregion > 0)
+			schedulable = true;
+	}
+	return schedulable;
+}
+
+/**
+ * ********************************************************************************************
+ * subvp_validate_static_schedulability: Check which SubVP case is calculated and handle
+ * static analysis based on the case.
+ *
+ * Three cases:
+ * 1. SubVP + SubVP
+ * 2. SubVP + VBLANK (DRR checked internally)
+ * 3. SubVP + VACTIVE (currently unsupported)
+ *
+ * @param [in] dc: current dc state
+ * @param [in] context: new dc state
+ * @param [in] vlevel: Voltage level calculated by DML
+ *
+ * @return: bool - True if statically schedulable, false otherwise
+ *
+ * ********************************************************************************************
+ */
+static bool subvp_validate_static_schedulability(struct dc *dc,
+				struct dc_state *context,
+				int vlevel)
+{
+	bool schedulable = true;	// true by default for single display case
+	struct vba_vars_st *vba = &context->bw_ctx.dml.vba;
+	uint32_t i, pipe_idx;
+	uint8_t subvp_count = 0;
+	uint8_t vactive_count = 0;
+
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		if (!pipe->stream)
+			continue;
+
+		if (pipe->plane_state && !pipe->top_pipe &&
+				pipe->stream->mall_stream_config.type == SUBVP_MAIN)
+			subvp_count++;
+
+		// Count how many planes are capable of VACTIVE switching (SubVP + VACTIVE unsupported)
+		if (vba->ActiveDRAMClockChangeLatencyMargin[vba->pipe_plane[pipe_idx]] > 0) {
+			vactive_count++;
+		}
+		pipe_idx++;
+	}
+
+	if (subvp_count == 2) {
+		// Static schedulability check for SubVP + SubVP case
+		schedulable = subvp_subvp_schedulable(dc, context);
+	} else if (vba->DRAMClockChangeSupport[vlevel][vba->maxMpcComb] == dm_dram_clock_change_vblank_w_mall_sub_vp) {
+		// Static schedulability check for SubVP + VBLANK case. Also handle the case where
+		// DML outputs SubVP + VBLANK + VACTIVE (DML will report as SubVP + VBLANK)
+		if (vactive_count > 0)
+			schedulable = false;
+		else
+			schedulable = subvp_vblank_schedulable(dc, context);
+	} else if (vba->DRAMClockChangeSupport[vlevel][vba->maxMpcComb] == dm_dram_clock_change_vactive_w_mall_sub_vp) {
+		// SubVP + VACTIVE currently unsupported
+		schedulable = false;
+	}
+	return schedulable;
+}
+
+static void dcn32_full_validate_bw_helper(struct dc *dc,
+		struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int *vlevel,
+		int *split,
+		bool *merge,
+		int *pipe_cnt)
+{
+	struct vba_vars_st *vba = &context->bw_ctx.dml.vba;
+	unsigned int dc_pipe_idx = 0;
+	bool found_supported_config = false;
+	struct pipe_ctx *pipe = NULL;
+	uint32_t non_subvp_pipes = 0;
+	bool drr_pipe_found = false;
+	uint32_t drr_pipe_index = 0;
+	uint32_t i = 0;
+
+	/*
+	 * DML favors voltage over p-state, but we're more interested in
+	 * supporting p-state over voltage. We can't support p-state in
+	 * prefetch mode > 0 so try capping the prefetch mode to start.
+	 */
+	context->bw_ctx.dml.soc.allow_for_pstate_or_stutter_in_vblank_final =
+			dm_prefetch_support_uclk_fclk_and_stutter;
+	*vlevel = dml_get_voltage_level(&context->bw_ctx.dml, pipes, *pipe_cnt);
+	/* This may adjust vlevel and maxMpcComb */
+	if (*vlevel < context->bw_ctx.dml.soc.num_states)
+		*vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, *vlevel, split, merge);
+
+	/* Conditions for setting up phantom pipes for SubVP:
+	 * 1. Not force disable SubVP
+	 * 2. Full update (i.e. !fast_validate)
+	 * 3. Enough pipes are available to support SubVP (TODO: Which pipes will use VACTIVE / VBLANK / SUBVP?)
+	 * 4. Display configuration passes validation
+	 * 5. (Config doesn't support MCLK in VACTIVE/VBLANK || dc->debug.force_subvp_mclk_switch)
+	 */
+	if (!dc->debug.force_disable_subvp &&
+			(*vlevel == context->bw_ctx.dml.soc.num_states ||
+			vba->DRAMClockChangeSupport[*vlevel][vba->maxMpcComb] == dm_dram_clock_change_unsupported ||
+			dc->debug.force_subvp_mclk_switch)) {
+
+		while (!found_supported_config && dcn32_enough_pipes_for_subvp(dc, context) &&
+				dcn32_assign_subvp_pipe(dc, context, &dc_pipe_idx)) {
+
+			dc->res_pool->funcs->add_phantom_pipes(dc, context, pipes, *pipe_cnt, dc_pipe_idx);
+
+			*pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, false);
+			*vlevel = dml_get_voltage_level(&context->bw_ctx.dml, pipes, *pipe_cnt);
+
+			if (*vlevel < context->bw_ctx.dml.soc.num_states &&
+					vba->DRAMClockChangeSupport[*vlevel][vba->maxMpcComb] != dm_dram_clock_change_unsupported
+					&& subvp_validate_static_schedulability(dc, context, *vlevel)) {
+				found_supported_config = true;
+			} else if (*vlevel < context->bw_ctx.dml.soc.num_states &&
+					vba->DRAMClockChangeSupport[*vlevel][vba->maxMpcComb] == dm_dram_clock_change_unsupported) {
+				/* Case where 1 SubVP is added, and DML reports MCLK unsupported. This handles
+				 * the case for SubVP + DRR, where the DRR display does not support MCLK switch
+				 * at its native refresh rate / timing.
+				 */
+				for (i = 0; i < dc->res_pool->pipe_count; i++) {
+					pipe = &context->res_ctx.pipe_ctx[i];
+					if (pipe->stream && pipe->plane_state && !pipe->top_pipe &&
+							pipe->stream->mall_stream_config.type == SUBVP_NONE) {
+						non_subvp_pipes++;
+						// Use ignore_msa_timing_param flag to identify as DRR
+						if (pipe->stream->ignore_msa_timing_param) {
+							drr_pipe_found = true;
+							drr_pipe_index = i;
+						}
+					}
+				}
+				// If there is only 1 remaining non SubVP pipe that is DRR, check static
+				// schedulability for SubVP + DRR.
+				if (non_subvp_pipes == 1 && drr_pipe_found) {
+					found_supported_config = subvp_drr_schedulable(dc,
+							context, &context->res_ctx.pipe_ctx[drr_pipe_index]);
+				}
+			}
+		}
+
+		// If SubVP pipe config is unsupported (or cannot be used for UCLK switching)
+		// remove phantom pipes and repopulate dml pipes
+		if (!found_supported_config) {
+			dc->res_pool->funcs->remove_phantom_pipes(dc, context);
+			*pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, false);
+		} else {
+			// only call dcn20_validate_apply_pipe_split_flags if we found a supported config
+			memset(split, 0, MAX_PIPES * sizeof(int));
+			memset(merge, 0, MAX_PIPES * sizeof(bool));
+			*vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, *vlevel, split, merge);
+
+			// If found a supported SubVP config, phantom pipes were added to the context.
+			// Program timing for the phantom pipes.
+			dc->hwss.apply_ctx_to_hw(dc, context);
+		}
+	}
+}
+
+static bool dcn32_internal_validate_bw(
+		struct dc *dc,
+		struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int *pipe_cnt_out,
+		int *vlevel_out,
+		bool fast_validate)
+{
+	bool out = false;
+	bool repopulate_pipes = false;
+	int split[MAX_PIPES] = { 0 };
+	bool merge[MAX_PIPES] = { false };
+	bool newly_split[MAX_PIPES] = { false };
+	int pipe_cnt, i, pipe_idx, vlevel;
+	struct vba_vars_st *vba = &context->bw_ctx.dml.vba;
+
+	ASSERT(pipes);
+	if (!pipes)
+		return false;
+
+	// For each full update, remove all existing phantom pipes first
+	dc->res_pool->funcs->remove_phantom_pipes(dc, context);
+
+	dc->res_pool->funcs->update_soc_for_wm_a(dc, context);
+
+	pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
+
+	if (!pipe_cnt) {
+		out = true;
+		goto validate_out;
+	}
+
+	dml_log_pipe_params(&context->bw_ctx.dml, pipes, pipe_cnt);
+
+	if (!fast_validate) {
+		dcn32_full_validate_bw_helper(dc, context, pipes, &vlevel, split, merge, &pipe_cnt);
+	}
+
+	if (fast_validate || vlevel == context->bw_ctx.dml.soc.num_states ||
+			vba->DRAMClockChangeSupport[vlevel][vba->maxMpcComb] == dm_dram_clock_change_unsupported) {
+		/*
+		 * If mode is unsupported or there's still no p-state support then
+		 * fall back to favoring voltage.
+		 *
+		 * We don't actually support prefetch mode 2, so require that we
+		 * at least support prefetch mode 1.
+		 */
+		context->bw_ctx.dml.soc.allow_for_pstate_or_stutter_in_vblank_final =
+				dm_prefetch_support_stutter;
+
+		vlevel = dml_get_voltage_level(&context->bw_ctx.dml, pipes, pipe_cnt);
+		if (vlevel < context->bw_ctx.dml.soc.num_states) {
+			memset(split, 0, MAX_PIPES * sizeof(int));
+			memset(merge, 0, MAX_PIPES * sizeof(bool));
+			vlevel = dcn20_validate_apply_pipe_split_flags(dc, context, vlevel, split, merge);
+		}
+	}
+
+	dml_log_mode_support_params(&context->bw_ctx.dml);
+
+	if (vlevel == context->bw_ctx.dml.soc.num_states)
+		goto validate_fail;
+
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		struct pipe_ctx *mpo_pipe = pipe->bottom_pipe;
+
+		if (!pipe->stream)
+			continue;
+
+		/* We only support full screen mpo with ODM */
+		if (vba->ODMCombineEnabled[vba->pipe_plane[pipe_idx]] != dm_odm_combine_mode_disabled
+				&& pipe->plane_state && mpo_pipe
+				&& memcmp(&mpo_pipe->plane_res.scl_data.recout,
+						&pipe->plane_res.scl_data.recout,
+						sizeof(struct rect)) != 0) {
+			ASSERT(mpo_pipe->plane_state != pipe->plane_state);
+			goto validate_fail;
+		}
+		pipe_idx++;
+	}
+
+	/* merge pipes if necessary */
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		/* skip pipes that don't need merging */
+		if (!merge[i])
+			continue;
+
+		/* if ODM merge we ignore mpc tree, mpo pipes will have their own flags */
+		if (pipe->prev_odm_pipe) {
+			/* split off odm pipe */
+			pipe->prev_odm_pipe->next_odm_pipe = pipe->next_odm_pipe;
+			if (pipe->next_odm_pipe)
+				pipe->next_odm_pipe->prev_odm_pipe = pipe->prev_odm_pipe;
+
+			pipe->bottom_pipe = NULL;
+			pipe->next_odm_pipe = NULL;
+			pipe->plane_state = NULL;
+			pipe->stream = NULL;
+			pipe->top_pipe = NULL;
+			pipe->prev_odm_pipe = NULL;
+			if (pipe->stream_res.dsc)
+				dcn20_release_dsc(&context->res_ctx, dc->res_pool, &pipe->stream_res.dsc);
+			memset(&pipe->plane_res, 0, sizeof(pipe->plane_res));
+			memset(&pipe->stream_res, 0, sizeof(pipe->stream_res));
+			repopulate_pipes = true;
+		} else if (pipe->top_pipe && pipe->top_pipe->plane_state == pipe->plane_state) {
+			struct pipe_ctx *top_pipe = pipe->top_pipe;
+			struct pipe_ctx *bottom_pipe = pipe->bottom_pipe;
+
+			top_pipe->bottom_pipe = bottom_pipe;
+			if (bottom_pipe)
+				bottom_pipe->top_pipe = top_pipe;
+
+			pipe->top_pipe = NULL;
+			pipe->bottom_pipe = NULL;
+			pipe->plane_state = NULL;
+			pipe->stream = NULL;
+			memset(&pipe->plane_res, 0, sizeof(pipe->plane_res));
+			memset(&pipe->stream_res, 0, sizeof(pipe->stream_res));
+			repopulate_pipes = true;
+		} else
+			ASSERT(0); /* Should never try to merge master pipe */
+
+	}
+
+	for (i = 0, pipe_idx = -1; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+		struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+		struct pipe_ctx *hsplit_pipe = NULL;
+		bool odm;
+		int old_index = -1;
+
+		if (!pipe->stream || newly_split[i])
+			continue;
+
+		pipe_idx++;
+		odm = vba->ODMCombineEnabled[vba->pipe_plane[pipe_idx]] != dm_odm_combine_mode_disabled;
+
+		if (!pipe->plane_state && !odm)
+			continue;
+
+		if (split[i]) {
+			if (odm) {
+				if (split[i] == 4 && old_pipe->next_odm_pipe && old_pipe->next_odm_pipe->next_odm_pipe)
+					old_index = old_pipe->next_odm_pipe->next_odm_pipe->pipe_idx;
+				else if (old_pipe->next_odm_pipe)
+					old_index = old_pipe->next_odm_pipe->pipe_idx;
+			} else {
+				if (split[i] == 4 && old_pipe->bottom_pipe && old_pipe->bottom_pipe->bottom_pipe &&
+						old_pipe->bottom_pipe->bottom_pipe->plane_state == old_pipe->plane_state)
+					old_index = old_pipe->bottom_pipe->bottom_pipe->pipe_idx;
+				else if (old_pipe->bottom_pipe &&
+						old_pipe->bottom_pipe->plane_state == old_pipe->plane_state)
+					old_index = old_pipe->bottom_pipe->pipe_idx;
+			}
+			hsplit_pipe = dcn32_find_split_pipe(dc, context, old_index);
+			ASSERT(hsplit_pipe);
+			if (!hsplit_pipe)
+				goto validate_fail;
+
+			if (!dcn32_split_stream_for_mpc_or_odm(
+					dc, &context->res_ctx,
+					pipe, hsplit_pipe, odm))
+				goto validate_fail;
+
+			newly_split[hsplit_pipe->pipe_idx] = true;
+			repopulate_pipes = true;
+		}
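+		// A 4:1 split acquires two more secondary pipes on top of the one
+		// acquired above (1 primary + 3 secondaries in total)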
+		if (split[i] == 4) {
+			struct pipe_ctx *pipe_4to1;
+
+			if (odm && old_pipe->next_odm_pipe)
+				old_index = old_pipe->next_odm_pipe->pipe_idx;
+			else if (!odm && old_pipe->bottom_pipe &&
+						old_pipe->bottom_pipe->plane_state == old_pipe->plane_state)
+				old_index = old_pipe->bottom_pipe->pipe_idx;
+			else
+				old_index = -1;
+			pipe_4to1 = dcn32_find_split_pipe(dc, context, old_index);
+			ASSERT(pipe_4to1);
+			if (!pipe_4to1)
+				goto validate_fail;
+			if (!dcn32_split_stream_for_mpc_or_odm(
+					dc, &context->res_ctx,
+					pipe, pipe_4to1, odm))
+				goto validate_fail;
+			newly_split[pipe_4to1->pipe_idx] = true;
+
+			if (odm && old_pipe->next_odm_pipe && old_pipe->next_odm_pipe->next_odm_pipe
+					&& old_pipe->next_odm_pipe->next_odm_pipe->next_odm_pipe)
+				old_index = old_pipe->next_odm_pipe->next_odm_pipe->next_odm_pipe->pipe_idx;
+			else if (!odm && old_pipe->bottom_pipe && old_pipe->bottom_pipe->bottom_pipe &&
+					old_pipe->bottom_pipe->bottom_pipe->bottom_pipe &&
+					old_pipe->bottom_pipe->bottom_pipe->bottom_pipe->plane_state == old_pipe->plane_state)
+				old_index = old_pipe->bottom_pipe->bottom_pipe->bottom_pipe->pipe_idx;
+			else
+				old_index = -1;
+			pipe_4to1 = dcn32_find_split_pipe(dc, context, old_index);
+			ASSERT(pipe_4to1);
+			if (!pipe_4to1)
+				goto validate_fail;
+			if (!dcn32_split_stream_for_mpc_or_odm(
+					dc, &context->res_ctx,
+					hsplit_pipe, pipe_4to1, odm))
+				goto validate_fail;
+			newly_split[pipe_4to1->pipe_idx] = true;
+		}
+		if (odm)
+			dcn20_build_mapped_resource(dc, context, pipe->stream);
+	}
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+
+		if (pipe->plane_state) {
+			if (!resource_build_scaling_params(pipe))
+				goto validate_fail;
+		}
+	}
+
+	/* Actual dsc count per stream dsc validation */
+	if (!dcn20_validate_dsc(dc, context)) {
+		vba->ValidationStatus[vba->soc.num_states] = DML_FAIL_DSC_VALIDATION_FAILURE;
+		goto validate_fail;
+	}
+
+	if (repopulate_pipes)
+		pipe_cnt = dc->res_pool->funcs->populate_dml_pipes(dc, context, pipes, fast_validate);
+	*vlevel_out = vlevel;
+	*pipe_cnt_out = pipe_cnt;
+
+	out = true;
+	goto validate_out;
+
+validate_fail:
+	out = false;
+
+validate_out:
+	return out;
+}
+
+bool dcn32_validate_bandwidth(struct dc *dc,
+		struct dc_state *context,
+		bool fast_validate)
+{
+	bool out = false;
+
+	BW_VAL_TRACE_SETUP();
+
+	int vlevel = 0;
+	int pipe_cnt = 0;
+	display_e2e_pipe_params_st *pipes = kzalloc(dc->res_pool->pipe_count * sizeof(display_e2e_pipe_params_st), GFP_KERNEL);
+	DC_LOGGER_INIT(dc->ctx->logger);
+
+	BW_VAL_TRACE_COUNT();
+
+	DC_FP_START();
+	out = dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
+	DC_FP_END();
+
+	if (pipe_cnt == 0)
+		goto validate_out;
+
+	if (!out)
+		goto validate_fail;
+
+	BW_VAL_TRACE_END_VOLTAGE_LEVEL();
+
+	if (fast_validate) {
+		BW_VAL_TRACE_SKIP(fast);
+		goto validate_out;
+	}
+
+	dc->res_pool->funcs->calculate_wm_and_dlg(dc, context, pipes, pipe_cnt, vlevel);
+
+	BW_VAL_TRACE_END_WATERMARKS();
+
+	goto validate_out;
+
+validate_fail:
+	DC_LOG_WARNING("Mode Validation Warning: %s failed validation.\n",
+		dml_get_status_message(context->bw_ctx.dml.vba.ValidationStatus[context->bw_ctx.dml.vba.soc.num_states]));
+
+	BW_VAL_TRACE_SKIP(fail);
+	out = false;
+
+validate_out:
+	kfree(pipes);
+
+	BW_VAL_TRACE_FINISH();
+
+	return out;
+}
+
+
+static bool is_dual_plane(enum surface_pixel_format format)
+{
+	return format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN || format == SURFACE_PIXEL_FORMAT_GRPH_RGBE_ALPHA;
+}
+
+int dcn32_populate_dml_pipes_from_context(
+	struct dc *dc, struct dc_state *context,
+	display_e2e_pipe_params_st *pipes,
+	bool fast_validate)
+{
+	int i, pipe_cnt;
+	struct resource_context *res_ctx = &context->res_ctx;
+	struct pipe_ctx *pipe;
+
+	dcn20_populate_dml_pipes_from_context(dc, context, pipes, fast_validate);
+
+	for (i = 0, pipe_cnt = 0; i < dc->res_pool->pipe_count; i++) {
+		struct dc_crtc_timing *timing;
+
+		if (!res_ctx->pipe_ctx[i].stream)
+			continue;
+		pipe = &res_ctx->pipe_ctx[i];
+		timing = &pipe->stream->timing;
+
+		pipes[pipe_cnt].pipe.src.gpuvm = true;
+		pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_luma = 0;
+		pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_chroma = 0;
+		pipes[pipe_cnt].pipe.dest.vfront_porch = timing->v_front_porch;
+		pipes[pipe_cnt].pipe.src.gpuvm_min_page_size_kbytes = 256; // according to spreadsheet
+		pipes[pipe_cnt].pipe.src.unbounded_req_mode = false;
+		pipes[pipe_cnt].pipe.scale_ratio_depth.lb_depth = dm_lb_19;
+
+		switch (pipe->stream->mall_stream_config.type) {
+		case SUBVP_MAIN:
+			pipes[pipe_cnt].pipe.src.use_mall_for_pstate_change = dm_use_mall_pstate_change_sub_viewport;
+			break;
+		case SUBVP_PHANTOM:
+			pipes[pipe_cnt].pipe.src.use_mall_for_pstate_change = dm_use_mall_pstate_change_phantom_pipe;
+			pipes[pipe_cnt].pipe.src.use_mall_for_static_screen = dm_use_mall_static_screen_enable;
+			break;
+		case SUBVP_NONE:
+			pipes[pipe_cnt].pipe.src.use_mall_for_pstate_change = dm_use_mall_pstate_change_disable;
+			pipes[pipe_cnt].pipe.src.use_mall_for_static_screen = dm_use_mall_static_screen_disable;
+			break;
+		default:
+			break;
+		}
+
+		pipes[pipe_cnt].dout.dsc_input_bpc = 0;
+		if (pipes[pipe_cnt].dout.dsc_enable) {
+			switch (timing->display_color_depth) {
+			case COLOR_DEPTH_888:
+				pipes[pipe_cnt].dout.dsc_input_bpc = 8;
+				break;
+			case COLOR_DEPTH_101010:
+				pipes[pipe_cnt].dout.dsc_input_bpc = 10;
+				break;
+			case COLOR_DEPTH_121212:
+				pipes[pipe_cnt].dout.dsc_input_bpc = 12;
+				break;
+			default:
+				ASSERT(0);
+				break;
+			}
+		}
+		pipe_cnt++;
+	}
+	context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_DEFAULT_DET_SIZE;
+
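+	// Single pipe with a single (non dual-plane) surface: use a fixed 192KB DET
+	// allocation and enable unbounded request mode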
+	if (pipe_cnt == 1 && pipe->plane_state && !dc->debug.disable_z9_mpc) {
+		if (!is_dual_plane(pipe->plane_state->format)) {
+			context->bw_ctx.dml.ip.det_buffer_size_kbytes = 192;
+			pipes[0].pipe.src.unbounded_req_mode = true;
+		}
+	}
+
+	return pipe_cnt;
+}
+
+void dcn32_calculate_wm_and_dlg_fp(
+		struct dc *dc, struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int pipe_cnt,
+		int vlevel)
+{
+	int i, pipe_idx, vlevel_temp = 0;
+
+	double dcfclk = dcn3_2_soc.clock_limits[0].dcfclk_mhz;
+	double dcfclk_from_validation = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
+	unsigned int min_dram_speed_mts = context->bw_ctx.dml.vba.DRAMSpeed;
+	bool pstate_en = context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] !=
+			dm_dram_clock_change_unsupported;
+
+	/* Set B:
+	 * For Set B calculations use clocks from clock_limits[2] when available i.e. when SMU is present,
+	 * otherwise use an arbitrary low value from the spreadsheet for DCFCLK, as lower is safer for
+	 * watermark calculations and covers bootup clocks.
+	 * DCFCLK: soc.clock_limits[2] when available
+	 * UCLK: soc.clock_limits[2] when available
+	 */
+	if (dcn3_2_soc.num_states > 2) {
+		vlevel_temp = 2;
+		dcfclk = dcn3_2_soc.clock_limits[2].dcfclk_mhz;
+	} else
+		dcfclk = 615; //DCFCLK Vmin_lv
+
+	pipes[0].clks_cfg.voltage = vlevel_temp;
+	pipes[0].clks_cfg.dcfclk_mhz = dcfclk;
+	pipes[0].clks_cfg.socclk_mhz = context->bw_ctx.dml.soc.clock_limits[vlevel_temp].socclk_mhz;
+
+	if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].valid) {
+		context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.pstate_latency_us;
+		context->bw_ctx.dml.soc.fclk_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.fclk_change_latency_us;
+		context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.sr_enter_plus_exit_time_us;
+		context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_B].dml_input.sr_exit_time_us;
+	}
+	context->bw_ctx.bw.dcn.watermarks.b.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.b.usr_retraining_ns = get_usr_retraining_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+
+	/* Set D:
+	 * All clocks min.
+	 * DCFCLK: Min, as reported by PM FW when available
+	 * UCLK  : Min, as reported by PM FW when available
+	 * sr_enter_exit/sr_exit should be lower than used for DRAM (TBD after bringup or later, use as decided in Clk Mgr)
+	 */
+
+	if (dcn3_2_soc.num_states > 2) {
+		vlevel_temp = 0;
+		dcfclk = dc->clk_mgr->bw_params->clk_table.entries[0].dcfclk_mhz;
+	} else
+		dcfclk = 615; //DCFCLK Vmin_lv
+
+	pipes[0].clks_cfg.voltage = vlevel_temp;
+	pipes[0].clks_cfg.dcfclk_mhz = dcfclk;
+	pipes[0].clks_cfg.socclk_mhz = context->bw_ctx.dml.soc.clock_limits[vlevel_temp].socclk_mhz;
+
+	if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].valid) {
+		context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.pstate_latency_us;
+		context->bw_ctx.dml.soc.fclk_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.fclk_change_latency_us;
+		context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.sr_enter_plus_exit_time_us;
+		context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_D].dml_input.sr_exit_time_us;
+	}
+	context->bw_ctx.bw.dcn.watermarks.d.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.d.usr_retraining_ns = get_usr_retraining_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+
+	/* Set C, for Dummy P-State:
+	 * All clocks min.
+	 * DCFCLK: Min, as reported by PM FW, when available
+	 * UCLK  : Min,  as reported by PM FW, when available
+	 * pstate latency as per UCLK state dummy pstate latency
+	 */
+
+	if (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].valid) {
+		unsigned int min_dram_speed_mts_margin = 160;
+
+		if ((!pstate_en))
+			min_dram_speed_mts = dc->clk_mgr->bw_params->clk_table.entries[dc->clk_mgr->bw_params->clk_table.num_entries - 1].memclk_mhz * 16;
+
+		/* Find the largest table entry whose dram speed is below the current dram speed
+		 * (plus margin); anything below DPM0 still uses DPM0.
+		 */
+		for (i = 3; i > 0; i--)
+			if (min_dram_speed_mts + min_dram_speed_mts_margin > dc->clk_mgr->bw_params->dummy_pstate_table[i].dram_speed_mts)
+				break;
+
+		context->bw_ctx.dml.soc.dram_clock_change_latency_us = dc->clk_mgr->bw_params->dummy_pstate_table[i].dummy_pstate_latency_us;
+		context->bw_ctx.dml.soc.dummy_pstate_latency_us = dc->clk_mgr->bw_params->dummy_pstate_table[i].dummy_pstate_latency_us;
+		context->bw_ctx.dml.soc.fclk_change_latency_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.fclk_change_latency_us;
+		context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.sr_enter_plus_exit_time_us;
+		context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.sr_exit_time_us;
+	}
+	context->bw_ctx.bw.dcn.watermarks.c.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	context->bw_ctx.bw.dcn.watermarks.c.usr_retraining_ns = get_usr_retraining_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+
+	if ((!pstate_en) && (dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].valid)) {
+		/* The only difference between A and C is p-state latency; if p-state is not supported
+		 * with full p-state latency we want to calculate DLG based on dummy p-state latency.
+		 * The Set A p-state watermark is set to 0 on DCN32 when p-state is unsupported; keep
+		 * this behaviour for now.
+		 */
+		context->bw_ctx.bw.dcn.watermarks.a = context->bw_ctx.bw.dcn.watermarks.c;
+		context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns = 0;
+	} else {
+		/* Set A:
+		 * All clocks min.
+		 * DCFCLK: Min, as reported by PM FW, when available
+		 * UCLK: Min, as reported by PM FW, when available
+		 */
+		dc->res_pool->funcs->update_soc_for_wm_a(dc, context);
+		context->bw_ctx.bw.dcn.watermarks.a.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns = get_wm_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.pte_meta_urgent_ns = get_wm_memory_trip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.frac_urg_bw_nom = get_fraction_of_urgent_bandwidth(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.frac_urg_bw_flip = get_fraction_of_urgent_bandwidth_imm_flip(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.urgent_latency_ns = get_urgent_latency(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.fclk_pstate_change_ns = get_fclk_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+		context->bw_ctx.bw.dcn.watermarks.a.usr_retraining_ns = get_usr_retraining_watermark(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+	}
+
+	pipes[0].clks_cfg.voltage = vlevel;
+	pipes[0].clks_cfg.dcfclk_mhz = dcfclk_from_validation;
+	pipes[0].clks_cfg.socclk_mhz = context->bw_ctx.dml.soc.clock_limits[vlevel].socclk_mhz;
+
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+		if (!context->res_ctx.pipe_ctx[i].stream)
+			continue;
+
+		pipes[pipe_idx].clks_cfg.dispclk_mhz = get_dispclk_calculated(&context->bw_ctx.dml, pipes, pipe_cnt);
+		pipes[pipe_idx].clks_cfg.dppclk_mhz = get_dppclk_calculated(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
+
+		if (dc->config.forced_clocks) {
+			pipes[pipe_idx].clks_cfg.dispclk_mhz = context->bw_ctx.dml.soc.clock_limits[0].dispclk_mhz;
+			pipes[pipe_idx].clks_cfg.dppclk_mhz = context->bw_ctx.dml.soc.clock_limits[0].dppclk_mhz;
+		}
+		if (dc->debug.min_disp_clk_khz > pipes[pipe_idx].clks_cfg.dispclk_mhz * 1000)
+			pipes[pipe_idx].clks_cfg.dispclk_mhz = dc->debug.min_disp_clk_khz / 1000.0;
+		if (dc->debug.min_dpp_clk_khz > pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000)
+			pipes[pipe_idx].clks_cfg.dppclk_mhz = dc->debug.min_dpp_clk_khz / 1000.0;
+
+		pipe_idx++;
+	}
+
+	context->perf_params.stutter_period_us = context->bw_ctx.dml.vba.StutterPeriod;
+
+	dcn32_calculate_dlg_params(dc, context, pipes, pipe_cnt, vlevel);
+
+	if (!pstate_en)
+		/* Restore full p-state latency */
+		context->bw_ctx.dml.soc.dram_clock_change_latency_us =
+				dc->clk_mgr->bw_params->wm_table.nv_entries[WM_A].dml_input.pstate_latency_us;
+}
+
+static struct dc_cap_funcs cap_funcs = {
+	.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
+};
+
+
+static void dcn32_get_optimal_dcfclk_fclk_for_uclk(unsigned int uclk_mts,
+		unsigned int *optimal_dcfclk,
+		unsigned int *optimal_fclk)
+{
+	double bw_from_dram, bw_from_dram1, bw_from_dram2;
+
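+	/* Effective DRAM bandwidth = UCLK (MT/s) * channels * channel width (bytes) *
+	 * average utilization %; the smaller of the DRAM- and SDP-limited values is then
+	 * divided by the respective return bus width * SDP utilization to get FCLK/DCFCLK.
+	 */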
+	bw_from_dram1 = uclk_mts * dcn3_2_soc.num_chans *
+		dcn3_2_soc.dram_channel_width_bytes * (dcn3_2_soc.max_avg_dram_bw_use_normal_percent / 100);
+	bw_from_dram2 = uclk_mts * dcn3_2_soc.num_chans *
+		dcn3_2_soc.dram_channel_width_bytes * (dcn3_2_soc.max_avg_sdp_bw_use_normal_percent / 100);
+
+	bw_from_dram = (bw_from_dram1 < bw_from_dram2) ? bw_from_dram1 : bw_from_dram2;
+
+	if (optimal_fclk)
+		*optimal_fclk = bw_from_dram /
+		(dcn3_2_soc.fabric_datapath_to_dcn_data_return_bytes * (dcn3_2_soc.max_avg_sdp_bw_use_normal_percent / 100));
+
+	if (optimal_dcfclk)
+		*optimal_dcfclk =  bw_from_dram /
+		(dcn3_2_soc.return_bus_width_bytes * (dcn3_2_soc.max_avg_sdp_bw_use_normal_percent / 100));
+}
+
+void dcn32_calculate_wm_and_dlg(
+		struct dc *dc, struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int pipe_cnt,
+		int vlevel)
+{
+	DC_FP_START();
+	dcn32_calculate_wm_and_dlg_fp(
+		dc, context,
+		pipes,
+		pipe_cnt,
+		vlevel);
+	DC_FP_END();
+}
+
+static bool is_dtbclk_required(struct dc *dc, struct dc_state *context)
+{
+	int i;
+
+	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+		if (!context->res_ctx.pipe_ctx[i].stream)
+			continue;
+		if (is_dp_128b_132b_signal(&context->res_ctx.pipe_ctx[i]))
+			return true;
+	}
+	return false;
+}
+
+void dcn32_calculate_dlg_params(struct dc *dc, struct dc_state *context, display_e2e_pipe_params_st *pipes,
+		int pipe_cnt, int vlevel)
+{
+	int i, pipe_idx;
+	bool usr_retraining_support = false;
+
+	/* Writeback MCIF_WB arbitration parameters */
+	dc->res_pool->funcs->set_mcif_arb_params(dc, context, pipes, pipe_cnt);
+
+	context->bw_ctx.bw.dcn.clk.dispclk_khz = context->bw_ctx.dml.vba.DISPCLK * 1000;
+	context->bw_ctx.bw.dcn.clk.dcfclk_khz = context->bw_ctx.dml.vba.DCFCLK * 1000;
+	context->bw_ctx.bw.dcn.clk.socclk_khz = context->bw_ctx.dml.vba.SOCCLK * 1000;
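+	/* vba.DRAMSpeed is in MT/s; dividing by 16 gives the corresponding
+	 * memclk in MHz (cf. the memclk_mhz * 16 conversion in
+	 * dcn32_update_bw_bounding_box), and * 1000 converts to kHz.
+	 */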
+	context->bw_ctx.bw.dcn.clk.dramclk_khz = context->bw_ctx.dml.vba.DRAMSpeed * 1000 / 16;
+	context->bw_ctx.bw.dcn.clk.dcfclk_deep_sleep_khz = context->bw_ctx.dml.vba.DCFCLKDeepSleep * 1000;
+	context->bw_ctx.bw.dcn.clk.fclk_khz = context->bw_ctx.dml.vba.FabricClock * 1000;
+	context->bw_ctx.bw.dcn.clk.p_state_change_support =
+			context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb]
+					!= dm_dram_clock_change_unsupported;
+
+	/*
+	 * TODO: needs FAMS
+	 * Pstate change might not be supported by hardware, but it might be
+	 * possible with firmware driven vertical blank stretching.
+	 */
+	// context->bw_ctx.bw.dcn.clk.p_state_change_support |= context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching;
+
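+	/* dppclk is recomputed below as the maximum requirement over all
+	 * active pipes.
+	 */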
+	context->bw_ctx.bw.dcn.clk.dppclk_khz = 0;
+	context->bw_ctx.bw.dcn.clk.dtbclk_en = is_dtbclk_required(dc, context);
+	if (context->bw_ctx.dml.vba.FCLKChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] == dm_fclock_change_unsupported)
+		context->bw_ctx.bw.dcn.clk.fclk_p_state_change_support = false;
+	else
+		context->bw_ctx.bw.dcn.clk.fclk_p_state_change_support = true;
+
+	usr_retraining_support = context->bw_ctx.dml.vba.USRRetrainingSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb];
+	ASSERT(usr_retraining_support);
+
+	if (context->bw_ctx.bw.dcn.clk.dispclk_khz < dc->debug.min_disp_clk_khz)
+		context->bw_ctx.bw.dcn.clk.dispclk_khz = dc->debug.min_disp_clk_khz;
+
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+		if (!context->res_ctx.pipe_ctx[i].stream)
+			continue;
+		pipes[pipe_idx].pipe.dest.vstartup_start = get_vstartup(&context->bw_ctx.dml, pipes, pipe_cnt,
+				pipe_idx);
+		pipes[pipe_idx].pipe.dest.vupdate_offset = get_vupdate_offset(&context->bw_ctx.dml, pipes, pipe_cnt,
+				pipe_idx);
+		pipes[pipe_idx].pipe.dest.vupdate_width = get_vupdate_width(&context->bw_ctx.dml, pipes, pipe_cnt,
+				pipe_idx);
+		pipes[pipe_idx].pipe.dest.vready_offset = get_vready_offset(&context->bw_ctx.dml, pipes, pipe_cnt,
+				pipe_idx);
+		if (context->res_ctx.pipe_ctx[i].stream->mall_stream_config.type == SUBVP_PHANTOM) {
+			// Phantom pipe requires that DET_SIZE = 0 and no unbounded requests
+			context->res_ctx.pipe_ctx[i].det_buffer_size_kb = 0;
+			context->res_ctx.pipe_ctx[i].unbounded_req = false;
+		} else {
+			context->res_ctx.pipe_ctx[i].det_buffer_size_kb =
+					context->bw_ctx.dml.ip.det_buffer_size_kbytes;
+			context->res_ctx.pipe_ctx[i].unbounded_req = pipes[pipe_idx].pipe.src.unbounded_req_mode;
+		}
+		if (context->bw_ctx.bw.dcn.clk.dppclk_khz < pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000)
+			context->bw_ctx.bw.dcn.clk.dppclk_khz = pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000;
+		context->res_ctx.pipe_ctx[i].plane_res.bw.dppclk_khz = pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000;
+		context->res_ctx.pipe_ctx[i].pipe_dlg_param = pipes[pipe_idx].pipe.dest;
+		pipe_idx++;
+	}
+	/* save an original dppclk copy */
+	context->bw_ctx.bw.dcn.clk.bw_dppclk_khz = context->bw_ctx.bw.dcn.clk.dppclk_khz;
+	context->bw_ctx.bw.dcn.clk.bw_dispclk_khz = context->bw_ctx.bw.dcn.clk.dispclk_khz;
+	context->bw_ctx.bw.dcn.clk.max_supported_dppclk_khz = context->bw_ctx.dml.soc.clock_limits[vlevel].dppclk_mhz
+			* 1000;
+	context->bw_ctx.bw.dcn.clk.max_supported_dispclk_khz = context->bw_ctx.dml.soc.clock_limits[vlevel].dispclk_mhz
+			* 1000;
+
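+	/* Whatever is left of the return buffer after carving out per-pipe DET
+	 * becomes the compressed buffer (COMPBUF) size.
+	 */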
+	context->bw_ctx.bw.dcn.compbuf_size_kb = context->bw_ctx.dml.ip.config_return_buffer_size_in_kbytes
+			- context->bw_ctx.dml.ip.det_buffer_size_kbytes * pipe_idx;
+
+	for (i = 0, pipe_idx = 0; i < dc->res_pool->pipe_count; i++) {
+
+		if (!context->res_ctx.pipe_ctx[i].stream)
+			continue;
+
+		/* cstate disabled on 201 */
+//		if (dc->ctx->dce_version == DCN_VERSION_2_01)
+//			cstate_en = false;
+
+
+		context->bw_ctx.dml.funcs.rq_dlg_get_dlg_reg_v2(&context->bw_ctx.dml,
+				&context->res_ctx.pipe_ctx[i].dlg_regs, &context->res_ctx.pipe_ctx[i].ttu_regs, pipes,
+				pipe_cnt, pipe_idx);
+
+		context->bw_ctx.dml.funcs.rq_dlg_get_rq_reg_v2(&context->res_ctx.pipe_ctx[i].rq_regs,
+				&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
+
+		pipe_idx++;
+	}
+}
+
+/* dcn32_update_bw_bounding_box
+ * Override some of the dcn3_2 ip_or_soc initial parameters hardcoded from the spreadsheet
+ * with actual values for the given dGPU SKU:
+ * - with a few options passed in via dc->config
+ * - with dentist_vco_frequency from the Clk Mgr (currently hardcoded, but might need to come from PM FW)
+ * - with latency values passed in via dc->bb_overrides (in ns units) for debugging purposes
+ * - with latencies from VBIOS (in 100ns units) if available for the given dGPU SKU
+ * - with the number of DRAM channels from VBIOS (which may differ between dGPU SKUs of the same ASIC)
+ * - with clock levels from the clk_table entries passed in by the Clk Mgr as reported by PM FW for the
+ *   different clocks (which may differ between dGPU SKUs of the same ASIC)
+ */
+static void dcn32_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+{
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) {
+
+		/* Overrides from dc->config options */
+		dcn3_2_ip.clamp_min_dcfclk = dc->config.clamp_min_dcfclk;
+
+		/* Override from passed dc->bb_overrides if available*/
+		if ((int)(dcn3_2_soc.sr_exit_time_us * 1000) != dc->bb_overrides.sr_exit_time_ns
+				&& dc->bb_overrides.sr_exit_time_ns) {
+			dcn3_2_soc.sr_exit_time_us = dc->bb_overrides.sr_exit_time_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_2_soc.sr_enter_plus_exit_time_us * 1000)
+				!= dc->bb_overrides.sr_enter_plus_exit_time_ns
+				&& dc->bb_overrides.sr_enter_plus_exit_time_ns) {
+			dcn3_2_soc.sr_enter_plus_exit_time_us =
+				dc->bb_overrides.sr_enter_plus_exit_time_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_2_soc.urgent_latency_us * 1000) != dc->bb_overrides.urgent_latency_ns
+			&& dc->bb_overrides.urgent_latency_ns) {
+			dcn3_2_soc.urgent_latency_us = dc->bb_overrides.urgent_latency_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_2_soc.dram_clock_change_latency_us * 1000)
+				!= dc->bb_overrides.dram_clock_change_latency_ns
+				&& dc->bb_overrides.dram_clock_change_latency_ns) {
+			dcn3_2_soc.dram_clock_change_latency_us =
+				dc->bb_overrides.dram_clock_change_latency_ns / 1000.0;
+		}
+
+		if ((int)(dcn3_2_soc.dummy_pstate_latency_us * 1000)
+				!= dc->bb_overrides.dummy_clock_change_latency_ns
+				&& dc->bb_overrides.dummy_clock_change_latency_ns) {
+			dcn3_2_soc.dummy_pstate_latency_us =
+				dc->bb_overrides.dummy_clock_change_latency_ns / 1000.0;
+		}
+
+		/* Override from VBIOS if VBIOS bb_info available */
+		if (dc->ctx->dc_bios->funcs->get_soc_bb_info) {
+			struct bp_soc_bb_info bb_info = {0};
+
+			if (dc->ctx->dc_bios->funcs->get_soc_bb_info(dc->ctx->dc_bios, &bb_info) == BP_RESULT_OK) {
+				if (bb_info.dram_clock_change_latency_100ns > 0)
+					dcn3_2_soc.dram_clock_change_latency_us = bb_info.dram_clock_change_latency_100ns * 10;
+
+			if (bb_info.dram_sr_enter_exit_latency_100ns > 0)
+				dcn3_2_soc.sr_enter_plus_exit_time_us = bb_info.dram_sr_enter_exit_latency_100ns * 10;
+
+			if (bb_info.dram_sr_exit_latency_100ns > 0)
+				dcn3_2_soc.sr_exit_time_us = bb_info.dram_sr_exit_latency_100ns * 10;
+			}
+		}
+
+		/* Override from VBIOS for num_chan */
+		if (dc->ctx->dc_bios->vram_info.num_chans)
+			dcn3_2_soc.num_chans = dc->ctx->dc_bios->vram_info.num_chans;
+
+		if (dc->ctx->dc_bios->vram_info.dram_channel_width_bytes)
+			dcn3_2_soc.dram_channel_width_bytes = dc->ctx->dc_bios->vram_info.dram_channel_width_bytes;
+
+	}
+
+	/* Override dispclk_dppclk_vco_speed_mhz from Clk Mgr */
+	dcn3_2_soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+	dc->dml.soc.dispclk_dppclk_vco_speed_mhz = dc->clk_mgr->dentist_vco_freq_khz / 1000.0;
+
+	/* Override clock levels from the Clk Mgr table entries as reported by PM FW */
+	if ((!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment)) && (bw_params->clk_table.entries[0].memclk_mhz)) {
+		unsigned int i = 0, j = 0, num_states = 0;
+
+		unsigned int dcfclk_mhz[DC__VOLTAGE_STATES] = {0};
+		unsigned int dram_speed_mts[DC__VOLTAGE_STATES] = {0};
+		unsigned int optimal_uclk_for_dcfclk_sta_targets[DC__VOLTAGE_STATES] = {0};
+		unsigned int optimal_dcfclk_for_uclk[DC__VOLTAGE_STATES] = {0};
+
+		unsigned int dcfclk_sta_targets[DC__VOLTAGE_STATES] = {615, 906, 1324, 1564};
+		unsigned int num_dcfclk_sta_targets = 4, num_uclk_states = 0;
+		unsigned int max_dcfclk_mhz = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0, max_phyclk_mhz = 0;
+
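+		/* Find the highest clocks reported in the PM FW clock table;
+		 * fall back to the spreadsheet defaults for any clock that is
+		 * not populated.
+		 */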
+		for (i = 0; i < MAX_NUM_DPM_LVL; i++) {
+			if (bw_params->clk_table.entries[i].dcfclk_mhz > max_dcfclk_mhz)
+				max_dcfclk_mhz = bw_params->clk_table.entries[i].dcfclk_mhz;
+			if (bw_params->clk_table.entries[i].dispclk_mhz > max_dispclk_mhz)
+				max_dispclk_mhz = bw_params->clk_table.entries[i].dispclk_mhz;
+			if (bw_params->clk_table.entries[i].dppclk_mhz > max_dppclk_mhz)
+				max_dppclk_mhz = bw_params->clk_table.entries[i].dppclk_mhz;
+			if (bw_params->clk_table.entries[i].phyclk_mhz > max_phyclk_mhz)
+				max_phyclk_mhz = bw_params->clk_table.entries[i].phyclk_mhz;
+		}
+		if (!max_dcfclk_mhz)
+			max_dcfclk_mhz = dcn3_2_soc.clock_limits[0].dcfclk_mhz;
+		if (!max_dispclk_mhz)
+			max_dispclk_mhz = dcn3_2_soc.clock_limits[0].dispclk_mhz;
+		if (!max_dppclk_mhz)
+			max_dppclk_mhz = dcn3_2_soc.clock_limits[0].dppclk_mhz;
+		if (!max_phyclk_mhz)
+			max_phyclk_mhz = dcn3_2_soc.clock_limits[0].phyclk_mhz;
+
+		if (max_dcfclk_mhz > dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+			// If max DCFCLK is greater than the max DCFCLK STA target, insert into the DCFCLK STA target array
+			dcfclk_sta_targets[num_dcfclk_sta_targets] = max_dcfclk_mhz;
+			num_dcfclk_sta_targets++;
+		} else if (max_dcfclk_mhz < dcfclk_sta_targets[num_dcfclk_sta_targets-1]) {
+			// If max DCFCLK is less than the max DCFCLK STA target, cap values and remove duplicates
+			for (i = 0; i < num_dcfclk_sta_targets; i++) {
+				if (dcfclk_sta_targets[i] > max_dcfclk_mhz) {
+					dcfclk_sta_targets[i] = max_dcfclk_mhz;
+					break;
+				}
+			}
+			// Update size of array since we "removed" duplicates
+			num_dcfclk_sta_targets = i + 1;
+		}
+
+		num_uclk_states = bw_params->clk_table.num_entries;
+
+		// Calculate optimal dcfclk for each uclk
+		for (i = 0; i < num_uclk_states; i++) {
+			dcn32_get_optimal_dcfclk_fclk_for_uclk(bw_params->clk_table.entries[i].memclk_mhz * 16,
+					&optimal_dcfclk_for_uclk[i], NULL);
+			if (optimal_dcfclk_for_uclk[i] < bw_params->clk_table.entries[0].dcfclk_mhz) {
+				optimal_dcfclk_for_uclk[i] = bw_params->clk_table.entries[0].dcfclk_mhz;
+			}
+		}
+
+		// Calculate optimal uclk for each dcfclk sta target
+		for (i = 0; i < num_dcfclk_sta_targets; i++) {
+			for (j = 0; j < num_uclk_states; j++) {
+				if (dcfclk_sta_targets[i] < optimal_dcfclk_for_uclk[j]) {
+					optimal_uclk_for_dcfclk_sta_targets[i] =
+							bw_params->clk_table.entries[j].memclk_mhz * 16;
+					break;
+				}
+			}
+		}
+
+		i = 0;
+		j = 0;
+		// create the final dcfclk and uclk table
+		while (i < num_dcfclk_sta_targets && j < num_uclk_states && num_states < DC__VOLTAGE_STATES) {
+			if (dcfclk_sta_targets[i] < optimal_dcfclk_for_uclk[j] && i < num_dcfclk_sta_targets) {
+				dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+				dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+			} else {
+				if (j < num_uclk_states && optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+					dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+					dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+				} else {
+					j = num_uclk_states;
+				}
+			}
+		}
+
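+		/* Append any DCFCLK STA targets or UCLK states left over after
+		 * the merge above, as long as the state count stays within
+		 * DC__VOLTAGE_STATES.
+		 */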
+		while (i < num_dcfclk_sta_targets && num_states < DC__VOLTAGE_STATES) {
+			dcfclk_mhz[num_states] = dcfclk_sta_targets[i];
+			dram_speed_mts[num_states++] = optimal_uclk_for_dcfclk_sta_targets[i++];
+		}
+
+		while (j < num_uclk_states && num_states < DC__VOLTAGE_STATES &&
+				optimal_dcfclk_for_uclk[j] <= max_dcfclk_mhz) {
+			dcfclk_mhz[num_states] = optimal_dcfclk_for_uclk[j];
+			dram_speed_mts[num_states++] = bw_params->clk_table.entries[j++].memclk_mhz * 16;
+		}
+
+		dcn3_2_soc.num_states = num_states;
+		for (i = 0; i < dcn3_2_soc.num_states; i++) {
+			dcn3_2_soc.clock_limits[i].state = i;
+			dcn3_2_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
+			dcn3_2_soc.clock_limits[i].fabricclk_mhz = dcfclk_mhz[i];
+			dcn3_2_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+
+			/* Fill all states with max values of all these clocks */
+			dcn3_2_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
+			dcn3_2_soc.clock_limits[i].dppclk_mhz  = max_dppclk_mhz;
+			dcn3_2_soc.clock_limits[i].phyclk_mhz  = max_phyclk_mhz;
+			dcn3_2_soc.clock_limits[i].dscclk_mhz  = max_dispclk_mhz / 3;
+
+			/* Populate from bw_params for DTBCLK, SOCCLK */
+			if (!bw_params->clk_table.entries[i].dtbclk_mhz && i > 0)
+				dcn3_2_soc.clock_limits[i].dtbclk_mhz  = dcn3_2_soc.clock_limits[i-1].dtbclk_mhz;
+			else
+				dcn3_2_soc.clock_limits[i].dtbclk_mhz  = bw_params->clk_table.entries[i].dtbclk_mhz;
+
+			if (!bw_params->clk_table.entries[i].socclk_mhz && i > 0)
+				dcn3_2_soc.clock_limits[i].socclk_mhz = dcn3_2_soc.clock_limits[i-1].socclk_mhz;
+			else
+				dcn3_2_soc.clock_limits[i].socclk_mhz = bw_params->clk_table.entries[i].socclk_mhz;
+
+			/* These clocks cannot come from bw_params, always fill from dcn3_2_soc[0] */
+			/* PHYCLK_D18, PHYCLK_D32 */
+			dcn3_2_soc.clock_limits[i].phyclk_d18_mhz = dcn3_2_soc.clock_limits[0].phyclk_d18_mhz;
+			dcn3_2_soc.clock_limits[i].phyclk_d32_mhz = dcn3_2_soc.clock_limits[0].phyclk_d32_mhz;
+		}
+
+		/* Re-init DML with updated bb */
+		dml_init_instance(&dc->dml, &dcn3_2_soc, &dcn3_2_ip, DML_PROJECT_DCN32);
+		if (dc->current_state)
+			dml_init_instance(&dc->current_state->bw_ctx.dml, &dcn3_2_soc, &dcn3_2_ip, DML_PROJECT_DCN32);
+	}
+}
+
+static struct resource_funcs dcn32_res_pool_funcs = {
+	.destroy = dcn32_destroy_resource_pool,
+	.link_enc_create = dcn32_link_encoder_create,
+	.link_enc_create_minimal = NULL,
+	.panel_cntl_create = dcn32_panel_cntl_create,
+	.validate_bandwidth = dcn32_validate_bandwidth,
+	.calculate_wm_and_dlg = dcn32_calculate_wm_and_dlg,
+	.populate_dml_pipes = dcn32_populate_dml_pipes_from_context,
+	.acquire_idle_pipe_for_layer = dcn20_acquire_idle_pipe_for_layer,
+	.add_stream_to_ctx = dcn30_add_stream_to_ctx,
+	.add_dsc_to_stream_resource = dcn20_add_dsc_to_stream_resource,
+	.remove_stream_from_ctx = dcn20_remove_stream_from_ctx,
+	.populate_dml_writeback_from_context = dcn30_populate_dml_writeback_from_context,
+	.set_mcif_arb_params = dcn30_set_mcif_arb_params,
+	.find_first_free_match_stream_enc_for_link = dcn10_find_first_free_match_stream_enc_for_link,
+	.acquire_post_bldn_3dlut = dcn32_acquire_post_bldn_3dlut,
+	.release_post_bldn_3dlut = dcn32_release_post_bldn_3dlut,
+	.update_bw_bounding_box = dcn32_update_bw_bounding_box,
+	.patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+	.update_soc_for_wm_a = dcn30_update_soc_for_wm_a,
+	.add_phantom_pipes = dcn32_add_phantom_pipes,
+	.remove_phantom_pipes = dcn32_remove_phantom_pipes,
+};
+
+
+static bool dcn32_resource_construct(
+	uint8_t num_virtual_links,
+	struct dc *dc,
+	struct dcn32_resource_pool *pool)
+{
+	int i, j;
+	struct dc_context *ctx = dc->ctx;
+	struct irq_service_init_data init_data;
+	struct ddc_service_init_data ddc_init_data = {0};
+	uint32_t pipe_fuses = 0;
+	uint32_t num_pipes  = 4;
+
+	DC_FP_START();
+
+	ctx->dc_bios->regs = &bios_regs;
+
+	pool->base.res_cap = &res_cap_dcn32;
+	/* max number of pipes for ASIC before checking for pipe fuses */
+	num_pipes  = pool->base.res_cap->num_timing_generator;
+	pipe_fuses = REG_READ(CC_DC_PIPE_DIS);
+
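+	/* Each bit set in the pipe fuse register disables one pipe instance */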
+	for (i = 0; i < pool->base.res_cap->num_timing_generator; i++)
+		if (pipe_fuses & 1 << i)
+			num_pipes--;
+
+	if (pipe_fuses & 1)
+		ASSERT(0); //Unexpected - Pipe 0 should always be fully functional!
+
+	if (pipe_fuses & CC_DC_PIPE_DIS__DC_FULL_DIS_MASK)
+		ASSERT(0); //Entire DCN is harvested!
+
+	/* The initial values are hardcoded within the DML lib; if any ASIC pipes
+	 * are fused off, update max_num_dpp and max_num_otg for DML accordingly.
+	 */
+	dcn3_2_ip.max_num_dpp = num_pipes;
+	dcn3_2_ip.max_num_otg = num_pipes;
+
+	pool->base.funcs = &dcn32_res_pool_funcs;
+
+	/*************************************************
+	 *  Resource + asic cap hardcoding               *
+	 *************************************************/
+	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
+	pool->base.timing_generator_count = num_pipes;
+	pool->base.pipe_count = num_pipes;
+	pool->base.mpcc_count = num_pipes;
+	dc->caps.max_downscale_ratio = 600;
+	dc->caps.i2c_speed_in_khz = 100;
+	dc->caps.i2c_speed_in_khz_hdcp = 100; /*1.4 w/a applied by default*/
+	dc->caps.max_cursor_size = 256;
+	dc->caps.min_horizontal_blanking_period = 80;
+	dc->caps.dmdata_alloc_size = 2048;
+	dc->caps.mall_size_per_mem_channel = 0;
+	dc->caps.mall_size_total = 0;
+	dc->caps.cursor_cache_size = dc->caps.max_cursor_size * dc->caps.max_cursor_size * 8;
+
+	dc->caps.cache_line_size = 64;
+	dc->caps.cache_num_ways = 16;
+	dc->caps.max_cab_allocation_bytes = 67108864; // 64MB = 1024 * 1024 * 64
+	dc->caps.subvp_fw_processing_delay_us = 15;
+	dc->caps.subvp_prefetch_end_to_mall_start_us = 15;
+	dc->caps.subvp_pstate_allow_width_us = 20;
+	dc->caps.subvp_vertical_int_margin_us = 30;
+
+	dc->caps.max_slave_planes = 2;
+	dc->caps.max_slave_yuv_planes = 2;
+	dc->caps.max_slave_rgb_planes = 2;
+	dc->caps.post_blend_color_processing = true;
+	dc->caps.force_dp_tps4_for_cp2520 = true;
+	dc->caps.dp_hpo = true;
+	dc->caps.edp_dsc_support = true;
+	dc->caps.extended_aux_timeout_support = true;
+	dc->caps.dmcub_support = true;
+
+	/* Color pipeline capabilities */
+	dc->caps.color.dpp.dcn_arch = 1;
+	dc->caps.color.dpp.input_lut_shared = 0;
+	dc->caps.color.dpp.icsc = 1;
+	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
+	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
+	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
+	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
+	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
+	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
+	dc->caps.color.dpp.post_csc = 1;
+	dc->caps.color.dpp.gamma_corr = 1;
+	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
+
+	dc->caps.color.dpp.hw_3d_lut = 1;
+	dc->caps.color.dpp.ogam_ram = 0;  //Blnd Gam also removed
+	// no OGAM ROM on DCN2 and later ASICs
+	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
+	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
+	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
+	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
+	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
+	dc->caps.color.dpp.ocsc = 0;
+
+	dc->caps.color.mpc.gamut_remap = 1;
+	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
+	dc->caps.color.mpc.ogam_ram = 1;
+	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
+	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
+	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
+	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
+	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
+	dc->caps.color.mpc.ocsc = 1;
+
+	/* Use pipe context based otg sync logic */
+	dc->config.use_pipe_ctx_sync_logic = true;
+
+	/* read VBIOS LTTPR caps */
+	{
+		if (ctx->dc_bios->funcs->get_lttpr_caps) {
+			enum bp_result bp_query_result;
+			uint8_t is_vbios_lttpr_enable = 0;
+
+			bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
+			dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
+		}
+
+		/* interop bit is implicit */
+		{
+			dc->caps.vbios_lttpr_aware = true;
+		}
+	}
+
+	if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV)
+		dc->debug = debug_defaults_drv;
+	else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {
+		dc->debug = debug_defaults_diags;
+	} else
+		dc->debug = debug_defaults_diags;
+	// Init the vm_helper
+	if (dc->vm_helper)
+		vm_helper_init(dc->vm_helper, 16);
+
+	/*************************************************
+	 *  Create resources                             *
+	 *************************************************/
+
+	/* Clock Sources for Pixel Clock*/
+	pool->base.clock_sources[DCN32_CLK_SRC_PLL0] =
+			dcn32_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL0,
+				&clk_src_regs[0], false);
+	pool->base.clock_sources[DCN32_CLK_SRC_PLL1] =
+			dcn32_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL1,
+				&clk_src_regs[1], false);
+	pool->base.clock_sources[DCN32_CLK_SRC_PLL2] =
+			dcn32_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL2,
+				&clk_src_regs[2], false);
+	pool->base.clock_sources[DCN32_CLK_SRC_PLL3] =
+			dcn32_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL3,
+				&clk_src_regs[3], false);
+	pool->base.clock_sources[DCN32_CLK_SRC_PLL4] =
+			dcn32_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL4,
+				&clk_src_regs[4], false);
+
+	pool->base.clk_src_count = DCN32_CLK_SRC_TOTAL;
+
+	/* todo: not reuse phy_pll registers */
+	pool->base.dp_clock_source =
+			dcn32_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_ID_DP_DTO,
+				&clk_src_regs[0], true);
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] == NULL) {
+			dm_error("DC: failed to create clock sources!\n");
+			BREAK_TO_DEBUGGER();
+			goto create_fail;
+		}
+	}
+
+	/* DCCG */
+	pool->base.dccg = dccg32_create(ctx, &dccg_regs, &dccg_shift, &dccg_mask);
+	if (pool->base.dccg == NULL) {
+		dm_error("DC: failed to create dccg!\n");
+		BREAK_TO_DEBUGGER();
+		goto create_fail;
+	}
+
+	/* DML */
+	if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment))
+		dml_init_instance(&dc->dml, &dcn3_2_soc, &dcn3_2_ip, DML_PROJECT_DCN32);
+
+	/* IRQ Service */
+	init_data.ctx = dc->ctx;
+	pool->base.irqs = dal_irq_service_dcn32_create(&init_data);
+	if (!pool->base.irqs)
+		goto create_fail;
+
+	/* HUBBUB */
+	pool->base.hubbub = dcn32_hubbub_create(ctx);
+	if (pool->base.hubbub == NULL) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create hubbub!\n");
+		goto create_fail;
+	}
+
+	/* HUBPs, DPPs, OPPs, TGs, ABMs */
+	for (i = 0, j = 0; i < pool->base.res_cap->num_timing_generator; i++) {
+
+		/* if pipe is disabled, skip instance of HW pipe,
+		 * i.e., skip ASIC register instance
+		 */
+		if (pipe_fuses & 1 << i)
+			continue;
+
+		/* HUBPs */
+		pool->base.hubps[j] = dcn32_hubp_create(ctx, i);
+		if (pool->base.hubps[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create hubps!\n");
+			goto create_fail;
+		}
+
+		/* DPPs */
+		pool->base.dpps[j] = dcn32_dpp_create(ctx, i);
+		if (pool->base.dpps[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create dpps!\n");
+			goto create_fail;
+		}
+
+		/* OPPs */
+		pool->base.opps[j] = dcn32_opp_create(ctx, i);
+		if (pool->base.opps[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create output pixel processor!\n");
+			goto create_fail;
+		}
+
+		/* TGs */
+		pool->base.timing_generators[j] = dcn32_timing_generator_create(
+				ctx, i);
+		if (pool->base.timing_generators[j] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error("DC: failed to create tg!\n");
+			goto create_fail;
+		}
+
+		/* ABMs */
+		pool->base.multiple_abms[j] = dmub_abm_create(ctx,
+				&abm_regs[i],
+				&abm_shift,
+				&abm_mask);
+		if (pool->base.multiple_abms[j] == NULL) {
+			dm_error("DC: failed to create abm for pipe %d!\n", i);
+			BREAK_TO_DEBUGGER();
+			goto create_fail;
+		}
+
+		/* index for resource pool arrays for next valid pipe */
+		j++;
+	}
+
+	/* PSR */
+	pool->base.psr = dmub_psr_create(ctx);
+	if (pool->base.psr == NULL) {
+		dm_error("DC: failed to create psr obj!\n");
+		BREAK_TO_DEBUGGER();
+		goto create_fail;
+	}
+
+	/* MPCCs */
+	pool->base.mpc = dcn32_mpc_create(ctx, pool->base.res_cap->num_timing_generator, pool->base.res_cap->num_mpc_3dlut);
+	if (pool->base.mpc == NULL) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create mpc!\n");
+		goto create_fail;
+	}
+
+	/* DSCs */
+	for (i = 0; i < pool->base.res_cap->num_dsc; i++) {
+		pool->base.dscs[i] = dcn32_dsc_create(ctx, i);
+		if (pool->base.dscs[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error("DC: failed to create display stream compressor %d!\n", i);
+			goto create_fail;
+		}
+	}
+
+	/* DWB */
+	if (!dcn32_dwbc_create(ctx, &pool->base)) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create dwbc!\n");
+		goto create_fail;
+	}
+
+	/* MMHUBBUB */
+	if (!dcn32_mmhubbub_create(ctx, &pool->base)) {
+		BREAK_TO_DEBUGGER();
+		dm_error("DC: failed to create mcif_wb!\n");
+		goto create_fail;
+	}
+
+	/* AUX and I2C */
+	for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
+		pool->base.engines[i] = dcn32_aux_engine_create(ctx, i);
+		if (pool->base.engines[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC:failed to create aux engine!!\n");
+			goto create_fail;
+		}
+		pool->base.hw_i2cs[i] = dcn32_i2c_hw_create(ctx, i);
+		if (pool->base.hw_i2cs[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC:failed to create hw i2c!!\n");
+			goto create_fail;
+		}
+		pool->base.sw_i2cs[i] = NULL;
+	}
+
+	/* Audio, HWSeq, Stream Encoders including HPO and virtual, MPC 3D LUTs */
+	if (!resource_construct(num_virtual_links, dc, &pool->base,
+			(!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment) ?
+			&res_create_funcs : &res_create_maximus_funcs)))
+			goto create_fail;
+
+	/* HW Sequencer init functions and Plane caps */
+	dcn32_hw_sequencer_init_functions(dc);
+
+	dc->caps.max_planes =  pool->base.pipe_count;
+
+	for (i = 0; i < dc->caps.max_planes; ++i)
+		dc->caps.planes[i] = plane_cap;
+
+	dc->cap_funcs = cap_funcs;
+
+	if (dc->ctx->dc_bios->fw_info.oem_i2c_present) {
+		ddc_init_data.ctx = dc->ctx;
+		ddc_init_data.link = NULL;
+		ddc_init_data.id.id = dc->ctx->dc_bios->fw_info.oem_i2c_obj_id;
+		ddc_init_data.id.enum_id = 0;
+		ddc_init_data.id.type = OBJECT_TYPE_GENERIC;
+		pool->base.oem_device = dal_ddc_service_create(&ddc_init_data);
+	} else {
+		pool->base.oem_device = NULL;
+	}
+
+	DC_FP_END();
+
+	return true;
+
+create_fail:
+
+	DC_FP_END();
+
+	dcn32_resource_destruct(pool);
+
+	return false;
+}
+
+struct resource_pool *dcn32_create_resource_pool(
+		const struct dc_init_data *init_data,
+		struct dc *dc)
+{
+	struct dcn32_resource_pool *pool =
+		kzalloc(sizeof(struct dcn32_resource_pool), GFP_KERNEL);
+
+	if (!pool)
+		return NULL;
+
+	if (dcn32_resource_construct(init_data->num_virtual_links, dc, pool))
+		return &pool->base;
+
+	BREAK_TO_DEBUGGER();
+	kfree(pool);
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
new file mode 100644
index 000000000000..10b58f1c724a
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef _DCN32_RESOURCE_H_
+#define _DCN32_RESOURCE_H_
+
+#include "core_types.h"
+
+#define TO_DCN32_RES_POOL(pool)\
+	container_of(pool, struct dcn32_resource_pool, base)
+
+struct dcn32_resource_pool {
+	struct resource_pool base;
+};
+
+struct resource_pool *dcn32_create_resource_pool(
+		const struct dc_init_data *init_data,
+		struct dc *dc);
+
+void dcn32_calculate_dlg_params(
+		struct dc *dc, struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int pipe_cnt,
+		int vlevel);
+
+struct panel_cntl *dcn32_panel_cntl_create(
+		const struct panel_cntl_init_data *init_data);
+
+bool dcn32_acquire_post_bldn_3dlut(
+		struct resource_context *res_ctx,
+		const struct resource_pool *pool,
+		int mpcc_id,
+		struct dc_3dlut **lut,
+		struct dc_transfer_func **shaper);
+
+bool dcn32_release_post_bldn_3dlut(
+		struct resource_context *res_ctx,
+		const struct resource_pool *pool,
+		struct dc_3dlut **lut,
+		struct dc_transfer_func **shaper);
+
+void dcn32_remove_phantom_pipes(struct dc *dc,
+		struct dc_state *context);
+
+void dcn32_add_phantom_pipes(struct dc *dc,
+		struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		unsigned int pipe_cnt,
+		unsigned int index);
+
+bool dcn32_validate_bandwidth(struct dc *dc,
+		struct dc_state *context,
+		bool fast_validate);
+
+int dcn32_populate_dml_pipes_from_context(
+	struct dc *dc, struct dc_state *context,
+	display_e2e_pipe_params_st *pipes,
+	bool fast_validate);
+
+void dcn32_calculate_wm_and_dlg(
+		struct dc *dc, struct dc_state *context,
+		display_e2e_pipe_params_st *pipes,
+		int pipe_cnt,
+		int vlevel);
+
+#endif /* _DCN32_RESOURCE_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/Makefile b/drivers/gpu/drm/amd/display/dc/dcn321/Makefile
new file mode 100644
index 000000000000..99515cb3ed31
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/Makefile
@@ -0,0 +1,34 @@
+#
+# (c) Copyright 2020 Advanced Micro Devices, Inc. All the rights reserved
+#
+#  All rights reserved.  This notice is intended as a precaution against
+#  inadvertent publication and does not imply publication or any waiver
+#  of confidentiality.  The year included in the foregoing notice is the
+#  year of creation of the work.
+#
+#  Authors: AMD
+#
+# Makefile for dcn321.
+
+DCN321 = dcn321_resource.o
+
+CFLAGS_$(AMDDALPATH)/dc/dcn321/dcn321_resource.o := -mhard-float -msse
+
+ifdef CONFIG_CC_IS_GCC
+ifeq ($(call cc-ifversion, -lt, 0701, y), y)
+IS_OLD_GCC = 1
+endif
+endif
+
+ifdef IS_OLD_GCC
+# Stack alignment mismatch, proceed with caution.
+# GCC < 7.1 cannot compile code using `double` and -mpreferred-stack-boundary=3
+# (8B stack alignment).
+CFLAGS_$(AMDDALPATH)/dc/dcn321/dcn321_resource.o += -mpreferred-stack-boundary=4
+else
+CFLAGS_$(AMDDALPATH)/dc/dcn321/dcn321_resource.o += -msse2
+endif
+
+AMD_DAL_DCN321 = $(addprefix $(AMDDALPATH)/dc/dcn321/,$(DCN321))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_DCN321)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index a8fb4ab8ced1..6bc477a212b0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -1,3 +1,4 @@
+// SPDX-License-Identifier: MIT
 /*
  * Copyright 2019 Advanced Micro Devices, Inc.
  *
@@ -23,7 +24,6 @@
  *
  */
 
-
 #include "dm_services.h"
 #include "dc.h"
 
@@ -98,28 +98,25 @@
 #define MAX_INSTANCE                                        8
 #define MAX_SEGMENT                                         6
 
-
-struct IP_BASE_INSTANCE
-{
-    unsigned int segment[MAX_SEGMENT];
+struct IP_BASE_INSTANCE {
+	unsigned int segment[MAX_SEGMENT];
 };
 
-struct IP_BASE
-{
-    struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
+struct IP_BASE {
+	struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
 };
 
 static const struct IP_BASE DCN_BASE = { { { { 0x00000012, 0x000000C0, 0x000034C0, 0x00009000, 0x02403C00, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } },
-                                        { { 0, 0, 0, 0, 0, 0 } } } };
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } },
+					{ { 0, 0, 0, 0, 0, 0 } } } };
 
 #define DC_LOGGER_INIT(logger)
-#define fixed16_to_double(x) (((double) x) / ((double) (1 << 16)))
+#define fixed16_to_double(x) (((double)x) / ((double) (1 << 16)))
 #define fixed16_to_double_to_cpu(x) fixed16_to_double(le32_to_cpu(x))
 
 #define DCN3_2_DEFAULT_DET_SIZE 256
@@ -1534,9 +1531,8 @@ static void dcn321_resource_destruct(struct dcn321_resource_pool *pool)
 			pool->base.hubps[i] = NULL;
 		}
 
-		if (pool->base.irqs != NULL) {
+		if (pool->base.irqs != NULL)
 			dal_irq_service_destroy(&pool->base.irqs);
-		}
 	}
 
 	for (i = 0; i < pool->base.res_cap->num_ddc; i++) {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.h b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.h
new file mode 100644
index 000000000000..2732085a0e88
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright 2020 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef _DCN321_RESOURCE_H_
+#define _DCN321_RESOURCE_H_
+
+#include "core_types.h"
+
+#define TO_DCN321_RES_POOL(pool)\
+	container_of(pool, struct dcn321_resource_pool, base)
+
+struct dcn321_resource_pool {
+	struct resource_pool base;
+};
+
+struct resource_pool *dcn321_create_resource_pool(
+		const struct dc_init_data *init_data,
+		struct dc *dc);
+
+#endif /* _DCN321_RESOURCE_H_ */
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 11/43] drm/amd/display: Add dependant changes for DCN32/321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (8 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 10/43] drm/amd/display: add DCN32/321 specific files for Display Core Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 12/43] drm/amd/display: Add DM support for DCN32/DCN321 Alex Deucher
                   ` (31 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

[Why&How]
This patch adds the changes needed in DC files outside the DCN32/321
specific tree.

v2: squash in updates (Alex)

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/Makefile       |   2 +
 .../drm/amd/display/dc/bios/bios_parser2.c    | 949 ++++++++++++++----
 .../dc/bios/bios_parser_types_internal2.h     |   1 +
 .../display/dc/bios/command_table_helper2.c   |   2 +
 .../gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c  |   2 +-
 drivers/gpu/drm/amd/display/dc/core/dc.c      |  14 +-
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  20 +-
 drivers/gpu/drm/amd/display/dc/dc.h           |  19 +
 .../gpu/drm/amd/display/dc/dc_bios_types.h    |   5 +
 drivers/gpu/drm/amd/display/dc/dc_hw_types.h  |   1 +
 drivers/gpu/drm/amd/display/dc/dc_stream.h    |  21 +
 drivers/gpu/drm/amd/display/dc/dce/dce_abm.h  |  45 +
 .../drm/amd/display/dc/dce/dce_clock_source.h |  15 +
 .../drm/amd/display/dc/dcn10/dcn10_hubbub.h   |  33 +
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |   5 -
 .../gpu/drm/amd/display/dc/dcn10/dcn10_optc.h |   2 +
 .../display/dc/dcn10/dcn10_stream_encoder.h   |  28 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h |  34 +-
 .../gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h |  25 +-
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    |  32 +-
 .../dc/dcn30/dcn30_dio_stream_encoder.c       |  16 +-
 .../dc/dcn30/dcn30_dio_stream_encoder.h       |  35 +
 .../gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c  |   8 +-
 .../gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h  |  16 +
 .../gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c  |  14 +-
 .../gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h  | 147 +++
 .../gpu/drm/amd/display/dc/dcn30/dcn30_optc.h |   9 +
 .../gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c |  73 +-
 .../gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h |  14 +-
 .../gpu/drm/amd/display/dc/dcn31/dcn31_optc.c |  22 +-
 .../gpu/drm/amd/display/dc/dcn31/dcn31_optc.h |   6 +-
 .../dc/dcn32/dcn32_dio_stream_encoder.c       |  36 +-
 .../dc/dcn32/dcn32_dio_stream_encoder.h       |  15 +-
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.c    |   4 +-
 .../gpu/drm/amd/display/dc/dcn32/dcn32_optc.c |  36 +-
 .../drm/amd/display/dc/dcn32/dcn32_resource.c |   2 +-
 .../drm/amd/display/dc/dml/dcn20/dcn20_fpu.c  |  59 +-
 .../gpu/drm/amd/display/dc/inc/core_types.h   |  10 +
 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h  |   9 +-
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |   3 +
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  |   7 +
 .../gpu/drm/amd/display/dc/inc/hw/mem_input.h |   2 +
 .../amd/display/dc/inc/hw/stream_encoder.h    |   4 +
 .../amd/display/dc/inc/hw/timing_generator.h  |  10 +-
 .../gpu/drm/amd/display/dc/inc/hw_sequencer.h |   2 +
 .../amd/display/dc/inc/hw_sequencer_private.h |   5 +
 drivers/gpu/drm/amd/display/dc/inc/resource.h |   7 +
 .../amd/display/include/bios_parser_types.h   |  10 +
 48 files changed, 1529 insertions(+), 307 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/Makefile b/drivers/gpu/drm/amd/display/dc/Makefile
index b4eca0236435..4de8e1871711 100644
--- a/drivers/gpu/drm/amd/display/dc/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/Makefile
@@ -38,6 +38,8 @@ DC_LIBS += dcn303
 DC_LIBS += dcn31
 DC_LIBS += dcn315
 DC_LIBS += dcn316
+DC_LIBS += dcn32
+DC_LIBS += dcn321
 endif
 
 DC_LIBS += dce120
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index 23a3b640f0ee..bbc0a5769e88 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -165,9 +165,21 @@ static uint8_t bios_parser_get_connectors_number(struct dc_bios *dcb)
 	unsigned int count = 0;
 	unsigned int i;
 
-	for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
-		if (bp->object_info_tbl.v1_4->display_path[i].encoderobjid != 0)
-			count++;
+	switch (bp->object_info_tbl.revision.minor) {
+	default:
+	case 4:
+		for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++)
+			if (bp->object_info_tbl.v1_4->display_path[i].encoderobjid != 0)
+				count++;
+
+		break;
+
+	case 5:
+		for (i = 0; i < bp->object_info_tbl.v1_5->number_of_path; i++)
+			if (bp->object_info_tbl.v1_5->display_path[i].encoderobjid != 0)
+				count++;
+
+		break;
 	}
 	return count;
 }
@@ -182,16 +194,34 @@ static struct graphics_object_id bios_parser_get_connector_id(
 	struct object_info_table *tbl = &bp->object_info_tbl;
 	struct display_object_info_table_v1_4 *v1_4 = tbl->v1_4;
 
-	if (v1_4->number_of_path > i) {
-		/* If display_objid is generic object id,  the encoderObj
-		 * /extencoderobjId should be 0
-		 */
-		if (v1_4->display_path[i].encoderobjid != 0 &&
-				v1_4->display_path[i].display_objid != 0)
-			object_id = object_id_from_bios_object_id(
+	struct display_object_info_table_v1_5 *v1_5 = tbl->v1_5;
+
+	switch (bp->object_info_tbl.revision.minor) {
+	default:
+	case 4:
+		if (v1_4->number_of_path > i) {
+			/* If display_objid is generic object id,  the encoderObj
+			 * /extencoderobjId should be 0
+			 */
+			if (v1_4->display_path[i].encoderobjid != 0 &&
+			    v1_4->display_path[i].display_objid != 0)
+				object_id = object_id_from_bios_object_id(
 					v1_4->display_path[i].display_objid);
-	}
+		}
+		break;
 
+	case 5:
+		if (v1_5->number_of_path > i) {
+			/* If display_objid is generic object id, the encoderObjId
+			 * should be 0
+			 */
+			if (v1_5->display_path[i].encoderobjid != 0 &&
+			    v1_5->display_path[i].display_objid != 0)
+				object_id = object_id_from_bios_object_id(
+					v1_5->display_path[i].display_objid);
+		}
+		break;
+	}
 	return object_id;
 }
 
@@ -201,8 +231,8 @@ static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
 {
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	unsigned int i;
-	enum bp_result  bp_result = BP_RESULT_BADINPUT;
-	struct graphics_object_id obj_id = {0};
+	enum bp_result bp_result = BP_RESULT_BADINPUT;
+	struct graphics_object_id obj_id = { 0 };
 	struct object_info_table *tbl = &bp->object_info_tbl;
 
 	if (!src_object_id)
@@ -217,37 +247,84 @@ static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
 		 * If found in for loop, should break.
 		 * DAL2 implementation may be changed too
 		 */
-		for (i = 0; i < tbl->v1_4->number_of_path; i++) {
-			obj_id = object_id_from_bios_object_id(
-			tbl->v1_4->display_path[i].encoderobjid);
-			if (object_id.type == obj_id.type &&
-					object_id.id == obj_id.id &&
-						object_id.enum_id ==
-							obj_id.enum_id) {
-				*src_object_id =
-				object_id_from_bios_object_id(0x1100);
-				/* break; */
+		switch (bp->object_info_tbl.revision.minor) {
+		default:
+		case 4:
+			for (i = 0; i < tbl->v1_4->number_of_path; i++) {
+				obj_id = object_id_from_bios_object_id(
+					tbl->v1_4->display_path[i].encoderobjid);
+				if (object_id.type == obj_id.type &&
+				    object_id.id == obj_id.id &&
+				    object_id.enum_id == obj_id.enum_id) {
+					*src_object_id =
+						object_id_from_bios_object_id(
+							0x1100);
+					/* break; */
+				}
+			}
+			bp_result = BP_RESULT_OK;
+			break;
+
+		case 5:
+			for (i = 0; i < tbl->v1_5->number_of_path; i++) {
+				obj_id = object_id_from_bios_object_id(
+					tbl->v1_5->display_path[i].encoderobjid);
+				if (object_id.type == obj_id.type &&
+				    object_id.id == obj_id.id &&
+				    object_id.enum_id == obj_id.enum_id) {
+					*src_object_id =
+						object_id_from_bios_object_id(
+							0x1100);
+					/* break; */
+				}
 			}
+			bp_result = BP_RESULT_OK;
+			break;
 		}
-		bp_result = BP_RESULT_OK;
 		break;
 	case OBJECT_TYPE_CONNECTOR:
-		for (i = 0; i < tbl->v1_4->number_of_path; i++) {
-			obj_id = object_id_from_bios_object_id(
-				tbl->v1_4->display_path[i].display_objid);
-
-			if (object_id.type == obj_id.type &&
-				object_id.id == obj_id.id &&
-					object_id.enum_id == obj_id.enum_id) {
-				*src_object_id =
-				object_id_from_bios_object_id(
-				tbl->v1_4->display_path[i].encoderobjid);
-				/* break; */
+		switch (bp->object_info_tbl.revision.minor) {
+		default:
+		case 4:
+			for (i = 0; i < tbl->v1_4->number_of_path; i++) {
+				obj_id = object_id_from_bios_object_id(
+					tbl->v1_4->display_path[i]
+						.display_objid);
+
+				if (object_id.type == obj_id.type &&
+				    object_id.id == obj_id.id &&
+				    object_id.enum_id == obj_id.enum_id) {
+					*src_object_id =
+						object_id_from_bios_object_id(
+							tbl->v1_4
+								->display_path[i]
+								.encoderobjid);
+					/* break; */
+				}
 			}
+			bp_result = BP_RESULT_OK;
+			break;
 		}
 		bp_result = BP_RESULT_OK;
 		break;
+		case 5:
+			for (i = 0; i < tbl->v1_5->number_of_path; i++) {
+				obj_id = object_id_from_bios_object_id(
+								       tbl->v1_5->display_path[i].display_objid);
+
+				if (object_id.type == obj_id.type &&
+				    object_id.id == obj_id.id &&
+				    object_id.enum_id == obj_id.enum_id) {
+					*src_object_id = object_id_from_bios_object_id(
+										       tbl->v1_5->display_path[i].encoderobjid);
+					/* break; */
+				}
+			}
+		bp_result = BP_RESULT_OK;
+		break;
+
 	default:
+		bp_result = BP_RESULT_OK;
 		break;
 	}
 
@@ -290,12 +367,55 @@ static struct atom_display_object_path_v2 *get_bios_object(
 	}
 }
 
+/* from graphics_object_id, find display path which includes the object_id */
+static struct atom_display_object_path_v3 *get_bios_object_from_path_v3(
+	struct bios_parser *bp,
+	struct graphics_object_id id)
+{
+	unsigned int i;
+	struct graphics_object_id obj_id = {0};
+
+	switch (id.type) {
+	case OBJECT_TYPE_ENCODER:
+		for (i = 0; i < bp->object_info_tbl.v1_5->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+					bp->object_info_tbl.v1_5->display_path[i].encoderobjid);
+			if (id.type == obj_id.type && id.id == obj_id.id
+					&& id.enum_id == obj_id.enum_id)
+				return &bp->object_info_tbl.v1_5->display_path[i];
+		}
+		break;
+
+	case OBJECT_TYPE_CONNECTOR:
+	case OBJECT_TYPE_GENERIC:
+		/* Both Generic and Connector Object ID
+		 * will be stored on display_objid
+		 */
+		for (i = 0; i < bp->object_info_tbl.v1_5->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+					bp->object_info_tbl.v1_5->display_path[i].display_objid);
+			if (id.type == obj_id.type && id.id == obj_id.id
+					&& id.enum_id == obj_id.enum_id)
+				return &bp->object_info_tbl.v1_5->display_path[i];
+		}
+		break;
+
+	default:
+		return NULL;
+	}
+
+	return NULL;
+}
+
 static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
 	struct graphics_object_id id,
 	struct graphics_object_i2c_info *info)
 {
 	uint32_t offset;
 	struct atom_display_object_path_v2 *object;
+
+	struct atom_display_object_path_v3 *object_path_v3;
+
 	struct atom_common_record_header *header;
 	struct atom_i2c_record *record;
 	struct atom_i2c_record dummy_record = {0};
@@ -313,12 +433,25 @@ static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
 			return BP_RESULT_NORECORD;
 	}
 
-	object = get_bios_object(bp, id);
+	switch (bp->object_info_tbl.revision.minor) {
+	case 4:
+	default:
+		object = get_bios_object(bp, id);
 
-	if (!object)
-		return BP_RESULT_BADINPUT;
+		if (!object)
+			return BP_RESULT_BADINPUT;
 
-	offset = object->disp_recordoffset + bp->object_info_tbl_offset;
+		offset = object->disp_recordoffset + bp->object_info_tbl_offset;
+		break;
+	case 5:
+		object_path_v3 = get_bios_object_from_path_v3(bp, id);
+
+		if (!object_path_v3)
+			return BP_RESULT_BADINPUT;
+
+		offset = object_path_v3->disp_recordoffset + bp->object_info_tbl_offset;
+		break;
+	}
 
 	for (;;) {
 		header = GET_IMAGE(struct atom_common_record_header, offset);
@@ -421,6 +554,41 @@ static enum bp_result get_gpio_i2c_info(
 	return BP_RESULT_OK;
 }
 
+static struct atom_hpd_int_record *get_hpd_record_for_path_v3(
+	struct bios_parser *bp,
+	struct atom_display_object_path_v3 *object)
+{
+	struct atom_common_record_header *header;
+	uint32_t offset;
+
+	if (!object) {
+		BREAK_TO_DEBUGGER(); /* Invalid object */
+		return NULL;
+	}
+
+	offset = object->disp_recordoffset + bp->object_info_tbl_offset;
+
+	for (;;) {
+		header = GET_IMAGE(struct atom_common_record_header, offset);
+
+		if (!header)
+			return NULL;
+
+		if (header->record_type == ATOM_RECORD_END_TYPE ||
+			!header->record_size)
+			break;
+
+		if (header->record_type == ATOM_HPD_INT_RECORD_TYPE
+			&& sizeof(struct atom_hpd_int_record) <=
+							header->record_size)
+			return (struct atom_hpd_int_record *) header;
+
+		offset += header->record_size;
+	}
+
+	return NULL;
+}
+
 static enum bp_result bios_parser_get_hpd_info(
 	struct dc_bios *dcb,
 	struct graphics_object_id id,
@@ -428,17 +596,32 @@ static enum bp_result bios_parser_get_hpd_info(
 {
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	struct atom_display_object_path_v2 *object;
+	struct atom_display_object_path_v3 *object_path_v3;
 	struct atom_hpd_int_record *record = NULL;
 
 	if (!info)
 		return BP_RESULT_BADINPUT;
 
-	object = get_bios_object(bp, id);
+	switch (bp->object_info_tbl.revision.minor) {
+	case 4:
+	default:
+		object = get_bios_object(bp, id);
 
-	if (!object)
-		return BP_RESULT_BADINPUT;
+		if (!object)
+			return BP_RESULT_BADINPUT;
+
+		record = get_hpd_record(bp, object);
+
+		break;
+	case 5:
+		object_path_v3 = get_bios_object_from_path_v3(bp, id);
 
-	record = get_hpd_record(bp, object);
+		if (!object_path_v3)
+			return BP_RESULT_BADINPUT;
+
+		record = get_hpd_record_for_path_v3(bp, object_path_v3);
+		break;
+	}
 
 	if (record != NULL) {
 		info->hpd_int_gpio_uid = record->pin_id;
@@ -526,25 +709,9 @@ static enum bp_result bios_parser_get_gpio_pin_info(
 		return BP_RESULT_UNSUPPORTED;
 
 	/* Temporary hard code gpio pin info */
-#if defined(FOR_SIMNOW_BOOT)
-	{
-		struct  atom_gpio_pin_assignment  gpio_pin[8] = {
-				{0x5db5, 0, 0, 1, 0},
-				{0x5db5, 8, 8, 2, 0},
-				{0x5db5, 0x10, 0x10, 3, 0},
-				{0x5db5, 0x18, 0x14, 4, 0},
-				{0x5db5, 0x1A, 0x18, 5, 0},
-				{0x5db5, 0x1C, 0x1C, 6, 0},
-		};
-
-		count = 6;
-		memmove(header->gpio_pin, gpio_pin, sizeof(gpio_pin));
-	}
-#else
 	count = (le16_to_cpu(header->table_header.structuresize)
 			- sizeof(struct atom_common_table_header))
 				/ sizeof(struct atom_gpio_pin_assignment);
-#endif
 	for (i = 0; i < count; ++i) {
 		if (header->gpio_pin[i].gpio_id != gpio_id)
 			continue;
@@ -633,19 +800,37 @@ static enum bp_result bios_parser_get_device_tag(
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	struct atom_display_object_path_v2 *object;
 
+	struct atom_display_object_path_v3 *object_path_v3;
+
+
 	if (!info)
 		return BP_RESULT_BADINPUT;
 
-	/* getBiosObject will return MXM object */
-	object = get_bios_object(bp, connector_object_id);
+	switch (bp->object_info_tbl.revision.minor) {
+	case 4:
+	default:
+		/* getBiosObject will return MXM object */
+		object = get_bios_object(bp, connector_object_id);
 
-	if (!object) {
-		BREAK_TO_DEBUGGER(); /* Invalid object id */
-		return BP_RESULT_BADINPUT;
-	}
+		if (!object) {
+			BREAK_TO_DEBUGGER(); /* Invalid object id */
+			return BP_RESULT_BADINPUT;
+		}
 
-	info->acpi_device = 0; /* BIOS no longer provides this */
-	info->dev_id = device_type_from_device_id(object->device_tag);
+		info->acpi_device = 0; /* BIOS no longer provides this */
+		info->dev_id = device_type_from_device_id(object->device_tag);
+		break;
+	case 5:
+		object_path_v3 = get_bios_object_from_path_v3(bp, connector_object_id);
+
+		if (!object_path_v3) {
+			BREAK_TO_DEBUGGER(); /* Invalid object id */
+			return BP_RESULT_BADINPUT;
+		}
+		info->acpi_device = 0; /* BIOS no longer provides this */
+		info->dev_id = device_type_from_device_id(object_path_v3->device_tag);
+		break;
+	}
 
 	return BP_RESULT_OK;
 }
@@ -803,6 +988,71 @@ static enum bp_result get_ss_info_v4_2(
 	return result;
 }
 
+static enum bp_result get_ss_info_v4_5(
+	struct bios_parser *bp,
+	uint32_t id,
+	uint32_t index,
+	struct spread_spectrum_info *ss_info)
+{
+	enum bp_result result = BP_RESULT_OK;
+	struct atom_display_controller_info_v4_5 *disp_cntl_tbl = NULL;
+
+	if (!ss_info)
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(dce_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	disp_cntl_tbl =  GET_IMAGE(struct atom_display_controller_info_v4_5,
+							DATA_TABLES(dce_info));
+	if (!disp_cntl_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+	ss_info->type.STEP_AND_DELAY_INFO = false;
+	ss_info->spread_percentage_divider = 1000;
+	/* BIOS no longer uses target clock.  Always enable for now */
+	ss_info->target_clock_range = 0xffffffff;
+
+	switch (id) {
+	case AS_SIGNAL_TYPE_DVI:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->dvi_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->dvi_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->dvi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_HDMI:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->hdmi_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->hdmi_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->hdmi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_DISPLAY_PORT:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->dp_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->dp_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->dp_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_GPU_PLL:
+		/* atom_smu_info_v4_0 does not have fields for SS for SMU Display PLL anymore.
+		 * SMU Display PLL supposed to be without spread.
+		 * Better place for it would be in atom_display_controller_info_v4_5 table.
+		 */
+		result = BP_RESULT_UNSUPPORTED;
+		break;
+	default:
+		result = BP_RESULT_UNSUPPORTED;
+		break;
+	}
+
+	return result;
+}
+
 /**
  * bios_parser_get_spread_spectrum_info
  * Get spread spectrum information from the ASIC_InternalSS_Info(ver 2.1 or
@@ -847,6 +1097,9 @@ static enum bp_result bios_parser_get_spread_spectrum_info(
 		case 3:
 		case 4:
 			return get_ss_info_v4_2(bp, signal, index, ss_info);
+		case 5:
+			return get_ss_info_v4_5(bp, signal, index, ss_info);
+
 		default:
 			ASSERT(0);
 			break;
@@ -887,6 +1140,31 @@ static enum bp_result get_soc_bb_info_v4_4(
 	return result;
 }
 
+static enum bp_result get_soc_bb_info_v4_5(
+	struct bios_parser *bp,
+	struct bp_soc_bb_info *soc_bb_info)
+{
+	enum bp_result result = BP_RESULT_OK;
+	struct atom_display_controller_info_v4_5 *disp_cntl_tbl = NULL;
+
+	if (!soc_bb_info)
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(dce_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	disp_cntl_tbl =  GET_IMAGE(struct atom_display_controller_info_v4_5,
+							DATA_TABLES(dce_info));
+	if (!disp_cntl_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+	soc_bb_info->dram_clock_change_latency_100ns = disp_cntl_tbl->max_mclk_chg_lat;
+	soc_bb_info->dram_sr_enter_exit_latency_100ns = disp_cntl_tbl->max_sr_enter_exit_lat;
+	soc_bb_info->dram_sr_exit_latency_100ns = disp_cntl_tbl->max_sr_exit_lat;
+
+	return result;
+}
+
 static enum bp_result bios_parser_get_soc_bb_info(
 	struct dc_bios *dcb,
 	struct bp_soc_bb_info *soc_bb_info)
@@ -916,6 +1194,9 @@ static enum bp_result bios_parser_get_soc_bb_info(
 		case 4:
 			result = get_soc_bb_info_v4_4(bp, soc_bb_info);
 			break;
+		case 5:
+			result = get_soc_bb_info_v4_5(bp, soc_bb_info);
+			break;
 		default:
 			break;
 		}
@@ -1023,6 +1304,30 @@ static enum bp_result get_disp_caps_v4_4(
 	return result;
 }
 
+static enum bp_result get_disp_caps_v4_5(
+	struct bios_parser *bp,
+	uint8_t *dce_caps)
+{
+	enum bp_result result = BP_RESULT_OK;
+	struct atom_display_controller_info_v4_5 *disp_cntl_tbl = NULL;
+
+	if (!dce_caps)
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(dce_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	disp_cntl_tbl = GET_IMAGE(struct atom_display_controller_info_v4_5,
+							DATA_TABLES(dce_info));
+
+	if (!disp_cntl_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+	*dce_caps = disp_cntl_tbl->display_caps;
+
+	return result;
+}
+
 static enum bp_result bios_parser_get_lttpr_interop(
 	struct dc_bios *dcb,
 	uint8_t *dce_caps)
@@ -1057,6 +1362,11 @@ static enum bp_result bios_parser_get_lttpr_interop(
 			result = get_disp_caps_v4_4(bp, dce_caps);
 			*dce_caps = !!(*dce_caps & DCE_INFO_CAPS_VBIOS_LTTPR_TRANSPARENT_ENABLE);
 			break;
+		case 5:
+			result = get_disp_caps_v4_5(bp, dce_caps);
+			*dce_caps = !!(*dce_caps & DCE_INFO_CAPS_VBIOS_LTTPR_TRANSPARENT_ENABLE);
+			break;
+
 		default:
 			break;
 		}
@@ -1102,6 +1412,10 @@ static enum bp_result bios_parser_get_lttpr_caps(
 			result = get_disp_caps_v4_4(bp, dce_caps);
 			*dce_caps = !!(*dce_caps & DCE_INFO_CAPS_LTTPR_SUPPORT_ENABLE);
 			break;
+		case 5:
+			result = get_disp_caps_v4_5(bp, dce_caps);
+			*dce_caps = !!(*dce_caps & DCE_INFO_CAPS_LTTPR_SUPPORT_ENABLE);
+			break;
+
 		default:
 			break;
 		}
@@ -1218,7 +1532,6 @@ static enum bp_result bios_parser_get_embedded_panel_info(
 		default:
 			break;
 		}
-		break;
 	default:
 		break;
 	}
@@ -1274,8 +1587,17 @@ static bool bios_parser_is_device_id_supported(
 
 	uint32_t mask = get_support_mask_for_device_id(id);
 
-	return (le16_to_cpu(bp->object_info_tbl.v1_4->supporteddevices) &
-								mask) != 0;
+	switch (bp->object_info_tbl.revision.minor) {
+	case 4:
+	default:
+		return (le16_to_cpu(bp->object_info_tbl.v1_4->supporteddevices) & mask) != 0;
+	case 5:
+		return (le16_to_cpu(bp->object_info_tbl.v1_5->supporteddevices) & mask) != 0;
+	}
+
+	return false;
 }
 
 static uint32_t bios_parser_get_ss_entry_number(
@@ -1408,12 +1730,21 @@ static void bios_parser_set_scratch_critical_state(
 	bios_set_scratch_critical_state(dcb, state);
 }
 
+struct atom_dig_transmitter_info_header_v5_3 {
+	struct atom_common_table_header table_header;
+	uint16_t dpphy_hdmi_settings_offset;
+	uint16_t dpphy_dvi_settings_offset;
+	uint16_t dpphy_dp_setting_table_offset;
+	uint16_t uniphy_xbar_settings_v2_table_offset;
+	uint16_t dpphy_internal_reg_overide_offset;
+};
+
 static enum bp_result bios_parser_get_firmware_info(
 	struct dc_bios *dcb,
 	struct dc_firmware_info *info)
 {
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	enum bp_result result = BP_RESULT_BADBIOSTABLE;
 	struct atom_common_table_header *header;
 
 	struct atom_data_revision revision;
@@ -1590,6 +1921,11 @@ static enum bp_result get_firmware_info_v3_4(
 	struct atom_data_revision revision;
 	struct atom_display_controller_info_v4_1 *dce_info_v4_1 = NULL;
 	struct atom_display_controller_info_v4_4 *dce_info_v4_4 = NULL;
+
+	struct atom_smu_info_v3_5 *smu_info_v3_5 = NULL;
+	struct atom_display_controller_info_v4_5 *dce_info_v4_5 = NULL;
+	struct atom_smu_info_v4_0 *smu_info_v4_0 = NULL;
+
 	if (!info)
 		return BP_RESULT_BADINPUT;
 
@@ -1609,6 +1945,22 @@ static enum bp_result get_firmware_info_v3_4(
 	switch (revision.major) {
 	case 4:
 		switch (revision.minor) {
+		case 5:
+			dce_info_v4_5 = GET_IMAGE(struct atom_display_controller_info_v4_5,
+							DATA_TABLES(dce_info));
+
+			if (!dce_info_v4_5)
+				return BP_RESULT_BADBIOSTABLE;
+
+			 /* 100MHz expected */
+			info->pll_info.crystal_frequency = dce_info_v4_5->dce_refclk_10khz * 10;
+			info->dp_phy_ref_clk             = dce_info_v4_5->dpphy_refclk_10khz * 10;
+			 /* 50MHz expected */
+			info->i2c_engine_ref_clk         = dce_info_v4_5->i2c_engine_refclk_10khz * 10;
+
+			/* For DCN32/321 Display PLL VCO Frequency from dce_info_v4_5 may not be reliable */
+			break;
+
 		case 4:
 			dce_info_v4_4 = GET_IMAGE(struct atom_display_controller_info_v4_4,
 							DATA_TABLES(dce_info));
@@ -1650,6 +2002,45 @@ static enum bp_result get_firmware_info_v3_4(
 					DATA_TABLES(smu_info));
 	get_atom_data_table_revision(header, &revision);
 
+	switch (revision.major) {
+	case 3:
+		switch (revision.minor) {
+		case 5:
+			smu_info_v3_5 = GET_IMAGE(struct atom_smu_info_v3_5,
+							DATA_TABLES(smu_info));
+
+			if (!smu_info_v3_5)
+				return BP_RESULT_BADBIOSTABLE;
+
+			info->default_engine_clk = smu_info_v3_5->bootup_dcefclk_10khz * 10;
+			break;
+
+		default:
+			break;
+		}
+		break;
+
+	case 4:
+		switch (revision.minor) {
+		case 0:
+			smu_info_v4_0 = GET_IMAGE(struct atom_smu_info_v4_0,
+							DATA_TABLES(smu_info));
+
+			if (!smu_info_v4_0)
+				return BP_RESULT_BADBIOSTABLE;
+
+			/* For DCN32/321 bootup DCFCLK from smu_info_v4_0 may not be reliable */
+			break;
+
+		default:
+			break;
+		}
+		break;
+
+	default:
+		break;
+	}
+
 	 // We need to convert from 10KHz units into KHz units.
 	info->default_memory_clk = firmware_info->bootup_mclk_in10khz * 10;
 
@@ -1675,6 +2066,12 @@ static enum bp_result bios_parser_get_encoder_cap_info(
 	if (!info)
 		return BP_RESULT_BADINPUT;
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	/* encoder cap record not available in v1_5 */
+	if (bp->object_info_tbl.revision.minor == 5)
+		return BP_RESULT_NORECORD;
+#endif
+
 	object = get_bios_object(bp, object_id);
 
 	if (!object)
@@ -1781,6 +2178,42 @@ static struct atom_disp_connector_caps_record *get_disp_connector_caps_record(
 	return NULL;
 }
 
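+/* Walk the record list of a v3 display path and return its connector caps
+ * record, or NULL if the list ends without one.
+ */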
+static struct atom_connector_caps_record *get_connector_caps_record(
+	struct bios_parser *bp,
+	struct atom_display_object_path_v3 *object)
+{
+	struct atom_common_record_header *header;
+	uint32_t offset;
+
+	if (!object) {
+		BREAK_TO_DEBUGGER(); /* Invalid object */
+		return NULL;
+	}
+
+	offset = object->disp_recordoffset + bp->object_info_tbl_offset;
+
+	for (;;) {
+		header = GET_IMAGE(struct atom_common_record_header, offset);
+
+		if (!header)
+			return NULL;
+
+		offset += header->record_size;
+
+		if (header->record_type == ATOM_RECORD_END_TYPE ||
+				!header->record_size)
+			break;
+
+		if (header->record_type != ATOM_CONNECTOR_CAP_RECORD_TYPE)
+			continue;
+
+		if (sizeof(struct atom_connector_caps_record) <= header->record_size)
+			return (struct atom_connector_caps_record *)header;
+	}
+
+	return NULL;
+}
+
 static enum bp_result bios_parser_get_disp_connector_caps_info(
 	struct dc_bios *dcb,
 	struct graphics_object_id object_id,
@@ -1788,25 +2221,116 @@ static enum bp_result bios_parser_get_disp_connector_caps_info(
 {
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	struct atom_display_object_path_v2 *object;
+
+	struct atom_display_object_path_v3 *object_path_v3;
+	struct atom_connector_caps_record *record_path_v3;
+
 	struct atom_disp_connector_caps_record *record = NULL;
 
 	if (!info)
 		return BP_RESULT_BADINPUT;
 
-	object = get_bios_object(bp, object_id);
+	switch (bp->object_info_tbl.revision.minor) {
+	case 4:
+	default:
+		object = get_bios_object(bp, object_id);
 
-	if (!object)
+		if (!object)
+			return BP_RESULT_BADINPUT;
+
+		record = get_disp_connector_caps_record(bp, object);
+		if (!record)
+			return BP_RESULT_NORECORD;
+
+		info->INTERNAL_DISPLAY =
+			(record->connectcaps & ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY) ? 1 : 0;
+		info->INTERNAL_DISPLAY_BL =
+			(record->connectcaps & ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY_BL) ? 1 : 0;
+		break;
+	case 5:
+		object_path_v3 = get_bios_object_from_path_v3(bp, object_id);
+
+		if (!object_path_v3)
+			return BP_RESULT_BADINPUT;
+
+		record_path_v3 = get_connector_caps_record(bp, object_path_v3);
+		if (!record_path_v3)
+			return BP_RESULT_NORECORD;
+
+		info->INTERNAL_DISPLAY = (record_path_v3->connector_caps & ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY)
+									? 1 : 0;
+		info->INTERNAL_DISPLAY_BL = (record_path_v3->connector_caps & ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY_BL)
+										? 1 : 0;
+		break;
+	}
+
+	return BP_RESULT_OK;
+}
+
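+/* Walk the record list of a v3 display path and return its connector speed
+ * record (ATOM_CONNECTOR_SPEED_UPTO), or NULL if none is present.
+ */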
+static struct atom_connector_speed_record *get_connector_speed_cap_record(
+	struct bios_parser *bp,
+	struct atom_display_object_path_v3 *object)
+{
+	struct atom_common_record_header *header;
+	uint32_t offset;
+
+	if (!object) {
+		BREAK_TO_DEBUGGER(); /* Invalid object */
+		return NULL;
+	}
+
+	offset = object->disp_recordoffset + bp->object_info_tbl_offset;
+
+	for (;;) {
+		header = GET_IMAGE(struct atom_common_record_header, offset);
+
+		if (!header)
+			return NULL;
+
+		offset += header->record_size;
+
+		if (header->record_type == ATOM_RECORD_END_TYPE ||
+				!header->record_size)
+			break;
+
+		if (header->record_type != ATOM_CONNECTOR_SPEED_UPTO)
+			continue;
+
+		if (sizeof(struct atom_connector_speed_record) <= header->record_size)
+			return (struct atom_connector_speed_record *)header;
+	}
+
+	return NULL;
+}
+
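+/* Translate the connector's maximum link rate from the v1_5 speed record
+ * into per-rate capability flags (DP HBR2/HBR3/UHBR10/13.5/20, HDMI 6G).
+ */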
+static enum bp_result bios_parser_get_connector_speed_cap_info(
+	struct dc_bios *dcb,
+	struct graphics_object_id object_id,
+	struct bp_connector_speed_cap_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_display_object_path_v3 *object_path_v3;
+	struct atom_connector_speed_record *record;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	object_path_v3 = get_bios_object_from_path_v3(bp, object_id);
+
+	if (!object_path_v3)
 		return BP_RESULT_BADINPUT;
 
-	record = get_disp_connector_caps_record(bp, object);
+	record = get_connector_speed_cap_record(bp, object_path_v3);
 	if (!record)
 		return BP_RESULT_NORECORD;
 
-	info->INTERNAL_DISPLAY = (record->connectcaps & ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY)
-									? 1 : 0;
-	info->INTERNAL_DISPLAY_BL = (record->connectcaps & ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY_BL)
-											? 1 : 0;
-
+	info->DP_HBR2_EN = (record->connector_max_speed >= 5400) ? 1 : 0;
+	info->DP_HBR3_EN = (record->connector_max_speed >= 8100) ? 1 : 0;
+	info->HDMI_6GB_EN = (record->connector_max_speed >= 5940) ? 1 : 0;
+	info->DP_UHBR10_EN = (record->connector_max_speed >= 10000) ? 1 : 0;
+	info->DP_UHBR13_5_EN = (record->connector_max_speed >= 13500) ? 1 : 0;
+	info->DP_UHBR20_EN = (record->connector_max_speed >= 20000) ? 1 : 0;
 	return BP_RESULT_OK;
 }
 
@@ -1815,7 +2339,7 @@ static enum bp_result get_vram_info_v23(
 	struct dc_vram_info *info)
 {
 	struct atom_vram_info_header_v2_3 *info_v23;
 	enum bp_result result = BP_RESULT_OK;
 
 	info_v23 = GET_IMAGE(struct atom_vram_info_header_v2_3,
 						DATA_TABLES(vram_info));
@@ -1834,7 +2358,7 @@ static enum bp_result get_vram_info_v24(
 	struct dc_vram_info *info)
 {
 	struct atom_vram_info_header_v2_4 *info_v24;
 	enum bp_result result = BP_RESULT_OK;
 
 	info_v24 = GET_IMAGE(struct atom_vram_info_header_v2_4,
 						DATA_TABLES(vram_info));
@@ -1853,7 +2377,7 @@ static enum bp_result get_vram_info_v25(
 	struct dc_vram_info *info)
 {
 	struct atom_vram_info_header_v2_5 *info_v25;
 	enum bp_result result = BP_RESULT_OK;
 
 	info_v25 = GET_IMAGE(struct atom_vram_info_header_v2_5,
 						DATA_TABLES(vram_info));
@@ -1878,7 +2402,7 @@ static enum bp_result get_vram_info_v25(
  * integrated_info *info - [out] store and output integrated info
  *
  * @return
  * enum bp_result - BP_RESULT_OK if information is available,
  *                  BP_RESULT_BADBIOSTABLE otherwise.
  */
 static enum bp_result get_integrated_info_v11(
@@ -2369,17 +2893,19 @@ static enum bp_result get_integrated_info_v2_2(
  * integrated_info *info - [out] store and output integrated info
  *
  * @return
  * enum bp_result - BP_RESULT_OK if information is available,
  *                  BP_RESULT_BADBIOSTABLE otherwise.
  */
 static enum bp_result construct_integrated_info(
 	struct bios_parser *bp,
 	struct integrated_info *info)
 {
 	enum bp_result result = BP_RESULT_BADBIOSTABLE;
 
 	struct atom_common_table_header *header;
 	struct atom_data_revision revision;
 	uint32_t i;
 	uint32_t j;
 
@@ -2427,8 +2953,10 @@ static enum bp_result construct_integrated_info(
 				info->disp_clk_voltage[j-1].max_supported_clk
 				) {
 				/* swap j and j - 1*/
 				swap(info->disp_clk_voltage[j - 1],
 				     info->disp_clk_voltage[j]);
 			}
 		}
 	}
@@ -2441,7 +2969,7 @@ static enum bp_result bios_parser_get_vram_info(
 		struct dc_vram_info *info)
 {
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	enum bp_result result = BP_RESULT_BADBIOSTABLE;
 	struct atom_common_table_header *header;
 	struct atom_data_revision revision;
 
@@ -2507,7 +3035,7 @@ static enum bp_result update_slot_layout_info(
 	struct atom_display_object_path_v2 *object;
 	struct atom_bracket_layout_record *record;
 	struct atom_common_record_header *record_header;
 	enum bp_result result;
 	struct bios_parser *bp;
 	struct object_info_table *tbl;
 	struct display_object_info_table_v1_4 *v1_4;
@@ -2613,6 +3141,104 @@ static enum bp_result update_slot_layout_info(
 	return result;
 }
 
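+/* v1_5 counterpart of update_slot_layout_info(): find the bracket layout v2
+ * record for display path @i and fill in the slot and connector geometry.
+ */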
+static enum bp_result update_slot_layout_info_v2(
+	struct dc_bios *dcb,
+	unsigned int i,
+	struct slot_layout_info *slot_layout_info)
+{
+	unsigned int record_offset;
+	struct atom_display_object_path_v3 *object;
+	struct atom_bracket_layout_record_v2 *record;
+	struct atom_common_record_header *record_header;
+	enum bp_result result;
+	struct bios_parser *bp;
+	struct object_info_table *tbl;
+	struct display_object_info_table_v1_5 *v1_5;
+	struct graphics_object_id connector_id;
+
+	record = NULL;
+	record_header = NULL;
+	result = BP_RESULT_NORECORD;
+
+	bp = BP_FROM_DCB(dcb);
+	tbl = &bp->object_info_tbl;
+	v1_5 = tbl->v1_5;
+
+	object = &v1_5->display_path[i];
+	record_offset = (unsigned int)
+		(object->disp_recordoffset) +
+		(unsigned int)(bp->object_info_tbl_offset);
+
+	for (;;) {
+
+		record_header = (struct atom_common_record_header *)
+			GET_IMAGE(struct atom_common_record_header,
+			record_offset);
+		if (record_header == NULL) {
+			result = BP_RESULT_BADBIOSTABLE;
+			break;
+		}
+
+		/* the end of the list */
+		if (record_header->record_type == ATOM_RECORD_END_TYPE ||
+			record_header->record_size == 0)	{
+			break;
+		}
+
+		if (record_header->record_type ==
+			ATOM_BRACKET_LAYOUT_V2_RECORD_TYPE &&
+			sizeof(struct atom_bracket_layout_record_v2)
+			<= record_header->record_size) {
+			record = (struct atom_bracket_layout_record_v2 *)
+				(record_header);
+			result = BP_RESULT_OK;
+			break;
+		}
+
+		record_offset += record_header->record_size;
+	}
+
+	/* return if the record not found */
+	if (result != BP_RESULT_OK)
+		return result;
+
+	/* get slot sizes */
+	connector_id = object_id_from_bios_object_id(object->display_objid);
+
+	slot_layout_info->length = record->bracketlen;
+	slot_layout_info->width = record->bracketwidth;
+	slot_layout_info->num_of_connectors = v1_5->number_of_path;
+	slot_layout_info->connectors[i].position = record->conn_num;
+	slot_layout_info->connectors[i].connector_id = connector_id;
+
+	switch (connector_id.id) {
+	case CONNECTOR_ID_SINGLE_LINK_DVID:
+	case CONNECTOR_ID_DUAL_LINK_DVID:
+		slot_layout_info->connectors[i].connector_type = CONNECTOR_LAYOUT_TYPE_DVI_D;
+		slot_layout_info->connectors[i].length = CONNECTOR_SIZE_DVI;
+		break;
+
+	case CONNECTOR_ID_HDMI_TYPE_A:
+		slot_layout_info->connectors[i].connector_type = CONNECTOR_LAYOUT_TYPE_HDMI;
+		slot_layout_info->connectors[i].length = CONNECTOR_SIZE_HDMI;
+		break;
+
+	case CONNECTOR_ID_DISPLAY_PORT:
+		if (record->mini_type == MINI_TYPE_NORMAL) {
+			slot_layout_info->connectors[i].connector_type = CONNECTOR_LAYOUT_TYPE_DP;
+			slot_layout_info->connectors[i].length = CONNECTOR_SIZE_DP;
+		} else {
+			slot_layout_info->connectors[i].connector_type = CONNECTOR_LAYOUT_TYPE_MINI_DP;
+			slot_layout_info->connectors[i].length = CONNECTOR_SIZE_MINI_DP;
+		}
+		break;
+
+	default:
+		slot_layout_info->connectors[i].connector_type = CONNECTOR_LAYOUT_TYPE_UNKNOWN;
+		slot_layout_info->connectors[i].length = CONNECTOR_SIZE_UNKNOWN;
+	}
+	return result;
+}
 
 static enum bp_result get_bracket_layout_record(
 	struct dc_bios *dcb,
@@ -2621,9 +3247,10 @@ static enum bp_result get_bracket_layout_record(
 {
 	unsigned int i;
 	struct bios_parser *bp = BP_FROM_DCB(dcb);
 	enum bp_result result;
 	struct object_info_table *tbl;
 	struct display_object_info_table_v1_4 *v1_4;
+	struct display_object_info_table_v1_5 *v1_5;
 
 	if (slot_layout_info == NULL) {
 		DC_LOG_DETECTION_EDID_PARSER("Invalid slot_layout_info\n");
@@ -2633,14 +3260,21 @@ static enum bp_result get_bracket_layout_record(
 	v1_4 = tbl->v1_4;
+	v1_5 = tbl->v1_5;
 
 	result = BP_RESULT_NORECORD;
-	for (i = 0; i < v1_4->number_of_path; ++i)	{
-
-		if (bracket_layout_id ==
-			v1_4->display_path[i].display_objid) {
-			result = update_slot_layout_info(dcb, i,
-				slot_layout_info);
+	switch (bp->object_info_tbl.revision.minor) {
+		case 4:
+		default:
+			for (i = 0; i < v1_4->number_of_path; ++i)	{
+				if (bracket_layout_id ==
+					v1_4->display_path[i].display_objid) {
+					result = update_slot_layout_info(dcb, i, slot_layout_info);
+					break;
+				}
+			}
+		    break;
+		case 5:
+			for (i = 0; i < v1_5->number_of_path; ++i)
+				result = update_slot_layout_info_v2(dcb, i, slot_layout_info);
 			break;
-		}
 	}
 	return result;
 }
@@ -2650,7 +3284,10 @@ static enum bp_result bios_get_board_layout_info(
 	struct board_layout_info *board_layout_info)
 {
 	unsigned int i;
 	enum bp_result record_result;
+	struct bios_parser *bp;
 
 	const unsigned int slot_index_to_vbios_id[MAX_BOARD_SLOTS] = {
 		GENERICOBJECT_BRACKET_LAYOUT_ENUM_ID1,
@@ -2658,6 +3295,9 @@ static enum bp_result bios_get_board_layout_info(
 		0, 0
 	};
 
+
+	bp = BP_FROM_DCB(dcb);
+
 	if (board_layout_info == NULL) {
 		DC_LOG_DETECTION_EDID_PARSER("Invalid board_layout_info\n");
 		return BP_RESULT_BADINPUT;
@@ -2692,99 +3332,6 @@ static uint16_t bios_parser_pack_data_tables(
 	struct dc_bios *dcb,
 	void *dst)
 {
-#ifdef PACK_BIOS_DATA
-	struct bios_parser *bp = BP_FROM_DCB(dcb);
-	struct atom_rom_header_v2_2 *rom_header = NULL;
-	struct atom_rom_header_v2_2 *packed_rom_header = NULL;
-	struct atom_common_table_header *data_tbl_header = NULL;
-	struct atom_master_list_of_data_tables_v2_1 *data_tbl_list = NULL;
-	struct atom_master_data_table_v2_1 *packed_master_data_tbl = NULL;
-	struct atom_data_revision tbl_rev = {0};
-	uint16_t *rom_header_offset = NULL;
-	const uint8_t *bios = bp->base.bios;
-	uint8_t *bios_dst = (uint8_t *)dst;
-	uint16_t packed_rom_header_offset;
-	uint16_t packed_masterdatatable_offset;
-	uint16_t packed_data_tbl_offset;
-	uint16_t data_tbl_offset;
-	unsigned int i;
-
-	rom_header_offset =
-		GET_IMAGE(uint16_t, OFFSET_TO_ATOM_ROM_HEADER_POINTER);
-
-	if (!rom_header_offset)
-		return 0;
-
-	rom_header = GET_IMAGE(struct atom_rom_header_v2_2, *rom_header_offset);
-
-	if (!rom_header)
-		return 0;
-
-	get_atom_data_table_revision(&rom_header->table_header, &tbl_rev);
-	if (!(tbl_rev.major >= 2 && tbl_rev.minor >= 2))
-		return 0;
-
-	get_atom_data_table_revision(&bp->master_data_tbl->table_header, &tbl_rev);
-	if (!(tbl_rev.major >= 2 && tbl_rev.minor >= 1))
-		return 0;
-
-	packed_rom_header_offset =
-		OFFSET_TO_ATOM_ROM_HEADER_POINTER + sizeof(*rom_header_offset);
-
-	packed_masterdatatable_offset =
-		packed_rom_header_offset + rom_header->table_header.structuresize;
-
-	packed_data_tbl_offset =
-		packed_masterdatatable_offset +
-		bp->master_data_tbl->table_header.structuresize;
-
-	packed_rom_header =
-		(struct atom_rom_header_v2_2 *)(bios_dst + packed_rom_header_offset);
-
-	packed_master_data_tbl =
-		(struct atom_master_data_table_v2_1 *)(bios_dst +
-		packed_masterdatatable_offset);
-
-	memcpy(bios_dst, bios, OFFSET_TO_ATOM_ROM_HEADER_POINTER);
-
-	*((uint16_t *)(bios_dst + OFFSET_TO_ATOM_ROM_HEADER_POINTER)) =
-		packed_rom_header_offset;
-
-	memcpy(bios_dst + packed_rom_header_offset, rom_header,
-		rom_header->table_header.structuresize);
-
-	packed_rom_header->masterdatatable_offset = packed_masterdatatable_offset;
-
-	memcpy(&packed_master_data_tbl->table_header,
-		&bp->master_data_tbl->table_header,
-		sizeof(bp->master_data_tbl->table_header));
-
-	data_tbl_list = &bp->master_data_tbl->listOfdatatables;
-
-	/* Each data table offset in data table list is 2 bytes,
-	 * we can use that to iterate through listOfdatatables
-	 * without knowing the name of each member.
-	 */
-	for (i = 0; i < sizeof(*data_tbl_list)/sizeof(uint16_t); i++) {
-		data_tbl_offset = *((uint16_t *)data_tbl_list + i);
-
-		if (data_tbl_offset) {
-			data_tbl_header =
-				(struct atom_common_table_header *)(bios + data_tbl_offset);
-
-			memcpy(bios_dst + packed_data_tbl_offset, data_tbl_header,
-				data_tbl_header->structuresize);
-
-			*((uint16_t *)&packed_master_data_tbl->listOfdatatables + i) =
-				packed_data_tbl_offset;
-
-			packed_data_tbl_offset += data_tbl_header->structuresize;
-		} else {
-			*((uint16_t *)&packed_master_data_tbl->listOfdatatables + i) = 0;
-		}
-	}
-	return packed_data_tbl_offset;
-#endif
 	// TODO: There is data bytes alignment issue, disable it for now.
 	return 0;
 }
@@ -2814,6 +3361,13 @@ static struct atom_dc_golden_table_v1 *bios_get_golden_table(
 			dc_golden_offset = DATA_TABLES(dce_info) + disp_cntl_tbl_4_4->dc_golden_table_offset;
 			*dc_golden_table_ver = disp_cntl_tbl_4_4->dc_golden_table_ver;
 			break;
+		case 5:
+		default:
+			/* For atom_display_controller_info_v4_5 there is no need to get golden table from
+			 * dc_golden_table_offset as all these fields previously in golden table used for AUX
+			 * pre-charge settings are now available directly in atom_display_controller_info_v4_5.
+			 */
+			break;
 		}
 		break;
 	}
@@ -2916,6 +3470,7 @@ static const struct dc_vbios_funcs vbios_funcs = {
 	.bios_parser_destroy = firmware_parser_destroy,
 
 	.get_board_layout_info = bios_get_board_layout_info,
+	/* TODO: use this fn in hw init? */
 	.pack_data_tables = bios_parser_pack_data_tables,
 
 	.get_atom_dc_golden_table = bios_get_atom_dc_golden_table,
@@ -2929,6 +3484,8 @@ static const struct dc_vbios_funcs vbios_funcs = {
 	.get_lttpr_caps = bios_parser_get_lttpr_caps,
 
 	.get_lttpr_interop = bios_parser_get_lttpr_interop,
+
+	.get_connector_speed_cap_info = bios_parser_get_connector_speed_cap_info,
 };
 
 static bool bios_parser2_construct(
@@ -3002,6 +3559,16 @@ static bool bios_parser2_construct(
 			return false;
 
 		bp->object_info_tbl.v1_4 = tbl_v1_4;
+	} else if (bp->object_info_tbl.revision.major == 1
+		&& bp->object_info_tbl.revision.minor == 5) {
+		struct display_object_info_table_v1_5 *tbl_v1_5;
+
+		tbl_v1_5 = GET_IMAGE(struct display_object_info_table_v1_5,
+			bp->object_info_tbl_offset);
+		if (!tbl_v1_5)
+			return false;
+
+		bp->object_info_tbl.v1_5 = tbl_v1_5;
 	} else {
 		ASSERT(0);
 		return false;
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h b/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
index bf1f5c86e65c..41d02d473082 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
@@ -40,6 +40,7 @@ struct object_info_table {
 	struct atom_data_revision revision;
 	union {
 		struct display_object_info_table_v1_4 *v1_4;
+		struct display_object_info_table_v1_5 *v1_5;
 	};
 };
 
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
index f3792286f571..f22593bcb862 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
@@ -77,6 +77,8 @@ bool dal_bios_parser_init_cmd_tbl_helper2(
 	case DCN_VERSION_3_1:
 	case DCN_VERSION_3_15:
 	case DCN_VERSION_3_16:
+	case DCN_VERSION_3_2:
+	case DCN_VERSION_3_21:
 		*h = dal_cmd_tbl_helper_dce112_get_table2();
 		return true;
 
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
index e803e59abd56..d145dcbca778 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
@@ -328,8 +328,8 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
 	    dcn32_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
 	    return &clk_mgr->base;
 	    break;
-#endif
 	}
+#endif
 	default:
 		ASSERT(0); /* Unknown Asic */
 		break;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index f14449401188..e3e3b2791632 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -3054,11 +3054,15 @@ static void commit_planes_for_stream(struct dc *dc,
 
 	}
 
-	if (should_lock_all_pipes && dc->hwss.interdependent_update_lock) {
-		dc->hwss.interdependent_update_lock(dc, context, false);
-	} else {
-		dc->hwss.pipe_control_lock(dc, top_pipe_to_program, false);
-	}
+#ifdef CONFIG_DRM_AMD_DC_DCN
+	if (update_type != UPDATE_TYPE_FAST && dc->hwss.commit_subvp_config)
+		dc->hwss.commit_subvp_config(dc, context);
+#endif
+	if (should_lock_all_pipes && dc->hwss.interdependent_update_lock)
+		dc->hwss.interdependent_update_lock(dc, context, false);
+	else
+		dc->hwss.pipe_control_lock(dc, top_pipe_to_program, false);
 
 	if ((update_type != UPDATE_TYPE_FAST) && stream->update_flags.bits.dsc_changed)
 		if (top_pipe_to_program->stream_res.tg->funcs->lock_doublebuffer_enable) {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 6774dd8bb53e..b087452e4590 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -67,6 +67,8 @@
 #include "dcn31/dcn31_resource.h"
 #include "dcn315/dcn315_resource.h"
 #include "dcn316/dcn316_resource.h"
+#include "../dcn32/dcn32_resource.h"
+#include "../dcn321/dcn321_resource.h"
 
 #define DC_LOGGER_INIT(logger)
 
@@ -162,7 +164,11 @@ enum dce_version resource_parse_asic_id(struct hw_asic_id asic_id)
 		if (ASICREV_IS_GC_10_3_7(asic_id.hw_internal_rev))
 			dc_version = DCN_VERSION_3_16;
 		break;
-
+	case AMDGPU_FAMILY_GC_11_0_0:
+		dc_version = DCN_VERSION_3_2;
+		if (ASICREV_IS_GC_11_0_2(asic_id.hw_internal_rev))
+			dc_version = DCN_VERSION_3_21;
+		break;
 	default:
 		dc_version = DCE_VERSION_UNKNOWN;
 		break;
@@ -258,6 +264,12 @@ struct resource_pool *dc_create_resource_pool(struct dc  *dc,
 	case DCN_VERSION_3_16:
 		res_pool = dcn316_create_resource_pool(init_data, dc);
 		break;
+	case DCN_VERSION_3_2:
+		res_pool = dcn32_create_resource_pool(init_data, dc);
+		break;
+	case DCN_VERSION_3_21:
+		res_pool = dcn321_create_resource_pool(init_data, dc);
+		break;
 #endif
 	default:
 		break;
@@ -1982,6 +1994,11 @@ enum dc_status dc_remove_stream_from_ctx(
 				dc->res_pool,
 			del_pipe->stream_res.stream_enc,
 			false);
+	/* Release link encoder from stream in new dc_state. */
+	if (dc->res_pool->funcs->link_enc_unassign)
+		dc->res_pool->funcs->link_enc_unassign(new_ctx, del_pipe->stream);
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
 	if (is_dp_128b_132b_signal(del_pipe)) {
 		update_hpo_dp_stream_engine_usage(
 			&new_ctx->res_ctx, dc->res_pool,
@@ -1989,6 +2006,7 @@ enum dc_status dc_remove_stream_from_ctx(
 			false);
 		remove_hpo_dp_link_enc_from_ctx(&new_ctx->res_ctx, del_pipe, del_pipe->stream);
 	}
+#endif
 
 	if (del_pipe->stream_res.audio)
 		update_audio_usage(
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 2eabcdef8903..b6decdf820fa 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -162,6 +162,10 @@ struct dc_color_caps {
 	struct mpc_color_caps mpc;
 };
 
+struct dc_dmub_caps {
+	bool psr;
+};
+
 struct dc_caps {
 	uint32_t max_streams;
 	uint32_t max_links;
@@ -196,12 +200,22 @@ struct dc_caps {
 	unsigned int cursor_cache_size;
 	struct dc_plane_cap planes[MAX_PLANES];
 	struct dc_color_caps color;
+	struct dc_dmub_caps dmub_caps;
 	bool dp_hpo;
 	bool hdmi_frl_pcon_support;
 	bool edp_dsc_support;
 	bool vbios_lttpr_aware;
 	bool vbios_lttpr_enable;
 	uint32_t max_otg_num;
+#ifdef CONFIG_DRM_AMD_DC_DCN
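+	/* MALL/CAB cache geometry and SubVP timing parameters for DCN32/321 */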
+	uint32_t max_cab_allocation_bytes;
+	uint32_t cache_line_size;
+	uint32_t cache_num_ways;
+	uint16_t subvp_fw_processing_delay_us;
+	uint16_t subvp_prefetch_end_to_mall_start_us;
+	uint16_t subvp_pstate_allow_width_us;
+	uint16_t subvp_vertical_int_margin_us;
+#endif
 };
 
 struct dc_bug_wa {
@@ -425,6 +439,8 @@ struct dc_clocks {
 	 */
 	bool prev_p_state_change_support;
 	bool fclk_prev_p_state_change_support;
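+	/* MALL cache ways requested for the new and previous state (DCN32 CAB) */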
+	int num_ways;
+	int prev_num_ways;
 	enum dtm_pstate dtm_level;
 	int max_supported_dppclk_khz;
 	int max_supported_dispclk_khz;
@@ -719,6 +735,9 @@ struct dc_debug_options {
 	bool enable_z9_disable_interface;
 	bool enable_sw_cntl_psr;
 	union dpia_debug_options dpia_debug;
+	bool force_disable_subvp;
+	bool force_subvp_mclk_switch;
+	bool force_usr_allow;
 	bool apply_vendor_specific_lttpr_wa;
 	bool extended_blank_optimization;
 	union aux_wake_wa_options aux_wake_wa;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_bios_types.h b/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
index 67abda44eb1f..260ac4458870 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_bios_types.h
@@ -156,6 +156,11 @@ struct dc_vbios_funcs {
 	enum bp_result (*get_lttpr_interop)(
 			struct dc_bios *dcb,
 			uint8_t *dce_caps);
+
+	enum bp_result (*get_connector_speed_cap_info)(
+		struct dc_bios *bios,
+		struct graphics_object_id object_id,
+		struct bp_connector_speed_cap_info *info);
 };
 
 struct bios_registers {
diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
index aa7e3a07191d..d75416dc9fae 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
@@ -780,6 +780,7 @@ struct dc_crtc_timing {
 	uint32_t v_sync_width;
 
 	uint32_t pix_clk_100hz;
+	uint32_t min_refresh_in_uhz;
 
 	uint32_t vic;
 	uint32_t hdmi_vic;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
index 58941f4defb3..772b4a61f166 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
@@ -145,6 +145,24 @@ struct test_pattern {
 	unsigned int cust_pattern_size;
 };
 
+#ifdef CONFIG_DRM_AMD_DC_DCN
+#define SUBVP_DRR_MARGIN_US 500 // 500us for DRR margin (SubVP + DRR)
+
+enum mall_stream_type {
+	SUBVP_NONE, // subvp not in use
+	SUBVP_MAIN, // subvp in use, this stream is main stream
+	SUBVP_PHANTOM, // subvp in use, this stream is a phantom stream
+};
+
+struct mall_stream_config {
+	/* MALL stream config to indicate if the stream is phantom or not.
+	 * We will use a phantom stream to indicate that the pipe is phantom.
+	 */
+	enum mall_stream_type type;
+	struct dc_stream_state *paired_stream;	// master / slave stream
+};
+#endif
+
 struct dc_stream_state {
 	// sink is deprecated, new code should not reference
 	// this pointer
@@ -255,6 +273,9 @@ struct dc_stream_state {
 
 	bool has_non_synchronizable_pclk;
 	bool vblank_synchronized;
+#ifdef CONFIG_DRM_AMD_DC_DCN
+	struct mall_stream_config mall_stream_config;
+#endif
 };
 
 #define ABM_LEVEL_IMMEDIATE_DISABLE 255
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.h b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.h
index b699d1b2ba83..e6c06325742a 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.h
@@ -128,6 +128,21 @@
 	SRI(DC_ABM1_ACE_THRES_12, ABM, id), \
 	NBIO_SR(BIOS_SCRATCH_2)
 
+#define ABM_DCN32_REG_LIST(id)\
+	SRI(DC_ABM1_HG_SAMPLE_RATE, ABM, id), \
+	SRI(DC_ABM1_LS_SAMPLE_RATE, ABM, id), \
+	SRI(BL1_PWM_BL_UPDATE_SAMPLE_RATE, ABM, id), \
+	SRI(DC_ABM1_HG_MISC_CTRL, ABM, id), \
+	SRI(DC_ABM1_IPCSC_COEFF_SEL, ABM, id), \
+	SRI(BL1_PWM_CURRENT_ABM_LEVEL, ABM, id), \
+	SRI(BL1_PWM_TARGET_ABM_LEVEL, ABM, id), \
+	SRI(BL1_PWM_USER_LEVEL, ABM, id), \
+	SRI(DC_ABM1_LS_MIN_MAX_PIXEL_VALUE_THRES, ABM, id), \
+	SRI(DC_ABM1_HGLS_REG_READ_PROGRESS, ABM, id), \
+	SRI(DC_ABM1_ACE_OFFSET_SLOPE_0, ABM, id), \
+	SRI(DC_ABM1_ACE_THRES_12, ABM, id), \
+	NBIO_SR(BIOS_SCRATCH_2)
+
 #define ABM_SF(reg_name, field_name, post_fix)\
 	.field_name = reg_name ## __ ## field_name ## post_fix
 
@@ -203,6 +218,36 @@
 
 #define ABM_MASK_SH_LIST_DCN30(mask_sh) ABM_MASK_SH_LIST_DCN10(mask_sh)
 
+#define ABM_MASK_SH_LIST_DCN32(mask_sh) \
+	ABM_SF(ABM0_DC_ABM1_HG_MISC_CTRL, \
+			ABM1_HG_NUM_OF_BINS_SEL, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_HG_MISC_CTRL, \
+			ABM1_HG_VMAX_SEL, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_HG_MISC_CTRL, \
+			ABM1_HG_BIN_BITWIDTH_SIZE_SEL, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_IPCSC_COEFF_SEL, \
+			ABM1_IPCSC_COEFF_SEL_R, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_IPCSC_COEFF_SEL, \
+			ABM1_IPCSC_COEFF_SEL_G, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_IPCSC_COEFF_SEL, \
+			ABM1_IPCSC_COEFF_SEL_B, mask_sh), \
+	ABM_SF(ABM0_BL1_PWM_CURRENT_ABM_LEVEL, \
+			BL1_PWM_CURRENT_ABM_LEVEL, mask_sh), \
+	ABM_SF(ABM0_BL1_PWM_TARGET_ABM_LEVEL, \
+			BL1_PWM_TARGET_ABM_LEVEL, mask_sh), \
+	ABM_SF(ABM0_BL1_PWM_USER_LEVEL, \
+			BL1_PWM_USER_LEVEL, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_LS_MIN_MAX_PIXEL_VALUE_THRES, \
+			ABM1_LS_MIN_PIXEL_VALUE_THRES, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_LS_MIN_MAX_PIXEL_VALUE_THRES, \
+			ABM1_LS_MAX_PIXEL_VALUE_THRES, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_HGLS_REG_READ_PROGRESS, \
+			ABM1_HG_REG_READ_MISSED_FRAME_CLEAR, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_HGLS_REG_READ_PROGRESS, \
+			ABM1_LS_REG_READ_MISSED_FRAME_CLEAR, mask_sh), \
+	ABM_SF(ABM0_DC_ABM1_HGLS_REG_READ_PROGRESS, \
+			ABM1_BL_REG_READ_MISSED_FRAME_CLEAR, mask_sh)
+
 #define ABM_REG_FIELD_LIST(type) \
 	type ABM1_HG_NUM_OF_BINS_SEL; \
 	type ABM1_HG_VMAX_SEL; \
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.h b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.h
index 9eec3524335f..e0c390fcc12c 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.h
@@ -164,6 +164,10 @@
 	CS_SF(PHYPLLA_PIXCLK_RESYNC_CNTL, PHYPLLA_DCCG_DEEP_COLOR_CNTL, mask_sh),\
 	CS_SF(OTG0_PIXEL_RATE_CNTL, DP_DTO0_ENABLE, mask_sh)
 
+#define CS_COMMON_MASK_SH_LIST_DCN3_2(mask_sh)\
+	CS_COMMON_MASK_SH_LIST_DCN2_0(mask_sh),\
+	CS_SF(OTG0_PIXEL_RATE_CNTL, PIPE0_DTO_SRC_SEL, mask_sh)
+
 #define CS_COMMON_REG_LIST_DCN1_0(index, pllid) \
 		SRI(PIXCLK_RESYNC_CNTL, PHYPLL, pllid),\
 		SRII(PHASE, DP_DTO, 0),\
@@ -197,12 +201,23 @@
 	type DP_DTO0_MODULO; \
 	type DP_DTO0_ENABLE;
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+#define CS_REG_FIELD_LIST_DCN32(type) \
+	type PIPE0_DTO_SRC_SEL;
+#endif
+
 struct dce110_clk_src_shift {
 	CS_REG_FIELD_LIST(uint8_t)
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	CS_REG_FIELD_LIST_DCN32(uint8_t)
+#endif
 };
 
 struct dce110_clk_src_mask{
 	CS_REG_FIELD_LIST(uint32_t)
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	CS_REG_FIELD_LIST_DCN32(uint32_t)
+#endif
 };
 
 struct dce110_clk_src_regs {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h
index 39485bdeb90e..e48fd044f572 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.h
@@ -158,8 +158,39 @@ struct dcn_hubbub_registers {
 	uint32_t DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_C;
 	uint32_t DCHUBBUB_ARB_ALLOW_SR_ENTER_WATERMARK_Z8_D;
 	uint32_t DCHUBBUB_ARB_ALLOW_SR_EXIT_WATERMARK_Z8_D;
+	uint32_t DCHUBBUB_ARB_USR_RETRAINING_CNTL;
+	uint32_t DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A;
+	uint32_t DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B;
+	uint32_t DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C;
+	uint32_t DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D;
+	uint32_t DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A;
+	uint32_t DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B;
+	uint32_t DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C;
+	uint32_t DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D;
+	uint32_t DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A;
+	uint32_t DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B;
+	uint32_t DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C;
+	uint32_t DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D;
 };
 
+#define HUBBUB_REG_FIELD_LIST_DCN32(type) \
+		type DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_VALUE;\
+		type DCHUBBUB_ARB_ALLOW_USR_RETRAINING_FORCE_ENABLE;\
+		type DCHUBBUB_ARB_DO_NOT_FORCE_ALLOW_USR_RETRAINING_DURING_PSTATE_CHANGE_REQUEST;\
+		type DCHUBBUB_ARB_DO_NOT_FORCE_ALLOW_USR_RETRAINING_DURING_PRE_CSTATE;\
+		type DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_A;\
+		type DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_B;\
+		type DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_C;\
+		type DCHUBBUB_ARB_USR_RETRAINING_WATERMARK_D;\
+		type DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_A;\
+		type DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_B;\
+		type DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_C;\
+		type DCHUBBUB_ARB_UCLK_PSTATE_CHANGE_WATERMARK_D;\
+		type DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_A;\
+		type DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_B;\
+		type DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_C;\
+		type DCHUBBUB_ARB_FCLK_PSTATE_CHANGE_WATERMARK_D
+
 /* set field name */
 #define HUBBUB_SF(reg_name, field_name, post_fix)\
 	.field_name = reg_name ## __ ## field_name ## post_fix
@@ -337,6 +368,7 @@ struct dcn_hubbub_shift {
 	HUBBUB_STUTTER_REG_FIELD_LIST(uint8_t);
 	HUBBUB_HVM_REG_FIELD_LIST(uint8_t);
 	HUBBUB_RET_REG_FIELD_LIST(uint8_t);
+	HUBBUB_REG_FIELD_LIST_DCN32(uint8_t);
 };
 
 struct dcn_hubbub_mask {
@@ -344,6 +376,7 @@ struct dcn_hubbub_mask {
 	HUBBUB_STUTTER_REG_FIELD_LIST(uint32_t);
 	HUBBUB_HVM_REG_FIELD_LIST(uint32_t);
 	HUBBUB_RET_REG_FIELD_LIST(uint32_t);
+	HUBBUB_REG_FIELD_LIST_DCN32(uint32_t);
 };
 
 struct dc;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index e3a62873c0e7..1cb206dc8352 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1375,11 +1375,6 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
 		pipe_ctx->stream_res.tg = NULL;
 		pipe_ctx->plane_res.hubp = NULL;
 
-		if (tg->funcs->is_tg_enabled(tg)) {
-			if (tg->funcs->init_odm)
-				tg->funcs->init_odm(tg);
-		}
-
 		tg->funcs->tg_init(tg);
 	}
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
index c50c29984d51..df155cc2bfea 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
@@ -514,6 +514,7 @@ struct dcn_optc_registers {
 	type DIG_UPDATE_POSITION_X;\
 	type DIG_UPDATE_POSITION_Y;\
 	type OTG_H_TIMING_DIV_MODE;\
+	type OTG_H_TIMING_DIV_MODE_MANUAL;\
 	type OTG_DRR_TIMING_DBUF_UPDATE_MODE;\
 	type OTG_CRC_DSC_MODE;\
 	type OTG_CRC_DATA_STREAM_COMBINE_MODE;\
@@ -553,6 +554,7 @@ struct optc {
 	int vupdate_offset;
 	int vupdate_width;
 	int vready_offset;
+	struct dc_crtc_timing orginal_patched_timing;
 	enum signal_type signal;
 };
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
index 293595a33982..fa6ff540e8f2 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
@@ -73,6 +73,7 @@
 	SRI(HDMI_ACR_48_1, DIG, id),\
 	SRI(DP_DB_CNTL, DP, id), \
 	SRI(DP_MSA_MISC, DP, id), \
+	SRI(DP_MSA_VBID_MISC, DP, id), \
 	SRI(DP_MSA_COLORIMETRY, DP, id), \
 	SRI(DP_MSA_TIMING_PARAM1, DP, id), \
 	SRI(DP_MSA_TIMING_PARAM2, DP, id), \
@@ -186,6 +187,7 @@ struct dcn10_stream_enc_registers {
 	uint32_t HDMI_GENERIC_PACKET_CONTROL9;
 	uint32_t HDMI_GENERIC_PACKET_CONTROL10;
 	uint32_t DIG_CLOCK_PATTERN;
+	uint32_t DIG_FIFO_CTRL0;
 };
 
 
@@ -338,7 +340,8 @@ struct dcn10_stream_enc_registers {
 	SE_SF(DIG0_DIG_CLOCK_PATTERN, DIG_CLOCK_PATTERN, mask_sh)
 
 #define SE_COMMON_MASK_SH_LIST_SOC(mask_sh)\
-	SE_COMMON_MASK_SH_LIST_SOC_BASE(mask_sh)
+	SE_COMMON_MASK_SH_LIST_SOC_BASE(mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, mask_sh)
 
 #define SE_COMMON_MASK_SH_LIST_DCN10(mask_sh)\
 	SE_COMMON_MASK_SH_LIST_SOC(mask_sh),\
@@ -567,16 +570,39 @@ struct dcn10_stream_enc_registers {
 	type DP_SEC_GSP11_ENABLE;\
 	type DP_SEC_GSP11_LINE_NUM
 
+#define SE_REG_FIELD_LIST_DCN3_2(type) \
+	type DIG_SYMCLK_FE_ON;\
+	type DIG_FIFO_READ_START_LEVEL;\
+	type DIG_FIFO_ENABLE;\
+	type DIG_FIFO_RESET;\
+	type DIG_FIFO_RESET_DONE
+
+/* TODO: fix this */
+#if defined(CONFIG_DRM_AMD_DC_HDCP)
+#define DIG0_HDMI_VBI_PACKET_CONTROL__HDMI_ACP_SEND__SHIFT 0xc
+#define DIG0_HDMI_VBI_PACKET_CONTROL__HDMI_ACP_SEND_MASK 0x00001000L
+#endif
+
 struct dcn10_stream_encoder_shift {
 	SE_REG_FIELD_LIST_DCN1_0(uint8_t);
+#if defined(CONFIG_DRM_AMD_DC_HDCP)
+	uint8_t HDMI_ACP_SEND;
+#endif
 	SE_REG_FIELD_LIST_DCN2_0(uint8_t);
 	SE_REG_FIELD_LIST_DCN3_0(uint8_t);
+	SE_REG_FIELD_LIST_DCN3_2(uint8_t);
 };
 
 struct dcn10_stream_encoder_mask {
 	SE_REG_FIELD_LIST_DCN1_0(uint32_t);
+#if defined(CONFIG_DRM_AMD_DC_HDCP)
+	uint32_t HDMI_ACP_SEND;
+#endif
 	SE_REG_FIELD_LIST_DCN2_0(uint32_t);
 	SE_REG_FIELD_LIST_DCN3_0(uint32_t);
+	SE_REG_FIELD_LIST_DCN3_2(uint32_t);
 };
 
 struct dcn10_stream_encoder {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h
index b3c9a9724efd..2b9d3e63191b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.h
@@ -133,6 +133,8 @@
 	type OTG_DROP_PIXEL[MAX_PIPES];
 
 #define DCCG3_REG_FIELD_LIST(type) \
+	type HDMICHARCLK0_EN;\
+	type HDMICHARCLK0_SRC_SEL;\
 	type PHYASYMCLK_FORCE_EN;\
 	type PHYASYMCLK_FORCE_SRC_SEL;\
 	type PHYBSYMCLK_FORCE_EN;\
@@ -203,16 +205,45 @@
 	type PHYDSYMCLK_GATE_DISABLE; \
 	type PHYESYMCLK_GATE_DISABLE;
 
+#define DCCG32_REG_FIELD_LIST(type) \
+	type DPSTREAMCLK0_EN;\
+	type DPSTREAMCLK1_EN;\
+	type DPSTREAMCLK2_EN;\
+	type DPSTREAMCLK3_EN;\
+	type DPSTREAMCLK0_SRC_SEL;\
+	type DPSTREAMCLK1_SRC_SEL;\
+	type DPSTREAMCLK2_SRC_SEL;\
+	type DPSTREAMCLK3_SRC_SEL;\
+	type HDMISTREAMCLK0_EN;\
+	type OTG0_PIXEL_RATE_DIVK1;\
+	type OTG0_PIXEL_RATE_DIVK2;\
+	type OTG1_PIXEL_RATE_DIVK1;\
+	type OTG1_PIXEL_RATE_DIVK2;\
+	type OTG2_PIXEL_RATE_DIVK1;\
+	type OTG2_PIXEL_RATE_DIVK2;\
+	type OTG3_PIXEL_RATE_DIVK1;\
+	type OTG3_PIXEL_RATE_DIVK2;\
+	type DTBCLK_P0_SRC_SEL;\
+	type DTBCLK_P0_EN;\
+	type DTBCLK_P1_SRC_SEL;\
+	type DTBCLK_P1_EN;\
+	type DTBCLK_P2_SRC_SEL;\
+	type DTBCLK_P2_EN;\
+	type DTBCLK_P3_SRC_SEL;\
+	type DTBCLK_P3_EN;
+
 struct dccg_shift {
 	DCCG_REG_FIELD_LIST(uint8_t)
 	DCCG3_REG_FIELD_LIST(uint8_t)
 	DCCG31_REG_FIELD_LIST(uint8_t)
+	DCCG32_REG_FIELD_LIST(uint8_t)
 };
 
 struct dccg_mask {
 	DCCG_REG_FIELD_LIST(uint32_t)
 	DCCG3_REG_FIELD_LIST(uint32_t)
 	DCCG31_REG_FIELD_LIST(uint32_t)
+	DCCG32_REG_FIELD_LIST(uint32_t)
 };
 
 struct dccg_registers {
@@ -247,7 +278,8 @@ struct dccg_registers {
 	uint32_t DCCG_GATE_DISABLE_CNTL3;
 	uint32_t HDMISTREAMCLK0_DTO_PARAM;
 	uint32_t DCCG_GATE_DISABLE_CNTL4;
-
+	uint32_t OTG_PIXEL_RATE_DIV;
+	uint32_t DTBCLK_P_CNTL;
 };
 
 struct dcn_dccg {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h
index 9204c3ef323b..efa2adf4f83d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hubp.h
@@ -161,6 +161,12 @@
 	DCN21_HUBP_REG_COMMON_VARIABLE_LIST;\
 	uint32_t DCN_DMDATA_VM_CNTL
 
+#define DCN32_HUBP_REG_COMMON_VARIABLE_LIST \
+	DCN30_HUBP_REG_COMMON_VARIABLE_LIST;\
+	uint32_t DCHUBP_MALL_CONFIG;\
+	uint32_t DCHUBP_VMPG_CONFIG;\
+	uint32_t UCLK_PSTATE_FORCE
+
 #define DCN2_HUBP_REG_FIELD_VARIABLE_LIST(type) \
 	DCN_HUBP_REG_FIELD_BASE_LIST(type); \
 	type DMDATA_ADDRESS_HIGH;\
@@ -222,16 +228,29 @@
 	type CURSOR_REQ_MODE;\
 	type HUBP_SOFT_RESET
 
+#define DCN32_HUBP_REG_FIELD_VARIABLE_LIST(type) \
+	DCN31_HUBP_REG_FIELD_VARIABLE_LIST(type);\
+	type USE_MALL_SEL; \
+	type USE_MALL_FOR_CURSOR;\
+	type VMPG_SIZE; \
+	type PTE_BUFFER_MODE; \
+	type BIGK_FRAGMENT_SIZE; \
+	type FORCE_ONE_ROW_FOR_FRAME; \
+	type DATA_UCLK_PSTATE_FORCE_EN; \
+	type DATA_UCLK_PSTATE_FORCE_VALUE; \
+	type CURSOR_UCLK_PSTATE_FORCE_EN; \
+	type CURSOR_UCLK_PSTATE_FORCE_VALUE
+
 struct dcn_hubp2_registers {
-	DCN30_HUBP_REG_COMMON_VARIABLE_LIST;
+	DCN32_HUBP_REG_COMMON_VARIABLE_LIST;
 };
 
 struct dcn_hubp2_shift {
-	DCN31_HUBP_REG_FIELD_VARIABLE_LIST(uint8_t);
+	DCN32_HUBP_REG_FIELD_VARIABLE_LIST(uint8_t);
 };
 
 struct dcn_hubp2_mask {
-	DCN31_HUBP_REG_FIELD_VARIABLE_LIST(uint32_t);
+	DCN32_HUBP_REG_FIELD_VARIABLE_LIST(uint32_t);
 };
 
 struct dcn20_hubp {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index ec6aa8d8b251..d00a27893ab0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -768,6 +768,10 @@ enum dc_status dcn20_enable_stream_timing(
 	/* TODO enable stream if timing changed */
 	/* TODO unblank stream if DP */
 
+	if (pipe_ctx->stream && pipe_ctx->stream->mall_stream_config.type == SUBVP_PHANTOM) {
+		if (pipe_ctx->stream_res.tg && pipe_ctx->stream_res.tg->funcs->phantom_crtc_post_enable)
+			pipe_ctx->stream_res.tg->funcs->phantom_crtc_post_enable(pipe_ctx->stream_res.tg);
+	}
 	return DC_OK;
 }
 
@@ -1247,6 +1251,16 @@ void dcn20_pipe_control_lock(
 					lock,
 					&hw_locks,
 					&inst_flags);
+	} else if (pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
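+		/* SubVP main pipes lock through the DMCUB inbox0 HW lock manager */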
+		union dmub_inbox0_cmd_lock_hw hw_lock_cmd = { 0 };
+		hw_lock_cmd.bits.command_code = DMUB_INBOX0_CMD__HW_LOCK;
+		hw_lock_cmd.bits.hw_lock_client = HW_LOCK_CLIENT_DRIVER;
+		hw_lock_cmd.bits.lock_pipe = 1;
+		hw_lock_cmd.bits.otg_inst = pipe->stream_res.tg->inst;
+		hw_lock_cmd.bits.lock = lock;
+		if (!lock)
+			hw_lock_cmd.bits.should_release = 1;
+		dmub_hw_lock_mgr_inbox0_cmd(dc->ctx->dmub_srv, hw_lock_cmd);
 	} else if (pipe->plane_state != NULL && pipe->plane_state->triplebuffer_flips) {
 		if (lock)
 			pipe->stream_res.tg->funcs->triplebuffer_lock(pipe->stream_res.tg);
@@ -1564,10 +1578,12 @@ static void dcn20_update_dchubp_dpp(
 		plane_state->update_flags.bits.addr_update)
 		hws->funcs.update_plane_addr(dc, pipe_ctx);
 
-
-
 	if (pipe_ctx->update_flags.bits.enable)
 		hubp->funcs->set_blank(hubp, false);
+	/* If the stream paired with this plane is phantom, the plane is also phantom */
+	if (pipe_ctx->stream && pipe_ctx->stream->mall_stream_config.type == SUBVP_PHANTOM
+			&& hubp->funcs->phantom_hubp_post_enable)
+		hubp->funcs->phantom_hubp_post_enable(hubp);
 }
 
 
@@ -1578,6 +1594,7 @@ static void dcn20_program_pipe(
 {
 	struct dce_hwseq *hws = dc->hwseq;
 	/* Only need to unblank on top pipe */
+
 	if ((pipe_ctx->update_flags.bits.enable || pipe_ctx->stream->update_flags.bits.abm_level)
 			&& !pipe_ctx->top_pipe && !pipe_ctx->prev_odm_pipe)
 		hws->funcs.blank_pixel_data(dc, pipe_ctx, !pipe_ctx->plane_state->visible);
@@ -1585,7 +1602,6 @@ static void dcn20_program_pipe(
 	/* Only update TG on top pipe */
 	if (pipe_ctx->update_flags.bits.global_sync && !pipe_ctx->top_pipe
 			&& !pipe_ctx->prev_odm_pipe) {
-
 		pipe_ctx->stream_res.tg->funcs->program_global_sync(
 				pipe_ctx->stream_res.tg,
 				pipe_ctx->pipe_dlg_param.vready_offset,
@@ -1593,7 +1609,12 @@ static void dcn20_program_pipe(
 				pipe_ctx->pipe_dlg_param.vupdate_offset,
 				pipe_ctx->pipe_dlg_param.vupdate_width);
 
-		pipe_ctx->stream_res.tg->funcs->wait_for_state(pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
+		if (pipe_ctx->stream->mall_stream_config.type != SUBVP_PHANTOM) {
+			pipe_ctx->stream_res.tg->funcs->wait_for_state(
+				pipe_ctx->stream_res.tg, CRTC_STATE_VBLANK);
+			pipe_ctx->stream_res.tg->funcs->wait_for_state(
+				pipe_ctx->stream_res.tg, CRTC_STATE_VACTIVE);
+		}
 
 		pipe_ctx->stream_res.tg->funcs->set_vtg_params(
 				pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing, true);
@@ -1749,6 +1770,8 @@ void dcn20_program_front_end_for_ctx(
 			pipe->plane_res.hubp->funcs->hubp_wait_pipe_read_start(pipe->plane_res.hubp);
 		}
 	}
+	if (hws->funcs.program_mall_pipe_config)
+		hws->funcs.program_mall_pipe_config(dc, context);
 }
 
 void dcn20_post_unlock_program_front_end(
@@ -2409,6 +2432,7 @@ void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
 			NULL,
 			hubp->inst,
 			mpcc_id);
+
 	dc->hwss.update_visual_confirm_color(dc, pipe_ctx, &blnd_cfg.black_color, mpcc_id);
 
 	ASSERT(new_mpcc != NULL);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
index a04ca4a98392..b683ad817106 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.c
@@ -195,7 +195,7 @@ static void enc3_update_hdmi_info_packet(
 	}
 }
 
-static void enc3_stream_encoder_update_hdmi_info_packets(
+void enc3_stream_encoder_update_hdmi_info_packets(
 	struct stream_encoder *enc,
 	const struct encoder_info_frame *info_frame)
 {
@@ -214,7 +214,7 @@ static void enc3_stream_encoder_update_hdmi_info_packets(
 	enc3_update_hdmi_info_packet(enc1, 4, &info_frame->hdrsmd);
 }
 
-static void enc3_stream_encoder_stop_hdmi_info_packets(
+void enc3_stream_encoder_stop_hdmi_info_packets(
 	struct stream_encoder *enc)
 {
 	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
@@ -318,7 +318,7 @@ static void enc3_dp_set_dsc_config(struct stream_encoder *enc,
 }
 
 
-static void enc3_dp_set_dsc_pps_info_packet(struct stream_encoder *enc,
+void enc3_dp_set_dsc_pps_info_packet(struct stream_encoder *enc,
 					bool enable,
 					uint8_t *dsc_packed_pps,
 					bool immediate_update)
@@ -404,7 +404,7 @@ static void enc3_read_state(struct stream_encoder *enc, struct enc_state *s)
 	}
 }
 
-static void enc3_stream_encoder_update_dp_info_packets(
+void enc3_stream_encoder_update_dp_info_packets(
 	struct stream_encoder *enc,
 	const struct encoder_info_frame *info_frame)
 {
@@ -652,7 +652,7 @@ static void enc3_stream_encoder_hdmi_set_stream_attribute(
 	REG_UPDATE(HDMI_GC, HDMI_GC_AVMUTE, 0);
 }
 
-static void enc3_audio_mute_control(
+void enc3_audio_mute_control(
 	struct stream_encoder *enc,
 	bool mute)
 {
@@ -660,7 +660,7 @@ static void enc3_audio_mute_control(
 	enc->afmt->funcs->audio_mute_control(enc->afmt, mute);
 }
 
-static void enc3_se_dp_audio_setup(
+void enc3_se_dp_audio_setup(
 	struct stream_encoder *enc,
 	unsigned int az_inst,
 	struct audio_info *info)
@@ -691,7 +691,7 @@ static void enc3_se_setup_dp_audio(
 	enc->afmt->funcs->setup_dp_audio(enc->afmt);
 }
 
-static void enc3_se_dp_audio_enable(
+void enc3_se_dp_audio_enable(
 	struct stream_encoder *enc)
 {
 	enc1_se_enable_audio_clock(enc, true);
@@ -757,7 +757,7 @@ static void enc3_se_setup_hdmi_audio(
 	 */
 }
 
-static void enc3_se_hdmi_audio_setup(
+void enc3_se_hdmi_audio_setup(
 	struct stream_encoder *enc,
 	unsigned int az_inst,
 	struct audio_info *info,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.h
index 42140e73c3b2..d2207b35f15f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dio_stream_encoder.h
@@ -287,4 +287,39 @@ void dcn30_dio_stream_encoder_construct(
 	const struct dcn10_stream_encoder_shift *se_shift,
 	const struct dcn10_stream_encoder_mask *se_mask);
 
+void enc3_stream_encoder_update_hdmi_info_packets(
+	struct stream_encoder *enc,
+	const struct encoder_info_frame *info_frame);
+
+void enc3_stream_encoder_stop_hdmi_info_packets(
+	struct stream_encoder *enc);
+
+void enc3_stream_encoder_update_dp_info_packets(
+	struct stream_encoder *enc,
+	const struct encoder_info_frame *info_frame);
+
+void enc3_audio_mute_control(
+	struct stream_encoder *enc,
+	bool mute);
+
+void enc3_se_dp_audio_setup(
+	struct stream_encoder *enc,
+	unsigned int az_inst,
+	struct audio_info *info);
+
+void enc3_se_dp_audio_enable(
+	struct stream_encoder *enc);
+
+void enc3_se_hdmi_audio_setup(
+	struct stream_encoder *enc,
+	unsigned int az_inst,
+	struct audio_info *info,
+	struct audio_crtc_info *audio_crtc_info);
+
+void enc3_dp_set_dsc_pps_info_packet(
+	struct stream_encoder *enc,
+	bool enable,
+	uint8_t *dsc_packed_pps,
+	bool immediate_update);
+
 #endif /* __DC_DIO_STREAM_ENCODER_DCN30_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
index ab3918c0a15b..9cca59bf2ae0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
@@ -41,9 +41,9 @@
 	dpp->tf_shift->field_name, dpp->tf_mask->field_name
 
 
-static void dpp30_read_state(struct dpp *dpp_base, struct dcn_dpp_state *s)
+void dpp30_read_state(struct dpp *dpp_base, struct dcn_dpp_state *s)
 {
-	struct dcn20_dpp *dpp = TO_DCN20_DPP(dpp_base);
+	struct dcn3_dpp *dpp = TO_DCN30_DPP(dpp_base);
 
 	REG_GET(DPP_CONTROL,
 			DPP_CLOCK_ENABLE, &s->is_enabled);
@@ -167,7 +167,7 @@ void dpp3_set_pre_degam(struct dpp *dpp_base, enum dc_transfer_func_predefined t
 			PRE_DEGAM_SELECT, degamma_lut_selection);
 }
 
-static void dpp3_cnv_setup (
+void dpp3_cnv_setup (
 		struct dpp *dpp_base,
 		enum surface_pixel_format format,
 		enum expansion_mode mode,
@@ -372,7 +372,7 @@ void dpp3_set_cursor_attributes(
 }
 
 
-static bool dpp3_get_optimal_number_of_taps(
+bool dpp3_get_optimal_number_of_taps(
 		struct dpp *dpp,
 		struct scaler_data *scl_data,
 		const struct scaling_taps *in_taps)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h
index ac644ae6b9f2..6263408d71fc 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.h
@@ -588,6 +588,22 @@ void dpp3_program_CM_dealpha(
 		struct dpp *dpp_base,
 		uint32_t enable, uint32_t additive_blending);
 
+void dpp30_read_state(struct dpp *dpp_base,
+		struct dcn_dpp_state *s);
+
+bool dpp3_get_optimal_number_of_taps(
+		struct dpp *dpp,
+		struct scaler_data *scl_data,
+		const struct scaling_taps *in_taps);
+
+void dpp3_cnv_setup (
+		struct dpp *dpp_base,
+		enum surface_pixel_format format,
+		enum expansion_mode mode,
+		struct dc_csc_transform input_csc_color_matrix,
+		enum dc_color_space input_color_space,
+		struct cnv_alpha_2bit_lut *alpha_2bit_lut);
+
 void dpp3_program_CM_bias(
 		struct dpp *dpp_base,
 		struct CM_bias_params *bias_params);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
index 0ce0d6165f43..1981a71b961b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
@@ -44,7 +44,7 @@
 #define NUM_ELEMENTS(a) (sizeof(a) / sizeof((a)[0]))
 
 
-static bool mpc3_is_dwb_idle(
+bool mpc3_is_dwb_idle(
 	struct mpc *mpc,
 	int dwb_id)
 {
@@ -59,7 +59,7 @@ static bool mpc3_is_dwb_idle(
 		return false;
 }
 
-static void mpc3_set_dwb_mux(
+void mpc3_set_dwb_mux(
 	struct mpc *mpc,
 	int dwb_id,
 	int mpcc_id)
@@ -70,7 +70,7 @@ static void mpc3_set_dwb_mux(
 		MPC_DWB0_MUX, mpcc_id);
 }
 
-static void mpc3_disable_dwb_mux(
+void mpc3_disable_dwb_mux(
 	struct mpc *mpc,
 	int dwb_id)
 {
@@ -80,7 +80,7 @@ static void mpc3_disable_dwb_mux(
 		MPC_DWB0_MUX, 0xf);
 }
 
-static void mpc3_set_out_rate_control(
+void mpc3_set_out_rate_control(
 	struct mpc *mpc,
 	int opp_id,
 	bool enable,
@@ -99,7 +99,7 @@ static void mpc3_set_out_rate_control(
 			MPC_OUT_FLOW_CONTROL_COUNT, flow_control->flow_ctrl_cnt1);
 }
 
-static enum dc_lut_mode mpc3_get_ogam_current(struct mpc *mpc, int mpcc_id)
+enum dc_lut_mode mpc3_get_ogam_current(struct mpc *mpc, int mpcc_id)
 {
 	/*Contrary to DCN2 and DCN1 wherein a single status register field holds this info;
 	 *in DCN3/3AG, we need to read two separate fields to retrieve the same info
@@ -137,7 +137,7 @@ static enum dc_lut_mode mpc3_get_ogam_current(struct mpc *mpc, int mpcc_id)
 		return mode;
 }
 
-static void mpc3_power_on_ogam_lut(
+void mpc3_power_on_ogam_lut(
 		struct mpc *mpc, int mpcc_id,
 		bool power_on)
 {
@@ -1035,7 +1035,7 @@ static void mpc3_set3dlut_ram10(
 }
 
 
-static void mpc3_init_mpcc(struct mpcc *mpcc, int mpcc_inst)
+void mpc3_init_mpcc(struct mpcc *mpcc, int mpcc_inst)
 {
 	mpcc->mpcc_id = mpcc_inst;
 	mpcc->dpp_id = 0xf;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
index 34b9cedbd012..a4d8f77d43bc 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h
@@ -282,6 +282,73 @@
 	uint32_t MPCC_OGAM_RAMB_START_BASE_CNTL_R[MAX_MPCC]; \
 	uint32_t MPC_OUT_CSC_COEF_FORMAT
 
+#define MPC_REG_VARIABLE_LIST_DCN32 \
+	uint32_t MPCC_MCM_SHAPER_CONTROL[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_OFFSET_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_OFFSET_G[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_OFFSET_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_SCALE_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_SCALE_G_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_LUT_INDEX[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_LUT_DATA[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_START_CNTL_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_START_CNTL_G[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_START_CNTL_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_END_CNTL_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_END_CNTL_G[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_END_CNTL_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_0_1[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_2_3[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_4_5[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_6_7[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_8_9[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_10_11[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_12_13[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_14_15[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_16_17[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_18_19[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_20_21[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_22_23[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_24_25[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_26_27[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_28_29[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_30_31[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMA_REGION_32_33[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_START_CNTL_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_START_CNTL_G[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_START_CNTL_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_END_CNTL_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_END_CNTL_G[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_END_CNTL_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_0_1[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_2_3[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_4_5[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_6_7[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_8_9[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_10_11[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_12_13[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_14_15[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_16_17[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_18_19[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_20_21[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_22_23[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_24_25[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_26_27[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_28_29[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_30_31[MAX_MPCC]; \
+	uint32_t MPCC_MCM_SHAPER_RAMB_REGION_32_33[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_MODE[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_INDEX[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_DATA[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_DATA_30BIT[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_READ_WRITE_CONTROL[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_OUT_NORM_FACTOR[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_OUT_OFFSET_R[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_OUT_OFFSET_G[MAX_MPCC]; \
+	uint32_t MPCC_MCM_3DLUT_OUT_OFFSET_B[MAX_MPCC]; \
+	uint32_t MPCC_MCM_MEM_PWR_CTRL[MAX_MPCC]
+
 #define MPC_COMMON_MASK_SH_LIST_DCN3_0(mask_sh) \
 	MPC_COMMON_MASK_SH_LIST_DCN1_0(mask_sh),\
 	SF(MPCC0_MPCC_CONTROL, MPCC_BG_BPC, mask_sh),\
@@ -580,6 +647,53 @@
 	type MPC_RMU_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS;\
 	type MPC_RMU_SHAPER_MODE_CURRENT
 
+#define MPC_REG_FIELD_LIST_DCN32(type) \
+	type MPCC_MCM_SHAPER_MEM_PWR_FORCE;\
+	type MPCC_MCM_SHAPER_MEM_PWR_DIS;\
+	type MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE;\
+	type MPCC_MCM_3DLUT_MEM_PWR_FORCE;\
+	type MPCC_MCM_3DLUT_MEM_PWR_DIS;\
+	type MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE;\
+	type MPCC_MCM_1DLUT_MEM_PWR_FORCE;\
+	type MPCC_MCM_1DLUT_MEM_PWR_DIS;\
+	type MPCC_MCM_1DLUT_MEM_LOW_PWR_MODE;\
+	type MPCC_MCM_SHAPER_MEM_PWR_STATE;\
+	type MPCC_MCM_3DLUT_MEM_PWR_STATE;\
+	type MPCC_MCM_1DLUT_MEM_PWR_STATE;\
+	type MPCC_MCM_3DLUT_MODE; \
+	type MPCC_MCM_3DLUT_SIZE; \
+	type MPCC_MCM_3DLUT_MODE_CURRENT; \
+	type MPCC_MCM_3DLUT_WRITE_EN_MASK;\
+	type MPCC_MCM_3DLUT_RAM_SEL;\
+	type MPCC_MCM_3DLUT_30BIT_EN;\
+	type MPCC_MCM_3DLUT_CONFIG_STATUS;\
+	type MPCC_MCM_3DLUT_READ_SEL;\
+	type MPCC_MCM_3DLUT_INDEX;\
+	type MPCC_MCM_3DLUT_DATA0;\
+	type MPCC_MCM_3DLUT_DATA1;\
+	type MPCC_MCM_3DLUT_DATA_30BIT;\
+	type MPCC_MCM_SHAPER_LUT_MODE;\
+	type MPCC_MCM_SHAPER_MODE_CURRENT;\
+	type MPCC_MCM_SHAPER_OFFSET_R;\
+	type MPCC_MCM_SHAPER_OFFSET_G;\
+	type MPCC_MCM_SHAPER_OFFSET_B;\
+	type MPCC_MCM_SHAPER_SCALE_R;\
+	type MPCC_MCM_SHAPER_SCALE_G;\
+	type MPCC_MCM_SHAPER_SCALE_B;\
+	type MPCC_MCM_SHAPER_LUT_INDEX;\
+	type MPCC_MCM_SHAPER_LUT_DATA;\
+	type MPCC_MCM_SHAPER_LUT_WRITE_EN_MASK;\
+	type MPCC_MCM_SHAPER_LUT_WRITE_SEL;\
+	type MPCC_MCM_SHAPER_CONFIG_STATUS;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_B;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION_START_SEGMENT_B;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_B;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION_END_BASE_B;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION0_LUT_OFFSET;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION0_NUM_SEGMENTS;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION1_LUT_OFFSET;\
+	type MPCC_MCM_SHAPER_RAMA_EXP_REGION1_NUM_SEGMENTS
+
 #define MPC_COMMON_MASK_SH_LIST_DCN303(mask_sh) \
 	MPC_COMMON_MASK_SH_LIST_DCN1_0(mask_sh),\
 	SF(MPCC0_MPCC_CONTROL, MPCC_BG_BPC, mask_sh),\
@@ -758,14 +872,17 @@
 
 struct dcn30_mpc_registers {
 	MPC_REG_VARIABLE_LIST_DCN3_0;
+	MPC_REG_VARIABLE_LIST_DCN32;
 };
 
 struct dcn30_mpc_shift {
 	MPC_REG_FIELD_LIST_DCN3_0(uint8_t);
+	MPC_REG_FIELD_LIST_DCN32(uint8_t);
 };
 
 struct dcn30_mpc_mask {
 	MPC_REG_FIELD_LIST_DCN3_0(uint32_t);
+	MPC_REG_FIELD_LIST_DCN32(uint32_t);
 };
 
 struct dcn30_mpc {
@@ -841,4 +958,34 @@ void mpc3_set_rmu_mux(
 	int rmu_idx,
 	int value);
 
+void mpc3_set_dwb_mux(
+	struct mpc *mpc,
+	int dwb_id,
+	int mpcc_id);
+
+void mpc3_disable_dwb_mux(
+	struct mpc *mpc,
+	int dwb_id);
+
+bool mpc3_is_dwb_idle(
+	struct mpc *mpc,
+	int dwb_id);
+
+void mpc3_set_out_rate_control(
+	struct mpc *mpc,
+	int opp_id,
+	bool enable,
+	bool rate_2x_mode,
+	struct mpc_dwb_flow_control *flow_control);
+
+void mpc3_power_on_ogam_lut(
+	struct mpc *mpc, int mpcc_id,
+	bool power_on);
+
+void mpc3_init_mpcc(struct mpcc *mpcc, int mpcc_inst);
+
+enum dc_lut_mode mpc3_get_ogam_current(
+	struct mpc *mpc,
+	int mpcc_id);
+
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
index 97f11ef6e9f0..33bd12f5dc17 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_optc.h
@@ -28,6 +28,7 @@
 
 #include "dcn20/dcn20_optc.h"
 
+#define V_TOTAL_REGS_DCN30_SRI(inst)
 
 #define OPTC_COMMON_REG_LIST_DCN3_BASE(inst) \
 	SRI(OTG_VSTARTUP_PARAM, OTG, inst),\
@@ -55,6 +56,7 @@
 	SRI(OTG_V_TOTAL_MAX, OTG, inst),\
 	SRI(OTG_V_TOTAL_MIN, OTG, inst),\
 	SRI(OTG_V_TOTAL_CONTROL, OTG, inst),\
+	V_TOTAL_REGS_DCN30_SRI(inst)\
 	SRI(OTG_TRIGA_CNTL, OTG, inst),\
 	SRI(OTG_FORCE_COUNT_NOW_CNTL, OTG, inst),\
 	SRI(OTG_STATIC_SCREEN_CONTROL, OTG, inst),\
@@ -80,6 +82,7 @@
 	SRI(OTG_VERT_SYNC_CONTROL, OTG, inst),\
 	SRI(OTG_GSL_CONTROL, OTG, inst),\
 	SRI(OTG_CRC_CNTL, OTG, inst),\
+	SRI(OTG_CRC_CNTL2, OTG, inst),\
 	SRI(OTG_CRC0_DATA_RG, OTG, inst),\
 	SRI(OTG_CRC0_DATA_B, OTG, inst),\
 	SRI(OTG_CRC0_WINDOWA_X_CONTROL, OTG, inst),\
@@ -108,6 +111,7 @@
 	SRI(OPTC_MEMORY_CONFIG, ODM, inst),\
 	SR(DWB_SOURCE_SELECT)
 
+#define DCN30_VTOTAL_REGS_SF(mask_sh)
 
 #define OPTC_COMMON_MASK_SH_LIST_DCN3_BASE(mask_sh)\
 	SF(OTG0_OTG_VSTARTUP_PARAM, VSTARTUP_START, mask_sh),\
@@ -161,6 +165,7 @@
 	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_SET_V_TOTAL_MIN_MASK, mask_sh),\
 	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_VTOTAL_MID_REPLACING_MIN_EN, mask_sh),\
 	SF(OTG0_OTG_V_TOTAL_CONTROL, OTG_VTOTAL_MID_REPLACING_MAX_EN, mask_sh),\
+	DCN30_VTOTAL_REGS_SF(mask_sh)\
 	SF(OTG0_OTG_FORCE_COUNT_NOW_CNTL, OTG_FORCE_COUNT_NOW_CLEAR, mask_sh),\
 	SF(OTG0_OTG_FORCE_COUNT_NOW_CNTL, OTG_FORCE_COUNT_NOW_MODE, mask_sh),\
 	SF(OTG0_OTG_FORCE_COUNT_NOW_CNTL, OTG_FORCE_COUNT_NOW_OCCURRED, mask_sh),\
@@ -219,6 +224,10 @@
 	SF(OTG0_OTG_CRC_CNTL, OTG_CRC_CONT_EN, mask_sh),\
 	SF(OTG0_OTG_CRC_CNTL, OTG_CRC0_SELECT, mask_sh),\
 	SF(OTG0_OTG_CRC_CNTL, OTG_CRC_EN, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DSC_MODE, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_STREAM_COMBINE_MODE, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_STREAM_SPLIT_MODE, mask_sh),\
+	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_FORMAT, mask_sh),\
 	SF(OTG0_OTG_CRC0_DATA_RG, CRC0_R_CR, mask_sh),\
 	SF(OTG0_OTG_CRC0_DATA_RG, CRC0_G_Y, mask_sh),\
 	SF(OTG0_OTG_CRC0_DATA_B, CRC0_B_CB, mask_sh),\
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
index 287a1066b547..20f11fb8f5a6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
@@ -158,10 +158,9 @@ static void dccg31_disable_dpstreamclk(struct dccg *dccg, int otg_inst)
 	}
 }
 
-void dccg31_set_dpstreamclk(
-		struct dccg *dccg,
-		enum hdmistreamclk_source src,
-		int otg_inst)
+void dccg31_set_dpstreamclk(struct dccg *dccg,
+			    enum streamclk_source src,
+			    int otg_inst)
 {
 	if (src == REFCLK)
 		dccg31_disable_dpstreamclk(dccg, otg_inst);
@@ -513,12 +512,13 @@ void dccg31_set_physymclk(
 /* Controls the generation of pixel valid for OTG in (OTG -> HPO case) */
 static void dccg31_set_dtbclk_dto(
 		struct dccg *dccg,
-		int dtbclk_inst,
-		int req_dtbclk_khz,
+		int otg_inst,
+		int pixclk_khz,
 		int num_odm_segments,
 		const struct dc_crtc_timing *timing)
 {
 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+	int req_dtbclk_khz = pixclk_khz;
 	uint32_t dtbdto_div;
 
 	/* Mode	                DTBDTO Rate       DTBCLK_DTO<x>_DIV Register
@@ -531,13 +531,13 @@ static void dccg31_set_dtbclk_dto(
 	 */
 	if (num_odm_segments == 4) {
 		dtbdto_div = 2;
-		req_dtbclk_khz = req_dtbclk_khz / 4;
+		req_dtbclk_khz = pixclk_khz / 4;
 	} else if ((num_odm_segments == 2) ||
 			(timing->pixel_encoding == PIXEL_ENCODING_YCBCR420) ||
 			(timing->flags.DSC && timing->pixel_encoding == PIXEL_ENCODING_YCBCR422
 					&& !timing->dsc_cfg.ycbcr422_simple)) {
 		dtbdto_div = 4;
-		req_dtbclk_khz = req_dtbclk_khz / 2;
+		req_dtbclk_khz = pixclk_khz / 2;
 	} else
 		dtbdto_div = 8;
 
@@ -549,37 +549,37 @@ static void dccg31_set_dtbclk_dto(
 		phase = div_u64((((unsigned long long)modulo * req_dtbclk_khz) + dccg->ref_dtbclk_khz - 1),
 			dccg->ref_dtbclk_khz);
 
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-				DTBCLK_DTO_DIV[dtbclk_inst], dtbdto_div);
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLK_DTO_DIV[otg_inst], dtbdto_div);
 
-		REG_WRITE(DTBCLK_DTO_MODULO[dtbclk_inst], modulo);
-		REG_WRITE(DTBCLK_DTO_PHASE[dtbclk_inst], phase);
+		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], modulo);
+		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], phase);
 
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-				DTBCLK_DTO_ENABLE[dtbclk_inst], 1);
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLK_DTO_ENABLE[otg_inst], 1);
 
-		REG_WAIT(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-				DTBCLKDTO_ENABLE_STATUS[dtbclk_inst], 1,
+		REG_WAIT(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLKDTO_ENABLE_STATUS[otg_inst], 1,
 				1, 100);
 
 		/* The recommended programming sequence to enable DTBCLK DTO to generate
 		 * valid pixel HPO DPSTREAM ENCODER, specifies that DTO source select should
 		 * be set only after DTO is enabled
 		 */
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-				PIPE_DTO_SRC_SEL[dtbclk_inst], 1);
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+				PIPE_DTO_SRC_SEL[otg_inst], 1);
 
-		dccg->dtbclk_khz[dtbclk_inst] = req_dtbclk_khz;
+		dccg->dtbclk_khz[otg_inst] = req_dtbclk_khz;
 	} else {
-		REG_UPDATE_3(OTG_PIXEL_RATE_CNTL[dtbclk_inst],
-				DTBCLK_DTO_ENABLE[dtbclk_inst], 0,
-				PIPE_DTO_SRC_SEL[dtbclk_inst], 0,
-				DTBCLK_DTO_DIV[dtbclk_inst], dtbdto_div);
+		REG_UPDATE_3(OTG_PIXEL_RATE_CNTL[otg_inst],
+				DTBCLK_DTO_ENABLE[otg_inst], 0,
+				PIPE_DTO_SRC_SEL[otg_inst], 0,
+				DTBCLK_DTO_DIV[otg_inst], dtbdto_div);
 
-		REG_WRITE(DTBCLK_DTO_MODULO[dtbclk_inst], 0);
-		REG_WRITE(DTBCLK_DTO_PHASE[dtbclk_inst], 0);
+		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], 0);
+		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], 0);
 
-		dccg->dtbclk_khz[dtbclk_inst] = 0;
+		dccg->dtbclk_khz[otg_inst] = 0;
 	}
 }
 
@@ -673,6 +673,24 @@ void dccg31_init(struct dccg *dccg)
 	}
 }
 
+void dccg31_otg_add_pixel(struct dccg *dccg,
+		uint32_t otg_inst)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+			OTG_ADD_PIXEL[otg_inst], 1);
+}
+
+void dccg31_otg_drop_pixel(struct dccg *dccg,
+		uint32_t otg_inst)
+{
+	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+	REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
+			OTG_DROP_PIXEL[otg_inst], 1);
+}
+
 static const struct dccg_funcs dccg31_funcs = {
 	.update_dpp_dto = dccg31_update_dpp_dto,
 	.get_dccg_ref_freq = dccg31_get_dccg_ref_freq,
@@ -685,6 +703,9 @@ static const struct dccg_funcs dccg31_funcs = {
 	.set_physymclk = dccg31_set_physymclk,
 	.set_dtbclk_dto = dccg31_set_dtbclk_dto,
 	.set_audio_dtbclk_dto = dccg31_set_audio_dtbclk_dto,
+	.set_fifo_errdet_ovr_en = dccg2_set_fifo_errdet_ovr_en,
+	.otg_add_pixel = dccg31_otg_add_pixel,
+	.otg_drop_pixel = dccg31_otg_drop_pixel,
 	.set_dispclk_change_mode = dccg31_set_dispclk_change_mode,
 	.disable_dsc = dccg31_disable_dscclk,
 	.enable_dsc = dccg31_enable_dscclk,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
index 269cabbea72a..f31cced5f441 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
@@ -28,10 +28,6 @@
 
 #include "dcn30/dcn30_dccg.h"
 
-#define DCCG_SFII(block, reg_name, field_prefix, field_name, inst, post_fix)\
-	.field_prefix ## _ ## field_name[inst] = block ## inst ## _ ## reg_name ## __ ## field_prefix ## inst ## _ ## field_name ## post_fix
-
-
 #define DCCG_REG_LIST_DCN31() \
 	SR(DPPCLK_DTO_CTRL),\
 	DCCG_SRII(DTO_PARAM, DPPCLK, 0),\
@@ -124,6 +120,10 @@
 	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, DIV, 1, mask_sh),\
 	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, DIV, 2, mask_sh),\
 	DCCG_SFII(OTG, PIXEL_RATE_CNTL, DTBCLK_DTO, DIV, 3, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 0, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 1, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 2, mask_sh),\
+	DCCG_SFII(OTG, PIXEL_RATE_CNTL, OTG, ADD_PIXEL, 3, mask_sh),\
 	DCCG_SF(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO_SEL, mask_sh),\
 	DCCG_SF(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO0_SOURCE_SEL, mask_sh),\
 	DCCG_SF(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_CHG_MODE, mask_sh), \
@@ -163,7 +163,7 @@ void dccg31_init(struct dccg *dccg);
 
 void dccg31_set_dpstreamclk(
 		struct dccg *dccg,
-		enum hdmistreamclk_source src,
+		enum streamclk_source src,
 		int otg_inst);
 
 void dccg31_enable_symclk32_se(
@@ -194,8 +194,4 @@ void dccg31_set_audio_dtbclk_dto(
 		struct dccg *dccg,
 		uint32_t req_audio_dtbclk_khz);
 
-void dccg31_set_hdmistreamclk(
-		struct dccg *dccg,
-		enum hdmistreamclk_source src);
-
 #endif //__DCN31_DCCG_H__
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
index c51f7dca94f8..96fac715a77b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
@@ -213,26 +213,6 @@ void optc31_set_drr(
 	}
 }
 
-void optc3_init_odm(struct timing_generator *optc)
-{
-	struct optc *optc1 = DCN10TG_FROM_TG(optc);
-
-	REG_SET_5(OPTC_DATA_SOURCE_SELECT, 0,
-			OPTC_NUM_OF_INPUT_SEGMENT, 0,
-			OPTC_SEG0_SRC_SEL, optc->inst,
-			OPTC_SEG1_SRC_SEL, 0xf,
-			OPTC_SEG2_SRC_SEL, 0xf,
-			OPTC_SEG3_SRC_SEL, 0xf
-			);
-
-	REG_SET(OTG_H_TIMING_CNTL, 0,
-			OTG_H_TIMING_DIV_MODE, 0);
-
-	REG_SET(OPTC_MEMORY_CONFIG, 0,
-			OPTC_MEM_SEL, 0);
-	optc1->opp_count = 1;
-}
-
 static struct timing_generator_funcs dcn31_tg_funcs = {
 		.validate_timing = optc1_validate_timing,
 		.program_timing = optc1_program_timing,
@@ -266,6 +246,7 @@ static struct timing_generator_funcs dcn31_tg_funcs = {
 		.lock_doublebuffer_disable = optc3_lock_doublebuffer_disable,
 		.enable_optc_clock = optc1_enable_optc_clock,
 		.set_drr = optc31_set_drr,
+		.get_last_used_drr_vtotal = optc2_get_last_used_drr_vtotal,
 		.set_vtotal_min_max = optc1_set_vtotal_min_max,
 		.set_static_screen_control = optc1_set_static_screen_control,
 		.program_stereo = optc1_program_stereo,
@@ -292,7 +273,6 @@ static struct timing_generator_funcs dcn31_tg_funcs = {
 		.program_manual_trigger = optc2_program_manual_trigger,
 		.setup_manual_trigger = optc2_setup_manual_trigger,
 		.get_hw_timing = optc1_get_hw_timing,
-		.init_odm = optc3_init_odm,
 };
 
 void dcn31_timing_generator_init(struct optc *optc1)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.h b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.h
index 9e881f2ce74b..3706e6f7880e 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.h
@@ -98,7 +98,8 @@
 	SRI(OPTC_WIDTH_CONTROL, ODM, inst),\
 	SRI(OPTC_MEMORY_CONFIG, ODM, inst),\
 	SRI(OTG_CRC_CNTL2, OTG, inst),\
-	SR(DWB_SOURCE_SELECT)
+	SR(DWB_SOURCE_SELECT),\
+	SRI(OTG_DRR_CONTROL, OTG, inst)
 
 #define OPTC_COMMON_MASK_SH_LIST_DCN3_1(mask_sh)\
 	SF(OTG0_OTG_VSTARTUP_PARAM, VSTARTUP_START, mask_sh),\
@@ -252,7 +253,8 @@
 	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DSC_MODE, mask_sh),\
 	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_STREAM_COMBINE_MODE, mask_sh),\
 	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_STREAM_SPLIT_MODE, mask_sh),\
-	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_FORMAT, mask_sh)
+	SF(OTG0_OTG_CRC_CNTL2, OTG_CRC_DATA_FORMAT, mask_sh),\
+	SF(OTG0_OTG_DRR_CONTROL, OTG_V_TOTAL_LAST_USED_BY_DRR, mask_sh)
 
 void dcn31_timing_generator_init(struct optc *optc1);
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
index 62532da94913..4d7588f2ee79 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.c
@@ -211,8 +211,10 @@ static void enc32_stream_encoder_hdmi_set_stream_attribute(
 		HDMI_GC_SEND, 1,
 		HDMI_NULL_SEND, 1);
 
+#if defined(CONFIG_DRM_AMD_DC_HDCP)
 	/* Disable Audio Content Protection packet transmission */
 	REG_UPDATE(HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, 0);
+#endif
 
 	/* following belongs to audio */
 	/* Enable Audio InfoFrame packet transmission. */
@@ -301,6 +303,21 @@ static void enc32_stream_encoder_dp_unblank(
 
 	REG_UPDATE(DP_STEER_FIFO, DP_STEER_FIFO_RESET, 0);
 
+	/* DIG Resync FIFO now needs to be explicitly enabled
+	 */
+	// TODO: Confirm if we need to wait for DIG_SYMCLK_FE_ON
+	REG_WAIT(DIG_FE_CNTL, DIG_SYMCLK_FE_ON, 1, 10, 5000);
+
+	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 1);
+
+	REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 1, 10, 5000);
+
+	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 0);
+
+	REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 0, 10, 5000);
+
+	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, 1);
+
 	/* wait 100us for DIG/DP logic to prime
 	 * (i.e. a few video lines)
 	 */
@@ -354,6 +371,23 @@ static void enc32_read_state(struct stream_encoder *enc, struct enc_state *s)
 	}
 }
 
+static void enc32_stream_encoder_reset_fifo(struct stream_encoder *enc)
+{
+	struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
+	uint32_t fifo_enabled;
+
+	REG_GET(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, &fifo_enabled);
+
+	if (fifo_enabled == 0) {
+		/* reset DIG resync FIFO */
+		REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 1);
+		/* TODO: fix timeout when wait for DIG_FIFO_RESET_DONE */
+		//REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 1, 1, 100);
+		udelay(1);
+		REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 0);
+		REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 0, 1, 100);
+	}
+}
 
 static const struct stream_encoder_funcs dcn32_str_enc_funcs = {
 	.dp_set_odm_combine =
@@ -375,7 +409,7 @@ static const struct stream_encoder_funcs dcn32_str_enc_funcs = {
 	.stop_dp_info_packets =
 		enc1_stream_encoder_stop_dp_info_packets,
 	.reset_fifo =
-		enc1_stream_encoder_reset_fifo,
+		enc32_stream_encoder_reset_fifo,
 	.dp_blank =
 		enc1_stream_encoder_dp_blank,
 	.dp_unblank =
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
index 7241080b1553..77da0a13525b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
@@ -89,7 +89,8 @@
 	SRI(DP_SEC_METADATA_TRANSMISSION, DP, id), \
 	SRI(HDMI_METADATA_PACKET_CONTROL, DIG, id), \
 	SRI(DIG_FE_CNTL, DIG, id), \
-	SRI(DIG_CLOCK_PATTERN, DIG, id)
+	SRI(DIG_CLOCK_PATTERN, DIG, id), \
+	SRI(DIG_FIFO_CTRL0, DIG, id)
 
 
 #define SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh)\
@@ -233,12 +234,22 @@
 	SE_SF(DIG0_HDMI_METADATA_PACKET_CONTROL, HDMI_METADATA_PACKET_LINE_REFERENCE, mask_sh),\
 	SE_SF(DIG0_HDMI_METADATA_PACKET_CONTROL, HDMI_METADATA_PACKET_LINE, mask_sh),\
 	SE_SF(DIG0_DIG_FE_CNTL, DOLBY_VISION_EN, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, DIG_SYMCLK_FE_ON, mask_sh),\
 	SE_SF(DP0_DP_SEC_FRAMING4, DP_SST_SDP_SPLITTING, mask_sh),\
-	SE_SF(DIG0_DIG_CLOCK_PATTERN, DIG_CLOCK_PATTERN, mask_sh)
+	SE_SF(DIG0_DIG_CLOCK_PATTERN, DIG_CLOCK_PATTERN, mask_sh),\
+	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_READ_START_LEVEL, mask_sh),\
+	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, mask_sh),\
+	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_RESET, mask_sh),\
+	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, mask_sh)
 
+#if defined(CONFIG_DRM_AMD_DC_HDCP)
 #define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
 	SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh),\
 	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_ACP_SEND, mask_sh)
+#else
+#define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
+	SE_COMMON_MASK_SH_LIST_DCN32_BASE(mask_sh)
+#endif
 
 void dcn32_dio_stream_encoder_construct(
 	struct dcn10_stream_encoder *enc1,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
index d0a222f1a09c..40afd33ffec6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -649,7 +649,7 @@ void dcn32_init_hw(struct dc *dc)
 	 * Otherwise, if taking control is not possible, we need to power
 	 * everything down.
 	 */
-	if (dcb->funcs->is_accelerated_mode(dcb) || dc->config.power_down_display_on_boot) {
+	if (dcb->funcs->is_accelerated_mode(dcb) || dc->config.seamless_boot_edp_requested) {
 		hws->funcs.init_pipes(dc, dc->current_state);
 		if (dc->res_pool->hubbub->funcs->allow_self_refresh_control)
 			dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
@@ -661,7 +661,7 @@ void dcn32_init_hw(struct dc *dc)
 	 * To avoid this, power down hardware on boot
 	 * if DIG is turned on and seamless boot not enabled
 	 */
-	if (dc->config.power_down_display_on_boot) {
+	if (dc->config.seamless_boot_edp_requested) {
 		struct dc_link *edp_links[MAX_NUM_EDP];
 		struct dc_link *edp_link;
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
index f1ed25e972f2..88275ea4193c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.c
@@ -91,10 +91,18 @@ static void optc32_set_odm_combine(struct timing_generator *optc, int *opp_id, i
 	REG_UPDATE(OPTC_WIDTH_CONTROL,
 			OPTC_SEGMENT_WIDTH, mpcc_hactive);
 
-	REG_SET(OTG_H_TIMING_CNTL, 0, OTG_H_TIMING_DIV_MODE, opp_cnt - 1);
+	REG_UPDATE(OTG_H_TIMING_CNTL,
+			OTG_H_TIMING_DIV_MODE, opp_cnt - 1);
 	optc1->opp_count = opp_cnt;
 }
 
+static void optc32_set_h_timing_div_manual_mode(struct timing_generator *optc, bool manual_mode)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+
+	REG_UPDATE(OTG_H_TIMING_CNTL,
+			OTG_H_TIMING_DIV_MODE_MANUAL, manual_mode ? 1 : 0);
+}
 /**
  * Enable CRTC
  * Enable CRTC - call ASIC Control Object to enable Timing generator.
@@ -157,6 +165,29 @@ void optc32_phantom_crtc_post_enable(struct timing_generator *optc)
 	REG_WAIT(OTG_CLOCK_CONTROL, OTG_BUSY, 0, 1, 100000);
 }
 
+static void optc32_set_odm_bypass(struct timing_generator *optc,
+		const struct dc_crtc_timing *dc_crtc_timing)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+	enum h_timing_div_mode h_div = H_TIMING_NO_DIV;
+
+	REG_SET_5(OPTC_DATA_SOURCE_SELECT, 0,
+			OPTC_NUM_OF_INPUT_SEGMENT, 0,
+			OPTC_SEG0_SRC_SEL, optc->inst,
+			OPTC_SEG1_SRC_SEL, 0xf,
+			OPTC_SEG2_SRC_SEL, 0xf,
+			OPTC_SEG3_SRC_SEL, 0xf
+			);
+
+	h_div = optc1_is_two_pixels_per_containter(dc_crtc_timing);
+	REG_UPDATE(OTG_H_TIMING_CNTL,
+			OTG_H_TIMING_DIV_MODE, h_div);
+
+	REG_SET(OPTC_MEMORY_CONFIG, 0,
+			OPTC_MEM_SEL, 0);
+	optc1->opp_count = 1;
+}
+
 
 static struct timing_generator_funcs dcn32_tg_funcs = {
 		.validate_timing = optc1_validate_timing,
@@ -206,8 +237,9 @@ static struct timing_generator_funcs dcn32_tg_funcs = {
 		.set_dsc_config = optc3_set_dsc_config,
 		.get_dsc_status = optc2_get_dsc_status,
 		.set_dwb_source = NULL,
-		.set_odm_bypass = optc3_set_odm_bypass,
+		.set_odm_bypass = optc32_set_odm_bypass,
 		.set_odm_combine = optc32_set_odm_combine,
+		.set_h_timing_div_manual_mode = optc32_set_h_timing_div_manual_mode,
 		.get_optc_source = optc2_get_optc_source,
 		.set_out_mux = optc3_set_out_mux,
 		.set_drr_trigger_window = optc3_set_drr_trigger_window,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index 17a287bc89ce..7ac6428aae52 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -77,7 +77,7 @@
 
 #include "dcn/dcn_3_2_0_offset.h"
 #include "dcn/dcn_3_2_0_sh_mask.h"
-#include "dcn/nbio_4_3_0_offset.h"
+#include "nbio/nbio_4_3_0_offset.h"
 
 #include "reg_helper.h"
 #include "dce/dmub_abm.h"
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
index f79dd40f8d81..8d4c74b0fc90 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
@@ -811,9 +811,14 @@ void dcn20_calculate_dlg_params(
 		pipes[pipe_idx].pipe.dest.vupdate_offset = get_vupdate_offset(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
 		pipes[pipe_idx].pipe.dest.vupdate_width = get_vupdate_width(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
 		pipes[pipe_idx].pipe.dest.vready_offset = get_vready_offset(&context->bw_ctx.dml, pipes, pipe_cnt, pipe_idx);
-		context->res_ctx.pipe_ctx[i].det_buffer_size_kb = context->bw_ctx.dml.ip.det_buffer_size_kbytes;
-		context->res_ctx.pipe_ctx[i].unbounded_req = pipes[pipe_idx].pipe.src.unbounded_req_mode;
-
+		if (context->res_ctx.pipe_ctx[i].stream->mall_stream_config.type == SUBVP_PHANTOM) {
+			// Phantom pipe requires that DET_SIZE = 0 and no unbounded requests
+			context->res_ctx.pipe_ctx[i].det_buffer_size_kb = 0;
+			context->res_ctx.pipe_ctx[i].unbounded_req = false;
+		} else {
+			context->res_ctx.pipe_ctx[i].det_buffer_size_kb = context->bw_ctx.dml.ip.det_buffer_size_kbytes;
+			context->res_ctx.pipe_ctx[i].unbounded_req = pipes[pipe_idx].pipe.src.unbounded_req_mode;
+		}
 		if (context->bw_ctx.bw.dcn.clk.dppclk_khz < pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000)
 			context->bw_ctx.bw.dcn.clk.dppclk_khz = pipes[pipe_idx].clks_cfg.dppclk_mhz * 1000;
 		context->res_ctx.pipe_ctx[i].plane_res.bw.dppclk_khz =
@@ -1013,6 +1018,31 @@ int dcn20_populate_dml_pipes_from_context(
 			pipes[pipe_cnt].pipe.dest.pixel_rate_mhz *= 2;
 		pipes[pipe_cnt].pipe.dest.otg_inst = res_ctx->pipe_ctx[i].stream_res.tg->inst;
 		pipes[pipe_cnt].dout.dp_lanes = 4;
+		if (res_ctx->pipe_ctx[i].stream->link) {
+			switch (res_ctx->pipe_ctx[i].stream->link->cur_link_settings.link_rate) {
+			case LINK_RATE_HIGH:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr;
+				break;
+			case LINK_RATE_HIGH2:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr2;
+				break;
+			case LINK_RATE_HIGH3:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr3;
+				break;
+			case LINK_RATE_UHBR10:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_uhbr10;
+				break;
+			case LINK_RATE_UHBR13_5:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_uhbr13p5;
+				break;
+			case LINK_RATE_UHBR20:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_uhbr20;
+				break;
+			default:
+				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_na;
+				break;
+			}
+		}
 		pipes[pipe_cnt].dout.is_virtual = 0;
 		pipes[pipe_cnt].pipe.dest.vtotal_min = res_ctx->pipe_ctx[i].stream->adjust.v_total_min;
 		pipes[pipe_cnt].pipe.dest.vtotal_max = res_ctx->pipe_ctx[i].stream->adjust.v_total_max;
@@ -1070,6 +1100,7 @@ int dcn20_populate_dml_pipes_from_context(
 			pipes[pipe_cnt].dout.is_virtual = 1;
 			pipes[pipe_cnt].dout.output_type = dm_dp;
 			pipes[pipe_cnt].dout.dp_lanes = 4;
+			pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr2;
 		}
 
 		switch (res_ctx->pipe_ctx[i].stream->timing.display_color_depth) {
@@ -1138,7 +1169,8 @@ int dcn20_populate_dml_pipes_from_context(
 		 * bw calculations due to cursor on/off
 		 */
 		if (res_ctx->pipe_ctx[i].plane_state &&
-				res_ctx->pipe_ctx[i].plane_state->address.type == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
+				(res_ctx->pipe_ctx[i].plane_state->address.type == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE ||
+				 res_ctx->pipe_ctx[i].stream->mall_stream_config.type == SUBVP_PHANTOM))
 			pipes[pipe_cnt].pipe.src.num_cursors = 0;
 		else
 			pipes[pipe_cnt].pipe.src.num_cursors = dc->dml.ip.number_of_cursors;
@@ -1149,6 +1181,7 @@ int dcn20_populate_dml_pipes_from_context(
 		if (!res_ctx->pipe_ctx[i].plane_state) {
 			pipes[pipe_cnt].pipe.src.is_hsplit = pipes[pipe_cnt].pipe.dest.odm_combine != dm_odm_combine_mode_disabled;
 			pipes[pipe_cnt].pipe.src.source_scan = dm_horz;
+			pipes[pipe_cnt].pipe.src.source_rotation = dm_rotation_0;
 			pipes[pipe_cnt].pipe.src.sw_mode = dm_sw_4kb_s;
 			pipes[pipe_cnt].pipe.src.macro_tile_size = dm_64k_tile;
 			pipes[pipe_cnt].pipe.src.viewport_width = timing->h_addressable;
@@ -1201,8 +1234,26 @@ int dcn20_populate_dml_pipes_from_context(
 
 			pipes[pipe_cnt].pipe.src.source_scan = pln->rotation == ROTATION_ANGLE_90
 					|| pln->rotation == ROTATION_ANGLE_270 ? dm_vert : dm_horz;
+			switch (pln->rotation) {
+			case ROTATION_ANGLE_0:
+				pipes[pipe_cnt].pipe.src.source_rotation = dm_rotation_0;
+				break;
+			case ROTATION_ANGLE_90:
+				pipes[pipe_cnt].pipe.src.source_rotation = dm_rotation_90;
+				break;
+			case ROTATION_ANGLE_180:
+				pipes[pipe_cnt].pipe.src.source_rotation = dm_rotation_180;
+				break;
+			case ROTATION_ANGLE_270:
+				pipes[pipe_cnt].pipe.src.source_rotation = dm_rotation_270;
+				break;
+			default:
+				break;
+			}
 			pipes[pipe_cnt].pipe.src.viewport_y_y = scl->viewport.y;
 			pipes[pipe_cnt].pipe.src.viewport_y_c = scl->viewport_c.y;
+			pipes[pipe_cnt].pipe.src.viewport_x_y = scl->viewport.x;
+			pipes[pipe_cnt].pipe.src.viewport_x_c = scl->viewport_c.x;
 			pipes[pipe_cnt].pipe.src.viewport_width = scl->viewport.width;
 			pipes[pipe_cnt].pipe.src.viewport_width_c = scl->viewport_c.width;
 			pipes[pipe_cnt].pipe.src.viewport_height = scl->viewport.height;
diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
index 555d4d9e1454..0317af5bb8ca 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
@@ -195,6 +195,16 @@ struct resource_funcs {
 	enum dc_status (*add_dsc_to_stream_resource)(
 			struct dc *dc, struct dc_state *state,
 			struct dc_stream_state *stream);
+
+	void (*add_phantom_pipes)(
+            struct dc *dc,
+            struct dc_state *context,
+            display_e2e_pipe_params_st *pipes,
+			unsigned int pipe_cnt,
+            unsigned int index);
+	void (*remove_phantom_pipes)(
+            struct dc *dc,
+            struct dc_state *context);
 };
 
 struct audio_support{
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
index b2fa4de47734..5cc6d1fb887d 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
@@ -45,9 +45,10 @@ enum physymclk_clock_source {
 	PHYSYMCLK_FORCE_SRC_PHYD32CLK, // Select phyd32clk as the source of clock which is output to PHY through DCIO.
 };
 
-enum hdmistreamclk_source {
+enum streamclk_source {
 	REFCLK,                   // Selects REFCLK as source for hdmistreamclk.
 	DTBCLK0,                  // Selects DTBCLK0 as source for hdmistreamclk.
+	DPREFCLK,                 // Selects DPREFCLK as source for hdmistreamclk
 };
 
 enum dentist_dispclk_change_mode {
@@ -82,7 +83,7 @@ struct dccg_funcs {
 
 	void (*set_dpstreamclk)(
 			struct dccg *dccg,
-			enum hdmistreamclk_source src,
+			enum streamclk_source src,
 			int otg_inst);
 
 	void (*enable_symclk32_se)(
@@ -111,8 +112,8 @@ struct dccg_funcs {
 
 	void (*set_dtbclk_dto)(
 			struct dccg *dccg,
-			int dtbclk_inst,
-			int req_dtbclk_khz,
+			int otg_inst,
+			int pixclk_khz,
 			int num_odm_segments,
 			const struct dc_crtc_timing *timing);
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
index 9195dec294c2..e7571c6f5ead 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
@@ -47,6 +47,8 @@ struct dcn_hubbub_wm_set {
 	uint32_t sr_enter;
 	uint32_t sr_exit;
 	uint32_t dram_clk_chanage;
+	uint32_t usr_retrain;
+	uint32_t fclk_pstate_change;
 };
 
 struct dcn_hubbub_wm {
@@ -168,6 +170,7 @@ struct hubbub_funcs {
 	void (*program_det_size)(struct hubbub *hubbub, int hubp_inst, unsigned det_buffer_size_in_kbyte);
 	void (*program_compbuf_size)(struct hubbub *hubbub, unsigned compbuf_size_kb, bool safe_to_increase);
 	void (*init_crb)(struct hubbub *hubbub);
+	void (*force_usr_retraining_allow)(struct hubbub *hubbub, bool allow);
 };
 
 struct hubbub {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
index ad69d78c4ac3..b2cdb6bfc9b8 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
@@ -140,6 +140,9 @@ struct hubp_funcs {
 
 	void (*set_blank)(struct hubp *hubp, bool blank);
 	void (*set_blank_regs)(struct hubp *hubp, bool blank);
+#ifdef CONFIG_DRM_AMD_DC_DCN
+	void (*phantom_hubp_post_enable)(struct hubp *hubp);
+#endif
 	void (*set_hubp_blank_en)(struct hubp *hubp, bool blank);
 
 	void (*set_cursor_attributes)(
@@ -193,6 +196,10 @@ struct hubp_funcs {
 	bool (*hubp_in_blank)(struct hubp *hubp);
 	void (*hubp_soft_reset)(struct hubp *hubp, bool reset);
 
+	void (*hubp_update_force_pstate_disallow)(struct hubp *hubp, bool allow);
+	void (*hubp_update_mall_sel)(struct hubp *hubp, uint32_t mall_sel);
+	void (*hubp_prepare_subvp_buffering)(struct hubp *hubp, bool enable);
+
 	void (*hubp_set_flip_int)(struct hubp *hubp);
 
 	void (*program_extended_blank)(struct hubp *hubp,
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h b/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h
index 8798cfa11a4d..b72fb314d804 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h
@@ -37,6 +37,7 @@ struct cstate_pstate_watermarks_st {
 	uint32_t cstate_enter_plus_exit_z8_ns;
 	uint32_t cstate_enter_plus_exit_ns;
 	uint32_t pstate_change_ns;
+	uint32_t fclk_pstate_change_ns;
 };
 
 struct dcn_watermarks {
@@ -46,6 +47,7 @@ struct dcn_watermarks {
 	uint32_t frac_urg_bw_flip;
 	int32_t urgent_latency_ns;
 	struct cstate_pstate_watermarks_st cstate_pstate;
+	uint32_t usr_retraining_ns;
 };
 
 struct dcn_watermark_set {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h b/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
index 678c2065e5e8..36ec56524afd 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
@@ -164,6 +164,10 @@ struct stream_encoder_funcs {
 	void (*stop_dp_info_packets)(
 		struct stream_encoder *enc);
 
+	void (*reset_fifo)(
+		struct stream_encoder *enc
+	);
+
 	void (*dp_blank)(
 		struct dc_link *link,
 		struct stream_encoder *enc);
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
index 554d2e33bd7f..a89b2230cd2c 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
@@ -174,6 +174,9 @@ struct timing_generator_funcs {
 
 	bool (*enable_crtc)(struct timing_generator *tg);
 	bool (*disable_crtc)(struct timing_generator *tg);
+#ifdef CONFIG_DRM_AMD_DC_DCN
+	void (*phantom_crtc_post_enable)(struct timing_generator *tg);
+#endif
 	bool (*immediate_disable_crtc)(struct timing_generator *tg);
 	bool (*is_counter_moving)(struct timing_generator *tg);
 	void (*get_position)(struct timing_generator *tg,
@@ -293,6 +296,7 @@ struct timing_generator_funcs {
 	void (*set_odm_bypass)(struct timing_generator *optc, const struct dc_crtc_timing *dc_crtc_timing);
 	void (*set_odm_combine)(struct timing_generator *optc, int *opp_id, int opp_cnt,
 			struct dc_crtc_timing *timing);
+	void (*set_h_timing_div_manual_mode)(struct timing_generator *optc, bool manual_mode);
 	void (*set_gsl)(struct timing_generator *optc, const struct gsl_params *params);
 	void (*set_gsl_source_select)(struct timing_generator *optc,
 			int group_idx,
@@ -310,8 +314,10 @@ struct timing_generator_funcs {
 			uint32_t slave_pixel_clock_100Hz,
 			uint8_t master_clock_divider,
 			uint8_t slave_clock_divider);
-
-	void (*init_odm)(struct timing_generator *tg);
+	bool (*validate_vmin_vmax)(struct timing_generator *optc,
+			int vmin, int vmax);
+	bool (*validate_vtotal_change_limit)(struct timing_generator *optc,
+			uint32_t vtotal_change_limit);
 };
 
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
index 05053f3b4ab7..eb616a4ed508 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
@@ -244,6 +244,8 @@ struct hw_sequencer_funcs {
 			struct pipe_ctx *pipe_ctx,
 			struct tg_color *color,
 			int mpcc_id);
+
+	void (*commit_subvp_config)(struct dc *dc, struct dc_state *context);
 };
 
 void color_space_to_black_color(
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
index 8c2f190c4712..5a0648784872 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
@@ -143,6 +143,11 @@ struct hwseq_private_funcs {
 	void (*PLAT_58856_wa)(struct dc_state *context,
 			struct pipe_ctx *pipe_ctx);
 	void (*setup_hpo_hw_control)(const struct dce_hwseq *hws, bool enable);
+#ifdef CONFIG_DRM_AMD_DC_DCN
+	void (*program_mall_pipe_config)(struct dc *dc, struct dc_state *context);
+	void (*subvp_update_force_pstate)(struct dc *dc, struct dc_state *context);
+	void (*update_mall_sel)(struct dc *dc, struct dc_state *context);
+#endif
 };
 
 struct dce_hwseq {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/resource.h b/drivers/gpu/drm/amd/display/dc/inc/resource.h
index 2369f38ed06f..58158764adc0 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/resource.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/resource.h
@@ -205,6 +205,13 @@ bool get_temp_dp_link_res(struct dc_link *link,
 		struct link_resource *link_res,
 		struct dc_link_settings *link_settings);
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+struct hpo_dp_link_encoder *resource_get_hpo_dp_link_enc_for_det_lt(
+		const struct resource_context *res_ctx,
+		const struct resource_pool *pool,
+		const struct dc_link *link);
+#endif
+
 void reset_syncd_pipes_from_disabled_pipes(struct dc *dc,
 	struct dc_state *context);
 
diff --git a/drivers/gpu/drm/amd/display/include/bios_parser_types.h b/drivers/gpu/drm/amd/display/include/bios_parser_types.h
index cf4027cc3f4c..83058bcbb2e8 100644
--- a/drivers/gpu/drm/amd/display/include/bios_parser_types.h
+++ b/drivers/gpu/drm/amd/display/include/bios_parser_types.h
@@ -335,4 +335,14 @@ struct bp_soc_bb_info {
 	uint32_t dram_sr_enter_exit_latency_100ns;
 };
 
+struct bp_connector_speed_cap_info {
+	uint32_t DP_HBR2_EN:1;
+	uint32_t DP_HBR3_EN:1;
+	uint32_t HDMI_6GB_EN:1;
+	uint32_t DP_UHBR10_EN:1;
+	uint32_t DP_UHBR13_5_EN:1;
+	uint32_t DP_UHBR20_EN:1;
+	uint32_t RESERVED:29;
+};
+
 #endif /*__DAL_BIOS_PARSER_TYPES_H__ */
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 12/43] drm/amd/display: Add DM support for DCN32/DCN321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (9 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 11/43] drm/amd/display: Add dependant changes for DCN32/321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 13/43] drm/amd/display: add DCN32 to IP discovery table Alex Deucher
                   ` (30 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Add Display Manager (DM) specific changes for DCN 3.2.x.  The DM
handles the interaction between the core DC modesetting code and
the DRM modesetting infrastructure.

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 05f34b46e109..a93affc37c53 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -117,6 +117,11 @@ MODULE_FIRMWARE(FIRMWARE_DCN_315_DMUB);
 #define FIRMWARE_DCN316_DMUB "amdgpu/dcn_3_1_6_dmcub.bin"
 MODULE_FIRMWARE(FIRMWARE_DCN316_DMUB);
 
+#define FIRMWARE_DCN_V3_2_0_DMCUB "amdgpu/dcn_3_2_0_dmcub.bin"
+MODULE_FIRMWARE(FIRMWARE_DCN_V3_2_0_DMCUB);
+#define FIRMWARE_DCN_V3_2_1_DMCUB "amdgpu/dcn_3_2_1_dmcub.bin"
+MODULE_FIRMWARE(FIRMWARE_DCN_V3_2_1_DMCUB);
+
 #define FIRMWARE_RAVEN_DMCU		"amdgpu/raven_dmcu.bin"
 MODULE_FIRMWARE(FIRMWARE_RAVEN_DMCU);
 
@@ -1800,6 +1805,8 @@ static int load_dmcu_fw(struct amdgpu_device *adev)
 		case IP_VERSION(3, 1, 3):
 		case IP_VERSION(3, 1, 5):
 		case IP_VERSION(3, 1, 6):
+		case IP_VERSION(3, 2, 0):
+		case IP_VERSION(3, 2, 1):
 			return 0;
 		default:
 			break;
@@ -1923,6 +1930,14 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
 		dmub_asic = DMUB_ASIC_DCN316;
 		fw_name_dmub = FIRMWARE_DCN316_DMUB;
 		break;
+	case IP_VERSION(3, 2, 0):
+		dmub_asic = DMUB_ASIC_DCN32;
+		fw_name_dmub = FIRMWARE_DCN_V3_2_0_DMCUB;
+		break;
+	case IP_VERSION(3, 2, 1):
+		dmub_asic = DMUB_ASIC_DCN321;
+		fw_name_dmub = FIRMWARE_DCN_V3_2_1_DMCUB;
+		break;
 	default:
 		/* ASIC doesn't support DMUB. */
 		return 0;
@@ -4232,6 +4247,8 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 	case IP_VERSION(3, 1, 3):
 	case IP_VERSION(3, 1, 5):
 	case IP_VERSION(3, 1, 6):
+	case IP_VERSION(3, 2, 0):
+	case IP_VERSION(3, 2, 1):
 	case IP_VERSION(2, 1, 0):
 		if (register_outbox_irq_handlers(dm->adev)) {
 			DRM_ERROR("DM: Failed to initialize IRQ\n");
@@ -4250,6 +4267,8 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 		case IP_VERSION(3, 1, 3):
 		case IP_VERSION(3, 1, 5):
 		case IP_VERSION(3, 1, 6):
+		case IP_VERSION(3, 2, 0):
+		case IP_VERSION(3, 2, 1):
 			psr_feature_enabled = true;
 			break;
 		default:
@@ -4367,6 +4386,8 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 		case IP_VERSION(3, 1, 3):
 		case IP_VERSION(3, 1, 5):
 		case IP_VERSION(3, 1, 6):
+		case IP_VERSION(3, 2, 0):
+		case IP_VERSION(3, 2, 1):
 			if (dcn10_register_irq_handlers(dm->adev)) {
 				DRM_ERROR("DM: Failed to initialize IRQ\n");
 				goto fail;
@@ -4553,6 +4574,8 @@ static int dm_early_init(void *handle)
 		case IP_VERSION(3, 1, 3):
 		case IP_VERSION(3, 1, 5):
 		case IP_VERSION(3, 1, 6):
+		case IP_VERSION(3, 2, 0):
+		case IP_VERSION(3, 2, 1):
 			adev->mode_info.num_crtc = 4;
 			adev->mode_info.num_hpd = 4;
 			adev->mode_info.num_dig = 4;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 13/43] drm/amd/display: add DCN32 to IP discovery table
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (10 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 12/43] drm/amd/display: Add DM support for DCN32/DCN321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 14/43] drm/amd: Add GFX11 modifiers support to AMDGPU Alex Deucher
                   ` (29 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

[Why&How]
Add DCN32 to the IP discovery table to enable automatic initialization
of the AMDGPU Display Manager.

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 47f0344205ed..91f21b725a43 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -1709,6 +1709,8 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
 		case IP_VERSION(3, 1, 3):
 		case IP_VERSION(3, 1, 5):
 		case IP_VERSION(3, 1, 6):
+		case IP_VERSION(3, 2, 0):
+		case IP_VERSION(3, 2, 1):
 			amdgpu_device_ip_block_add(adev, &dm_ip_block);
 			break;
 		default:
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 14/43] drm/amd: Add GFX11 modifiers support to AMDGPU
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (11 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 13/43] drm/amd/display: add DCN32 to IP discovery table Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 19:27   ` Marek Olšák
  2022-05-25 16:19 ` [PATCH 15/43] drm/amd/display: Fix USBC link creation Alex Deucher
                   ` (28 subsequent siblings)
  41 siblings, 1 reply; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

GFX11 IP introduces a new tiling mode. Various combinations of DCC
settings are possible, and the most preferred settings must be exposed
for optimal use of the hardware.

add_gfx11_modifiers() is based on recommendations from Marek for the
preferred tiling modifiers that are most efficient for the hardware.
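
For illustration only (not part of the change itself): the modifiers
added here are composed from the AMD_FMT_MOD_* field macros in
include/uapi/drm/drm_fourcc.h.  A minimal sketch of how the preferred
GFX11 R_X + DCC modifier is built and later recognized, assuming
pipe_xor_bits and pkrs have already been derived from GB_ADDR_CONFIG
as in the hunks below:

	u64 mod = AMD_FMT_MOD |
		  AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX11) |
		  AMD_FMT_MOD_SET(TILE, AMD_FMT_MOD_TILE_GFX9_64K_R_X) |
		  AMD_FMT_MOD_SET(PIPE_XOR_BITS, pipe_xor_bits) |
		  AMD_FMT_MOD_SET(PACKERS, pkrs) |
		  AMD_FMT_MOD_SET(DCC, 1) |
		  AMD_FMT_MOD_SET(DCC_INDEPENDENT_128B, 1) |
		  AMD_FMT_MOD_SET(DCC_MAX_COMPRESSED_BLOCK,
				  AMD_FMT_MOD_DCC_BLOCK_128B);

	/* GFX11 reuses the RB+ DCC handling, so version checks become >= */
	if (AMD_FMT_MOD_GET(TILE_VERSION, mod) >= AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS) {
		/* independent 64B/128B DCC block handling, as done in
		 * fill_gfx9_plane_attributes_from_modifiers()
		 */
	}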

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 40 ++++++++--
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 74 ++++++++++++++++++-
 include/uapi/drm/drm_fourcc.h                 |  2 +
 3 files changed, 108 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index ec395fe427f2..a54081a89282 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -30,6 +30,9 @@
 #include "atom.h"
 #include "amdgpu_connectors.h"
 #include "amdgpu_display.h"
+#include "soc15_common.h"
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
 #include <asm/div64.h>
 
 #include <linux/pci.h>
@@ -662,6 +665,11 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
 {
 	struct amdgpu_device *adev = drm_to_adev(afb->base.dev);
 	uint64_t modifier = 0;
+	int num_pipes = 0;
+	int num_pkrs = 0;
+
+	num_pkrs = adev->gfx.config.gb_addr_config_fields.num_pkrs;
+	num_pipes = adev->gfx.config.gb_addr_config_fields.num_pipes;
 
 	if (!afb->tiling_flags || !AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE)) {
 		modifier = DRM_FORMAT_MOD_LINEAR;
@@ -674,7 +682,7 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
 		int bank_xor_bits = 0;
 		int packers = 0;
 		int rb = 0;
-		int pipes = ilog2(adev->gfx.config.gb_addr_config_fields.num_pipes);
+		int pipes = ilog2(num_pipes);
 		uint32_t dcc_offset = AMDGPU_TILING_GET(afb->tiling_flags, DCC_OFFSET_256B);
 
 		switch (swizzle >> 2) {
@@ -690,12 +698,17 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
 		case 6: /* 64 KiB _X */
 			block_size_bits = 16;
 			break;
+		case 7: /* 256 KiB */
+			block_size_bits = 18;
+			break;
 		default:
 			/* RESERVED or VAR */
 			return -EINVAL;
 		}
 
-		if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
+		if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 0))
+			version = AMD_FMT_MOD_TILE_VER_GFX11;
+		else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
 			version = AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS;
 		else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0))
 			version = AMD_FMT_MOD_TILE_VER_GFX10;
@@ -718,7 +731,17 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
 		}
 
 		if (has_xor) {
+			if (num_pipes == num_pkrs && num_pkrs == 0) {
+				DRM_ERROR("invalid number of pipes and packers\n");
+				return -EINVAL;
+			}
+
 			switch (version) {
+			case AMD_FMT_MOD_TILE_VER_GFX11:
+				pipe_xor_bits = min(block_size_bits - 8, pipes);
+				packers = min(block_size_bits - 8 - pipe_xor_bits,
+						ilog2(adev->gfx.config.gb_addr_config_fields.num_pkrs));
+				break;
 			case AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS:
 				pipe_xor_bits = min(block_size_bits - 8, pipes);
 				packers = min(block_size_bits - 8 - pipe_xor_bits,
@@ -752,9 +775,10 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
 			u64 render_dcc_offset;
 
 			/* Enable constant encode on RAVEN2 and later. */
-			bool dcc_constant_encode = adev->asic_type > CHIP_RAVEN ||
+			bool dcc_constant_encode = (adev->asic_type > CHIP_RAVEN ||
 						   (adev->asic_type == CHIP_RAVEN &&
-						    adev->external_rev_id >= 0x81);
+						    adev->external_rev_id >= 0x81)) &&
+						    adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0);
 
 			int max_cblock_size = dcc_i64b ? AMD_FMT_MOD_DCC_BLOCK_64B :
 					      dcc_i128b ? AMD_FMT_MOD_DCC_BLOCK_128B :
@@ -869,10 +893,11 @@ static unsigned int get_dcc_block_size(uint64_t modifier, bool rb_aligned,
 		return max(10 + (rb_aligned ? (int)AMD_FMT_MOD_GET(RB, modifier) : 0), 12);
 	}
 	case AMD_FMT_MOD_TILE_VER_GFX10:
-	case AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS: {
+	case AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS:
+	case AMD_FMT_MOD_TILE_VER_GFX11: {
 		int pipes_log2 = AMD_FMT_MOD_GET(PIPE_XOR_BITS, modifier);
 
-		if (ver == AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS && pipes_log2 > 1 &&
+		if (ver >= AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS && pipes_log2 > 1 &&
 		    AMD_FMT_MOD_GET(PACKERS, modifier) == pipes_log2)
 			++pipes_log2;
 
@@ -965,6 +990,9 @@ static int amdgpu_display_verify_sizes(struct amdgpu_framebuffer *rfb)
 			case DC_SW_64KB_S_X:
 				block_size_log2 = 16;
 				break;
+			case DC_SW_VAR_S_X:
+				block_size_log2 = 18;
+				break;
 			default:
 				drm_dbg_kms(rfb->base.dev,
 					    "Swizzle mode with unknown block size: %d\n", swizzle);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index a93affc37c53..badd136f5686 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -88,10 +88,14 @@
 #include "dcn/dcn_1_0_offset.h"
 #include "dcn/dcn_1_0_sh_mask.h"
 #include "soc15_hw_ip.h"
+#include "soc15_common.h"
 #include "vega10_ip_offset.h"
 
 #include "soc15_common.h"
 
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+
 #include "modules/inc/mod_freesync.h"
 #include "modules/power/power_helpers.h"
 #include "modules/inc/mod_info_packet.h"
@@ -4885,7 +4889,9 @@ fill_gfx9_tiling_info_from_modifier(const struct amdgpu_device *adev,
 	unsigned int mod_bank_xor_bits = AMD_FMT_MOD_GET(BANK_XOR_BITS, modifier);
 	unsigned int mod_pipe_xor_bits = AMD_FMT_MOD_GET(PIPE_XOR_BITS, modifier);
 	unsigned int pkrs_log2 = AMD_FMT_MOD_GET(PACKERS, modifier);
-	unsigned int pipes_log2 = min(4u, mod_pipe_xor_bits);
+	unsigned int pipes_log2;
+
+	pipes_log2 = min(5u, mod_pipe_xor_bits);
 
 	fill_gfx9_tiling_info_from_device(adev, tiling_info);
 
@@ -5221,6 +5227,67 @@ add_gfx10_3_modifiers(const struct amdgpu_device *adev,
 		    AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX9));
 }
 
+static void
+add_gfx11_modifiers(const struct amdgpu_device *adev,
+		      uint64_t **mods, uint64_t *size, uint64_t *capacity)
+{
+	int num_pipes = 0;
+	int pipe_xor_bits = 0;
+	int num_pkrs = 0;
+	int pkrs = 0;
+	u32 gb_addr_config;
+	unsigned swizzle_r_x;
+	uint64_t modifier_r_x;
+	uint64_t modifier_dcc_best;
+	uint64_t modifier_dcc_4k;
+
+	/* TODO: GFX11 IP HW init hasnt finish and we get zero if we read from
+	 * adev->gfx.config.gb_addr_config_fields.num_{pkrs,pipes} */
+	gb_addr_config = RREG32_SOC15(GC, 0, regGB_ADDR_CONFIG);
+	ASSERT(gb_addr_config != 0);
+
+	num_pkrs = 1 << REG_GET_FIELD(gb_addr_config, GB_ADDR_CONFIG, NUM_PKRS);
+	pkrs = ilog2(num_pkrs);
+	num_pipes = 1 << REG_GET_FIELD(gb_addr_config, GB_ADDR_CONFIG, NUM_PIPES);
+	pipe_xor_bits = ilog2(num_pipes);
+
+    /* R_X swizzle modes are the best for rendering and DCC requires them. */
+    swizzle_r_x = num_pipes > 16 ? AMD_FMT_MOD_TILE_GFX11_256K_R_X :
+                                              AMD_FMT_MOD_TILE_GFX9_64K_R_X;
+
+	modifier_r_x = AMD_FMT_MOD |
+                            AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX11) |
+                            AMD_FMT_MOD_SET(TILE, swizzle_r_x) |
+                            AMD_FMT_MOD_SET(PIPE_XOR_BITS, pipe_xor_bits) |
+                            AMD_FMT_MOD_SET(PACKERS, pkrs);
+
+    /* DCC_CONSTANT_ENCODE is not set because it can't vary with gfx11 (it's implied to be 1). */
+	modifier_dcc_best = modifier_r_x |
+                            AMD_FMT_MOD_SET(DCC, 1) |
+                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_64B, 0) |
+                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_128B, 1) |
+                            AMD_FMT_MOD_SET(DCC_MAX_COMPRESSED_BLOCK, AMD_FMT_MOD_DCC_BLOCK_128B);
+
+    /* DCC settings for 4K and greater resolutions. (required by display hw) */
+	modifier_dcc_4k = modifier_r_x |
+                            AMD_FMT_MOD_SET(DCC, 1) |
+                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_64B, 1) |
+                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_128B, 1) |
+                            AMD_FMT_MOD_SET(DCC_MAX_COMPRESSED_BLOCK, AMD_FMT_MOD_DCC_BLOCK_64B);
+
+	add_modifier(mods, size, capacity, modifier_dcc_best);
+	add_modifier(mods, size, capacity, modifier_dcc_4k);
+
+	add_modifier(mods, size, capacity, modifier_dcc_best | AMD_FMT_MOD_SET(DCC_RETILE, 1));
+	add_modifier(mods, size, capacity, modifier_dcc_4k | AMD_FMT_MOD_SET(DCC_RETILE, 1));
+
+	add_modifier(mods, size, capacity, modifier_r_x);
+
+	add_modifier(mods, size, capacity, AMD_FMT_MOD |
+             AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX11) |
+			 AMD_FMT_MOD_SET(TILE, AMD_FMT_MOD_TILE_GFX9_64K_D));
+}
+
 static int
 get_plane_modifiers(const struct amdgpu_device *adev, unsigned int plane_type, uint64_t **mods)
 {
@@ -5254,6 +5321,9 @@ get_plane_modifiers(const struct amdgpu_device *adev, unsigned int plane_type, u
 		else
 			add_gfx10_1_modifiers(adev, mods, &size, &capacity);
 		break;
+	case AMDGPU_FAMILY_GC_11_0_0:
+		add_gfx11_modifiers(adev, mods, &size, &capacity);
+		break;
 	}
 
 	add_modifier(mods, &size, &capacity, DRM_FORMAT_MOD_LINEAR);
@@ -5292,7 +5362,7 @@ fill_gfx9_plane_attributes_from_modifiers(struct amdgpu_device *adev,
 		dcc->enable = 1;
 		dcc->meta_pitch = afb->base.pitches[1];
 		dcc->independent_64b_blks = independent_64b_blks;
-		if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) == AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS) {
+		if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) >= AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS) {
 			if (independent_64b_blks && independent_128b_blks)
 				dcc->dcc_ind_blk = hubp_ind_block_64b_no_128bcl;
 			else if (independent_128b_blks)
diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index fc0c1454d275..14cb2dafb0fa 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -1294,6 +1294,7 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
 #define AMD_FMT_MOD_TILE_VER_GFX9 1
 #define AMD_FMT_MOD_TILE_VER_GFX10 2
 #define AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS 3
+#define AMD_FMT_MOD_TILE_VER_GFX11 4
 
 /*
  * 64K_S is the same for GFX9/GFX10/GFX10_RBPLUS and hence has GFX9 as canonical
@@ -1309,6 +1310,7 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
 #define AMD_FMT_MOD_TILE_GFX9_64K_S_X 25
 #define AMD_FMT_MOD_TILE_GFX9_64K_D_X 26
 #define AMD_FMT_MOD_TILE_GFX9_64K_R_X 27
+#define AMD_FMT_MOD_TILE_GFX11_256K_R_X 31
 
 #define AMD_FMT_MOD_DCC_BLOCK_64B 0
 #define AMD_FMT_MOD_DCC_BLOCK_128B 1
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 15/43] drm/amd/display: Fix USBC link creation
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (12 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 14/43] drm/amd: Add GFX11 modifiers support to AMDGPU Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 16/43] drm/amd/display: Add missing instance for clock source register Alex Deucher
                   ` (27 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Dillon Varone

From: Dillon Varone <dillon.varone@amd.com>

[Description]

Add USBC connector ID to align with new VBIOS parsing.

Add a separate DCN321 link encoder due to a different PHY version affecting
DP ALT related registers.

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../drm/amd/display/dc/bios/bios_parser2.c    |   1 +
 .../drm/amd/display/dc/bios/command_table.c   |   4 +-
 .../amd/display/dc/dcn10/dcn10_link_encoder.h |   6 +
 .../display/dc/dcn10/dcn10_stream_encoder.h   |   3 +
 .../display/dc/dcn32/dcn32_dio_link_encoder.c |  15 +-
 .../display/dc/dcn32/dcn32_dio_link_encoder.h |  20 +-
 .../gpu/drm/amd/display/dc/dcn321/Makefile    |   2 +-
 .../dc/dcn321/dcn321_dio_link_encoder.c       | 199 ++++++++++++++++++
 .../dc/dcn321/dcn321_dio_link_encoder.h       |  42 ++++
 .../amd/display/dc/dcn321/dcn321_resource.c   |   3 +-
 .../drm/amd/display/dc/inc/hw/link_encoder.h  |   2 +
 .../amd/display/include/bios_parser_types.h   |   3 +-
 12 files changed, 292 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.h

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index bbc0a5769e88..3540b46765fb 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -3224,6 +3224,7 @@ static enum bp_result update_slot_layout_info_v2(
 		break;
 
 	case CONNECTOR_ID_DISPLAY_PORT:
+	case CONNECTOR_ID_USBC:
 		if (record->mini_type == MINI_TYPE_NORMAL) {
 			slot_layout_info->connectors[i].connector_type = CONNECTOR_LAYOUT_TYPE_DP;
 			slot_layout_info->connectors[i].length = CONNECTOR_SIZE_DP;
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
index 32efa92422e8..818a529cacc3 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
@@ -522,8 +522,8 @@ static enum bp_result transmitter_control_v2(
 		 */
 		params.acConfig.ucEncoderSel = 1;
 
-	if (CONNECTOR_ID_DISPLAY_PORT == connector_id
-		|| CONNECTOR_ID_USBC == connector_id)
+	if (CONNECTOR_ID_DISPLAY_PORT == connector_id ||
+	    CONNECTOR_ID_USBC == connector_id)
 		/* Bit4: DP connector flag
 		 * =0 connector is none-DP connector
 		 * =1 connector is DP connector
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
index 663aac0a164a..773380ef4997 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.h
@@ -167,6 +167,7 @@ struct dcn10_link_enc_registers {
 	uint32_t DIO_LINKD_CNTL;
 	uint32_t DIO_LINKE_CNTL;
 	uint32_t DIO_LINKF_CNTL;
+	uint32_t DIG_FIFO_CTRL0;
 };
 
 #define LE_SF(reg_name, field_name, post_fix)\
@@ -472,11 +473,15 @@ struct dcn10_link_enc_registers {
 	type HPO_DP_ENC_SEL;\
 	type HPO_HDMI_ENC_SEL
 
+#define DCN32_LINK_ENCODER_REG_FIELD_LIST(type) \
+	type DIG_FIFO_OUTPUT_PIXEL_MODE
+
 struct dcn10_link_enc_shift {
 	DCN_LINK_ENCODER_REG_FIELD_LIST(uint8_t);
 	DCN20_LINK_ENCODER_REG_FIELD_LIST(uint8_t);
 	DCN30_LINK_ENCODER_REG_FIELD_LIST(uint8_t);
 	DCN31_LINK_ENCODER_REG_FIELD_LIST(uint8_t);
+	DCN32_LINK_ENCODER_REG_FIELD_LIST(uint8_t);
 };
 
 struct dcn10_link_enc_mask {
@@ -484,6 +489,7 @@ struct dcn10_link_enc_mask {
 	DCN20_LINK_ENCODER_REG_FIELD_LIST(uint32_t);
 	DCN30_LINK_ENCODER_REG_FIELD_LIST(uint32_t);
 	DCN31_LINK_ENCODER_REG_FIELD_LIST(uint32_t);
+	DCN32_LINK_ENCODER_REG_FIELD_LIST(uint32_t);
 };
 
 struct dcn10_link_encoder {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
index fa6ff540e8f2..f659aaab792e 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
@@ -660,6 +660,9 @@ void enc1_stream_encoder_send_immediate_sdp_message(
 void enc1_stream_encoder_stop_dp_info_packets(
 	struct stream_encoder *enc);
 
+void enc1_stream_encoder_reset_fifo(
+	struct stream_encoder *enc);
+
 void enc1_stream_encoder_dp_blank(
 	struct dc_link *link,
 	struct stream_encoder *enc);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
index 7170a9aa82a4..d6855d4f749b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.c
@@ -33,6 +33,7 @@
 #include "stream_encoder.h"
 #include "i2caux_interface.h"
 #include "dc_bios_types.h"
+#include "link_enc_cfg.h"
 
 #include "gpio_service_interface.h"
 
@@ -125,7 +126,7 @@ bool dcn32_link_encoder_is_in_alt_mode(struct link_encoder *enc)
 
 	if (enc->features.flags.bits.DP_IS_USB_C) {
 		/* if value == 1 alt mode is disabled, otherwise it is enabled */
-		//REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
+		REG_GET(RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DISABLE, &dp_alt_mode_disable);
 		is_usb_c_alt_mode = (dp_alt_mode_disable == 0);
 	}
 
@@ -142,13 +143,19 @@ void dcn32_link_encoder_get_max_link_cap(struct link_encoder *enc,
 
 	/* in usb c dp2 mode, max lane count is 2 */
 	if (enc->funcs->is_in_alt_mode && enc->funcs->is_in_alt_mode(enc)) {
-//		REG_GET(RDPCSTX_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
+		REG_GET(RDPCSPIPE_PHY_CNTL6, RDPCS_PHY_DPALT_DP4, &is_in_usb_c_dp4_mode);
 		if (!is_in_usb_c_dp4_mode)
 			link_settings->lane_count = MIN(LANE_COUNT_TWO, link_settings->lane_count);
 	}
 
 }
 
+void enc32_set_dig_output_mode(struct link_encoder *enc, uint8_t pix_per_container)
+{
+	struct dcn10_link_encoder *enc10 = TO_DCN10_LINK_ENC(enc);
+	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_OUTPUT_PIXEL_MODE, pix_per_container);
+}
+ 
 static const struct link_encoder_funcs dcn32_link_enc_funcs = {
 	.read_state = link_enc2_read_state,
 	.validate_output_with_stream =
@@ -179,6 +186,7 @@ static const struct link_encoder_funcs dcn32_link_enc_funcs = {
 	.is_in_alt_mode = dcn32_link_encoder_is_in_alt_mode,
 	.get_max_link_cap = dcn32_link_encoder_get_max_link_cap,
 	.set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
+	.set_dig_output_mode = enc32_set_dig_output_mode,
 };
 
 void dcn32_link_encoder_construct(
@@ -203,6 +211,9 @@ void dcn32_link_encoder_construct(
 	enc10->base.hpd_source = init_data->hpd_source;
 	enc10->base.connector = init_data->connector;
 
+	if (enc10->base.connector.id == CONNECTOR_ID_USBC)
+		enc10->base.features.flags.bits.DP_IS_USB_C = 1;
+
 	enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
 
 	enc10->base.features = *enc_features;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
index e880b76d5f25..749a1e8cb811 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_link_encoder.h
@@ -26,7 +26,15 @@
 #ifndef __DC_LINK_ENCODER__DCN32_H__
 #define __DC_LINK_ENCODER__DCN32_H__
 
-#include "dcn30/dcn30_dio_link_encoder.h"
+#include "dcn31/dcn31_dio_link_encoder.h"
+
+#define LE_DCN32_REG_LIST(id)\
+	LE_DCN31_REG_LIST(id),\
+	SRI(DIG_FIFO_CTRL0, DIG, id)
+
+#define LINK_ENCODER_MASK_SH_LIST_DCN32(mask_sh) \
+	LINK_ENCODER_MASK_SH_LIST_DCN31(mask_sh),\
+	LE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_OUTPUT_PIXEL_MODE, mask_sh)
 
 void dcn32_link_encoder_construct(
 	struct dcn20_link_encoder *enc20,
@@ -38,5 +46,15 @@ void dcn32_link_encoder_construct(
 	const struct dcn10_link_enc_shift *link_shift,
 	const struct dcn10_link_enc_mask *link_mask);
 
+void enc32_hw_init(struct link_encoder *enc);
+
+void dcn32_link_encoder_enable_dp_output(
+	struct link_encoder *enc,
+	const struct dc_link_settings *link_settings,
+	enum clock_source_id clock_source);
+
+void enc32_set_dig_output_mode(
+		struct link_encoder *enc,
+		uint8_t pix_per_container);
 
 #endif /* __DC_LINK_ENCODER__DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/Makefile b/drivers/gpu/drm/amd/display/dc/dcn321/Makefile
index 99515cb3ed31..9b61d08700ca 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/Makefile
@@ -10,7 +10,7 @@
 #
 # Makefile for dcn321.
 
-DCN321 = dcn321_resource.o
+DCN321 = dcn321_resource.o dcn321_dio_link_encoder.o
 
 CFLAGS_$(AMDDALPATH)/dc/dcn321/dcn321_resource.o := -mhard-float -msse
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.c
new file mode 100644
index 000000000000..49682a31ecbd
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.c
@@ -0,0 +1,199 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+
+#include "reg_helper.h"
+
+#include "core_types.h"
+#include "link_encoder.h"
+#include "dcn321_dio_link_encoder.h"
+#include "dcn31/dcn31_dio_link_encoder.h"
+#include "stream_encoder.h"
+#include "i2caux_interface.h"
+#include "dc_bios_types.h"
+
+#include "gpio_service_interface.h"
+
+#ifndef MIN
+#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
+#endif
+
+#define CTX \
+	enc10->base.ctx
+#define DC_LOGGER \
+	enc10->base.ctx->logger
+
+#define REG(reg)\
+	(enc10->link_regs->reg)
+
+#undef FN
+#define FN(reg_name, field_name) \
+	enc10->link_shift->field_name, enc10->link_mask->field_name
+
+#define AUX_REG(reg)\
+	(enc10->aux_regs->reg)
+
+#define AUX_REG_READ(reg_name) \
+		dm_read_reg(CTX, AUX_REG(reg_name))
+
+#define AUX_REG_WRITE(reg_name, val) \
+			dm_write_reg(CTX, AUX_REG(reg_name), val)
+
+static const struct link_encoder_funcs dcn321_link_enc_funcs = {
+	.read_state = link_enc2_read_state,
+	.validate_output_with_stream =
+			dcn30_link_encoder_validate_output_with_stream,
+	.hw_init = enc32_hw_init,
+	.setup = dcn10_link_encoder_setup,
+	.enable_tmds_output = dcn10_link_encoder_enable_tmds_output,
+	.enable_dp_output = dcn32_link_encoder_enable_dp_output,
+	.enable_dp_mst_output = dcn10_link_encoder_enable_dp_mst_output,
+	.disable_output = dcn10_link_encoder_disable_output,
+	.dp_set_lane_settings = dcn10_link_encoder_dp_set_lane_settings,
+	.dp_set_phy_pattern = dcn10_link_encoder_dp_set_phy_pattern,
+	.update_mst_stream_allocation_table =
+		dcn10_link_encoder_update_mst_stream_allocation_table,
+	.psr_program_dp_dphy_fast_training =
+			dcn10_psr_program_dp_dphy_fast_training,
+	.psr_program_secondary_packet = dcn10_psr_program_secondary_packet,
+	.connect_dig_be_to_fe = dcn10_link_encoder_connect_dig_be_to_fe,
+	.enable_hpd = dcn10_link_encoder_enable_hpd,
+	.disable_hpd = dcn10_link_encoder_disable_hpd,
+	.is_dig_enabled = dcn10_is_dig_enabled,
+	.destroy = dcn10_link_encoder_destroy,
+	.fec_set_enable = enc2_fec_set_enable,
+	.fec_set_ready = enc2_fec_set_ready,
+	.fec_is_active = enc2_fec_is_active,
+	.get_dig_frontend = dcn10_get_dig_frontend,
+	.get_dig_mode = dcn10_get_dig_mode,
+	.is_in_alt_mode = dcn20_link_encoder_is_in_alt_mode,
+	.get_max_link_cap = dcn20_link_encoder_get_max_link_cap,
+	.set_dio_phy_mux = dcn31_link_encoder_set_dio_phy_mux,
+	.set_dig_output_mode = enc32_set_dig_output_mode,
+};
+
+void dcn321_link_encoder_construct(
+	struct dcn20_link_encoder *enc20,
+	const struct encoder_init_data *init_data,
+	const struct encoder_feature_support *enc_features,
+	const struct dcn10_link_enc_registers *link_regs,
+	const struct dcn10_link_enc_aux_registers *aux_regs,
+	const struct dcn10_link_enc_hpd_registers *hpd_regs,
+	const struct dcn10_link_enc_shift *link_shift,
+	const struct dcn10_link_enc_mask *link_mask)
+{
+	struct bp_connector_speed_cap_info bp_cap_info = {0};
+	const struct dc_vbios_funcs *bp_funcs = init_data->ctx->dc_bios->funcs;
+	enum bp_result result = BP_RESULT_OK;
+	struct dcn10_link_encoder *enc10 = &enc20->enc10;
+
+	enc10->base.funcs = &dcn321_link_enc_funcs;
+	enc10->base.ctx = init_data->ctx;
+	enc10->base.id = init_data->encoder;
+
+	enc10->base.hpd_source = init_data->hpd_source;
+	enc10->base.connector = init_data->connector;
+
+	if (enc10->base.connector.id == CONNECTOR_ID_USBC)
+		enc10->base.features.flags.bits.DP_IS_USB_C = 1;
+
+	enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
+
+	enc10->base.features = *enc_features;
+
+	enc10->base.transmitter = init_data->transmitter;
+
+	/* set the flag to indicate whether the driver polls the I2C data pin
+	 * while doing the DP sink detect
+	 */
+
+/*	if (dal_adapter_service_is_feature_supported(as,
+		FEATURE_DP_SINK_DETECT_POLL_DATA_PIN))
+		enc10->base.features.flags.bits.
+			DP_SINK_DETECT_POLL_DATA_PIN = true;*/
+
+	enc10->base.output_signals =
+		SIGNAL_TYPE_DVI_SINGLE_LINK |
+		SIGNAL_TYPE_DVI_DUAL_LINK |
+		SIGNAL_TYPE_LVDS |
+		SIGNAL_TYPE_DISPLAY_PORT |
+		SIGNAL_TYPE_DISPLAY_PORT_MST |
+		SIGNAL_TYPE_EDP |
+		SIGNAL_TYPE_HDMI_TYPE_A;
+
+	enc10->link_regs = link_regs;
+	enc10->aux_regs = aux_regs;
+	enc10->hpd_regs = hpd_regs;
+	enc10->link_shift = link_shift;
+	enc10->link_mask = link_mask;
+
+	switch (enc10->base.transmitter) {
+	case TRANSMITTER_UNIPHY_A:
+		enc10->base.preferred_engine = ENGINE_ID_DIGA;
+	break;
+	case TRANSMITTER_UNIPHY_B:
+		enc10->base.preferred_engine = ENGINE_ID_DIGB;
+	break;
+	case TRANSMITTER_UNIPHY_C:
+		enc10->base.preferred_engine = ENGINE_ID_DIGC;
+	break;
+	case TRANSMITTER_UNIPHY_D:
+		enc10->base.preferred_engine = ENGINE_ID_DIGD;
+	break;
+	case TRANSMITTER_UNIPHY_E:
+		enc10->base.preferred_engine = ENGINE_ID_DIGE;
+	break;
+	default:
+		ASSERT_CRITICAL(false);
+		enc10->base.preferred_engine = ENGINE_ID_UNKNOWN;
+	}
+
+	/* default to one to mirror Windows behavior */
+	enc10->base.features.flags.bits.HDMI_6GB_EN = 1;
+
+	if (bp_funcs->get_connector_speed_cap_info)
+		result = bp_funcs->get_connector_speed_cap_info(enc10->base.ctx->dc_bios,
+						enc10->base.connector, &bp_cap_info);
+
+	/* Override features with DCE-specific values */
+	if (result == BP_RESULT_OK) {
+		enc10->base.features.flags.bits.IS_HBR2_CAPABLE =
+				bp_cap_info.DP_HBR2_EN;
+		enc10->base.features.flags.bits.IS_HBR3_CAPABLE =
+				bp_cap_info.DP_HBR3_EN;
+		enc10->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
+		enc10->base.features.flags.bits.IS_DP2_CAPABLE = 1;
+		enc10->base.features.flags.bits.IS_UHBR10_CAPABLE = bp_cap_info.DP_UHBR10_EN;
+		enc10->base.features.flags.bits.IS_UHBR13_5_CAPABLE = bp_cap_info.DP_UHBR13_5_EN;
+		enc10->base.features.flags.bits.IS_UHBR20_CAPABLE = bp_cap_info.DP_UHBR20_EN;
+	} else {
+		DC_LOG_WARNING("%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
+				__func__,
+				result);
+	}
+	if (enc10->base.ctx->dc->debug.hdmi20_disable) {
+		enc10->base.features.flags.bits.HDMI_6GB_EN = 0;
+	}
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.h
new file mode 100644
index 000000000000..2205f39b0a24
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_dio_link_encoder.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ *  and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_LINK_ENCODER__DCN321_H__
+#define __DC_LINK_ENCODER__DCN321_H__
+
+#include "dcn32/dcn32_dio_link_encoder.h"
+
+void dcn321_link_encoder_construct(
+	struct dcn20_link_encoder *enc20,
+	const struct encoder_init_data *init_data,
+	const struct encoder_feature_support *enc_features,
+	const struct dcn10_link_enc_registers *link_regs,
+	const struct dcn10_link_enc_aux_registers *aux_regs,
+	const struct dcn10_link_enc_hpd_registers *hpd_regs,
+	const struct dcn10_link_enc_shift *link_shift,
+	const struct dcn10_link_enc_mask *link_mask);
+
+
+#endif /* __DC_LINK_ENCODER__DCN321_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 6bc477a212b0..4db2cdf7c9e5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -62,6 +62,7 @@
 #include "dcn31/dcn31_apg.h"
 #include "dcn31/dcn31_dio_link_encoder.h"
 #include "dcn32/dcn32_dio_link_encoder.h"
+#include "dcn321_dio_link_encoder.h"
 #include "dce/dce_clock_source.h"
 #include "dce/dce_audio.h"
 #include "dce/dce_hwseq.h"
@@ -1253,7 +1254,7 @@ static struct link_encoder *dcn321_link_encoder_create(
 	if (!enc20)
 		return NULL;
 
-	dcn32_link_encoder_construct(enc20,
+	dcn321_link_encoder_construct(enc20,
 			enc_init_data,
 			&link_enc_feature,
 			&link_enc_regs[enc_init_data->transmitter],
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h b/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
index 2013a70603ae..d8433c679201 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/link_encoder.h
@@ -200,6 +200,8 @@ struct link_encoder_funcs {
 		struct link_encoder *enc,
 		enum encoder_type_select sel,
 		uint32_t hpo_inst);
+	void (*set_dig_output_mode)(
+			struct link_encoder *enc, uint8_t pix_per_container);
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/display/include/bios_parser_types.h b/drivers/gpu/drm/amd/display/include/bios_parser_types.h
index 83058bcbb2e8..812377d9e48f 100644
--- a/drivers/gpu/drm/amd/display/include/bios_parser_types.h
+++ b/drivers/gpu/drm/amd/display/include/bios_parser_types.h
@@ -342,7 +342,8 @@ struct bp_connector_speed_cap_info {
 	uint32_t DP_UHBR10_EN:1;
 	uint32_t DP_UHBR13_5_EN:1;
 	uint32_t DP_UHBR20_EN:1;
-	uint32_t RESERVED:29;
+	uint32_t DP_IS_USB_C:1;
+	uint32_t RESERVED:28;
 };
 
 #endif /*__DAL_BIOS_PARSER_TYPES_H__ */
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 16/43] drm/amd/display: Add missing instance for clock source register
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (13 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 15/43] drm/amd/display: Fix USBC link creation Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 17/43] drm/amd/display: Use DTBCLK for valid pixel clock Alex Deucher
                   ` (26 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Alvin Lee

From: Alvin Lee <alvin.lee2@amd.com>

[Description]
Need to add a fifth instance to clk_src_regs because
there are 5 PHY instances in DCN32 & DCN321.

Signed-off-by: Alvin Lee <alvin.lee2@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c   | 3 ++-
 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index 7ac6428aae52..ca9da3d4b1b5 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -330,7 +330,8 @@ static const struct dce110_clk_src_regs clk_src_regs[] = {
 	clk_src_regs(0, A),
 	clk_src_regs(1, B),
 	clk_src_regs(2, C),
-	clk_src_regs(3, D)
+	clk_src_regs(3, D),
+	clk_src_regs(4, E)
 };
 
 static const struct dce110_clk_src_shift cs_shift = {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 4db2cdf7c9e5..28e4d7904d54 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -333,7 +333,8 @@ static const struct dce110_clk_src_regs clk_src_regs[] = {
 	clk_src_regs(0, A),
 	clk_src_regs(1, B),
 	clk_src_regs(2, C),
-	clk_src_regs(3, D)
+	clk_src_regs(3, D),
+	clk_src_regs(4, E)
 };
 
 static const struct dce110_clk_src_shift cs_shift = {
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 17/43] drm/amd/display: Use DTBCLK for valid pixel clock
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (14 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 16/43] drm/amd/display: Add missing instance for clock source register Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 18/43] drm/amd/display: Add guard for FCLK pstate message to PMFW for DCN321 Alex Deucher
                   ` (25 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Eric Bernstein

From: Eric Bernstein <eric.bernstein@amd.com>

Use DTBCLK for valid pixel clock generation
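
For reference, a rough sketch of how a caller might use the new dccg hook (the
set_valid_pixel_rate funcs entry is from this patch; the wrapper below and
where its arguments come from are illustrative assumptions only):

/* Illustrative only: program the DTBCLK DTO from the stream pixel clock
 * for a given OTG instance, if the hook is implemented. */
static void example_set_valid_pixel_rate(struct dccg *dccg,
					 int otg_inst, int pixclk_khz)
{
	if (dccg->funcs->set_valid_pixel_rate)
		dccg->funcs->set_valid_pixel_rate(dccg, otg_inst, pixclk_khz);
}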

Signed-off-by: Eric Bernstein <eric.bernstein@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c   | 15 +++++++++------
 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h    | 17 +++++++++++++++++
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
index 12633561be3f..a3e0b95fdc84 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
@@ -42,12 +42,6 @@
 #define DC_LOGGER \
 	dccg->ctx->logger
 
-enum pixel_rate_div {
-	PIXEL_RATE_DIV_BY_1 = 0,
-	PIXEL_RATE_DIV_BY_2 = 1,
-	PIXEL_RATE_DIV_BY_4 = 3
-};
-
 static void dccg32_set_pixel_rate_div(
 		struct dccg *dccg,
 		uint32_t otg_inst,
@@ -190,6 +184,14 @@ void dccg32_set_dtbclk_dto(
 	}
 }
 
+void dccg32_set_valid_pixel_rate(
+		struct dccg *dccg,
+		int otg_inst,
+		int pixclk_khz)
+{
+	dccg32_set_dtbclk_dto(dccg, otg_inst, pixclk_khz, 0, NULL);
+}
+
 static void dccg32_get_dccg_ref_freq(struct dccg *dccg,
 		unsigned int xtalin_freq_inKhz,
 		unsigned int *dccg_ref_freq_inKhz)
@@ -267,6 +269,7 @@ static const struct dccg_funcs dccg32_funcs = {
 	.disable_symclk32_le = dccg31_disable_symclk32_le,
 	.set_physymclk = dccg31_set_physymclk,
 	.set_dtbclk_dto = dccg32_set_dtbclk_dto,
+	.set_valid_pixel_rate = dccg32_set_valid_pixel_rate,
 	.set_fifo_errdet_ovr_en = dccg2_set_fifo_errdet_ovr_en,
 	.set_audio_dtbclk_dto = dccg31_set_audio_dtbclk_dto,
 	.otg_add_pixel = dccg32_otg_add_pixel,
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
index 5cc6d1fb887d..2fbd65d88b61 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
@@ -56,6 +56,12 @@ enum dentist_dispclk_change_mode {
 	DISPCLK_CHANGE_MODE_RAMPING,
 };
 
+enum pixel_rate_div {
+   PIXEL_RATE_DIV_BY_1 = 0,
+   PIXEL_RATE_DIV_BY_2 = 1,
+   PIXEL_RATE_DIV_BY_4 = 3
+};
+
 struct dccg {
 	struct dc_context *ctx;
 	const struct dccg_funcs *funcs;
@@ -133,6 +139,17 @@ struct dccg_funcs {
 		struct dccg *dccg,
 		int inst);
 
+void (*set_pixel_rate_div)(
+        struct dccg *dccg,
+        uint32_t otg_inst,
+        enum pixel_rate_div k1,
+        enum pixel_rate_div k2);
+
+void (*set_valid_pixel_rate)(
+        struct dccg *dccg,
+        int otg_inst,
+        int pixclk_khz);
+
 };
 
 #endif //__DAL_DCCG_H__
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 18/43] drm/amd/display: Add guard for FCLK pstate message to PMFW for DCN321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (15 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 17/43] drm/amd/display: Use DTBCLK for valid pixel clock Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 19/43] drm/amd/display: Various DML fixes to enable higher timings Alex Deucher
                   ` (24 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Dillon Varone

From: Dillon Varone <dillon.varone@amd.com>

[WHY?]
DCN321 does not support FCLK DPM, so the driver should not send FCLK P-state
messages to PMFW for it.

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 419cc83b3d21..4ff12b816614 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -346,7 +346,8 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
 	}
 
-	if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support)) {
+	if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support) &&
+			clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21) {
 		clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
 
 		/* To disable FCLK P-state switching, send FCLK_PSTATE_NOTSUPPORTED message to PMFW */
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 19/43] drm/amd/display: Various DML fixes to enable higher timings
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (16 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 18/43] drm/amd/display: Add guard for FCLK pstate message to PMFW for DCN321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 20/43] drm/amd/display: Implement WM table transfer for DCN32/DCN321 Alex Deucher
                   ` (23 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx
  Cc: Alex Deucher, Chaitanya Dhere, Dillon Varone, Nevenko Stupar, Jerry Zuo

From: Dillon Varone <dillon.varone@amd.com>

Fixes to enable higher rate timings for DCN3.2.x.

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Chaitanya Dhere <chaitanya.dhere@amd.com>
Signed-off-by: Nevenko Stupar <Nevenko.Stupar@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../drm/amd/display/dc/dcn32/dcn32_hubbub.c   |  4 +--
 .../drm/amd/display/dc/dcn32/dcn32_resource.c | 33 ++++++++++++-------
 .../drm/amd/display/dc/dml/dcn20/dcn20_fpu.c  | 27 ++-------------
 3 files changed, 26 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
index 27813374f2bb..99eb239bbc7b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hubbub.c
@@ -65,7 +65,7 @@ static void dcn32_init_crb(struct hubbub *hubbub)
 	REG_SET_2(COMPBUF_RESERVED_SPACE, 0,
 			COMPBUF_RESERVED_SPACE_64B, hubbub2->pixel_chunk_size / 32,
 			COMPBUF_RESERVED_SPACE_ZS, hubbub2->pixel_chunk_size / 128);
-	REG_UPDATE(DCHUBBUB_DEBUG_CTRL_0, DET_DEPTH, 0x17F);
+	REG_UPDATE(DCHUBBUB_DEBUG_CTRL_0, DET_DEPTH, 0x47F);
 }
 
 static void dcn32_program_det_size(struct hubbub *hubbub, int hubp_inst, unsigned int det_buffer_size_in_kbyte)
@@ -119,8 +119,8 @@ static void dcn32_program_compbuf_size(struct hubbub *hubbub, unsigned int compb
 		ASSERT(hubbub2->det0_size + hubbub2->det1_size + hubbub2->det2_size
 				+ hubbub2->det3_size + compbuf_size_segments <= hubbub2->crb_size_segs);
 		REG_UPDATE(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE, compbuf_size_segments);
-		REG_WAIT(DCHUBBUB_COMPBUF_CTRL, COMPBUF_SIZE_CURRENT, compbuf_size_segments, 1, 100);
 		hubbub2->compbuf_size_segments = compbuf_size_segments;
+		ASSERT(REG_GET(DCHUBBUB_COMPBUF_CTRL, CONFIG_ERROR, &compbuf_size_segments) && !compbuf_size_segments);
 	}
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index ca9da3d4b1b5..8a10a7a4c3e1 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -3015,13 +3015,30 @@ int dcn32_populate_dml_pipes_from_context(
 		}
 		pipe_cnt++;
 	}
-	context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_DEFAULT_DET_SIZE;
 
-	if (pipe_cnt == 1 && pipe->plane_state && !dc->debug.disable_z9_mpc) {
-		if (!is_dual_plane(pipe->plane_state->format)) {
-			context->bw_ctx.dml.ip.det_buffer_size_kbytes = 192;
-			pipes[0].pipe.src.unbounded_req_mode = true;
+	switch (pipe_cnt) {
+	case 1:
+		context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_MAX_DET_SIZE;
+		if (pipe->plane_state && !dc->debug.disable_z9_mpc) {
+			if (!is_dual_plane(pipe->plane_state->format)) {
+				context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_DEFAULT_DET_SIZE;
+				pipes[0].pipe.src.unbounded_req_mode = true;
+				if (pipe->plane_state->src_rect.width >= 5120 &&
+					pipe->plane_state->src_rect.height >= 2880)
+					context->bw_ctx.dml.ip.det_buffer_size_kbytes = 320; // 5K or higher
+			}
 		}
+		break;
+	case 2:
+		context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_MAX_DET_SIZE / 2; // 576 KB (9 segments)
+		break;
+	case 3:
+		context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_MAX_DET_SIZE / 3; // 384 KB (6 segments)
+		break;
+	case 4:
+	default:
+		context->bw_ctx.dml.ip.det_buffer_size_kbytes = DCN3_2_DEFAULT_DET_SIZE; // 256 KB (4 segments)
+		break;
 	}
 
 	return pipe_cnt;
@@ -3283,7 +3300,6 @@ void dcn32_calculate_dlg_params(struct dc *dc, struct dc_state *context, display
 	 * possible with firmware driven vertical blank stretching.
 	 */
 	// context->bw_ctx.bw.dcn.clk.p_state_change_support |= context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching;
-
 	context->bw_ctx.bw.dcn.clk.dppclk_khz = 0;
 	context->bw_ctx.bw.dcn.clk.dtbclk_en = is_dtbclk_required(dc, context);
 	if (context->bw_ctx.dml.vba.FCLKChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] == dm_fclock_change_unsupported)
@@ -3339,11 +3355,6 @@ void dcn32_calculate_dlg_params(struct dc *dc, struct dc_state *context, display
 		if (!context->res_ctx.pipe_ctx[i].stream)
 			continue;
 
-		/* cstate disabled on 201 */
-//		if (dc->ctx->dce_version == DCN_VERSION_2_01)
-//			cstate_en = false;
-
-
 		context->bw_ctx.dml.funcs.rq_dlg_get_dlg_reg_v2(&context->bw_ctx.dml,
 				&context->res_ctx.pipe_ctx[i].dlg_regs, &context->res_ctx.pipe_ctx[i].ttu_regs, pipes,
 				pipe_cnt, pipe_idx);
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
index 8d4c74b0fc90..eeec40f6fd0a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
@@ -1018,31 +1018,8 @@ int dcn20_populate_dml_pipes_from_context(
 			pipes[pipe_cnt].pipe.dest.pixel_rate_mhz *= 2;
 		pipes[pipe_cnt].pipe.dest.otg_inst = res_ctx->pipe_ctx[i].stream_res.tg->inst;
 		pipes[pipe_cnt].dout.dp_lanes = 4;
-		if (res_ctx->pipe_ctx[i].stream->link) {
-			switch (res_ctx->pipe_ctx[i].stream->link->cur_link_settings.link_rate) {
-			case LINK_RATE_HIGH:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr;
-				break;
-			case LINK_RATE_HIGH2:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr2;
-				break;
-			case LINK_RATE_HIGH3:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_hbr3;
-				break;
-			case LINK_RATE_UHBR10:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_uhbr10;
-				break;
-			case LINK_RATE_UHBR13_5:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_uhbr13p5;
-				break;
-			case LINK_RATE_UHBR20:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_uhbr20;
-				break;
-			default:
-				pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_na;
-				break;
-			}
-		}
+		if (res_ctx->pipe_ctx[i].stream->link)
+			pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_na;
 		pipes[pipe_cnt].dout.is_virtual = 0;
 		pipes[pipe_cnt].pipe.dest.vtotal_min = res_ctx->pipe_ctx[i].stream->adjust.v_total_min;
 		pipes[pipe_cnt].pipe.dest.vtotal_max = res_ctx->pipe_ctx[i].stream->adjust.v_total_max;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 20/43] drm/amd/display: Implement WM table transfer for DCN32/DCN321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (17 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 19/43] drm/amd/display: Various DML fixes to enable higher timings Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 21/43] drm/amd/display: add missing interrupt handlers " Alex Deucher
                   ` (22 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Alvin Lee

From: Alvin Lee <alvin.lee2@amd.com>

Add support for watermark table transfers.

Signed-off-by: Alvin Lee <alvin.lee2@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c   | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 4ff12b816614..93fbecbc8065 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -444,6 +444,7 @@ void dcn32_clock_read_ss_info(struct clk_mgr_internal *clk_mgr)
 }
 static void dcn32_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
 {
+	unsigned int i;
 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
 	WatermarksExternal_t *table = (WatermarksExternal_t *) clk_mgr->wm_range_table;
 
@@ -455,6 +456,12 @@ static void dcn32_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
 
 	memset(table, 0, sizeof(*table));
 
+	/* collect valid ranges, place in pmfw table */
+	for (i = 0; i < WM_SET_COUNT; i++)
+		if (clk_mgr->base.bw_params->wm_table.nv_entries[i].valid) {
+			table->Watermarks.WatermarkRow[i].WmSetting = i;
+			table->Watermarks.WatermarkRow[i].Flags = clk_mgr->base.bw_params->wm_table.nv_entries[i].pmfw_breakdown.wm_type;
+		}
 	dcn30_smu_set_dram_addr_high(clk_mgr, clk_mgr->wm_range_table_addr >> 32);
 	dcn30_smu_set_dram_addr_low(clk_mgr, clk_mgr->wm_range_table_addr & 0xFFFFFFFF);
 	dcn32_smu_transfer_wm_table_dram_2_smu(clk_mgr);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 21/43] drm/amd/display: add missing interrupt handlers for DCN32/DCN321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (18 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 20/43] drm/amd/display: Implement WM table transfer for DCN32/DCN321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 22/43] drm/amd/display: disable idle optimizations Alex Deucher
                   ` (21 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../display/dc/irq/dcn32/irq_service_dcn32.c  | 65 ++++++++++++++++++-
 1 file changed, 64 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
index 3f9d6531c563..3a213ca2f077 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/dcn32/irq_service_dcn32.c
@@ -54,6 +54,18 @@ enum dc_irq_source to_dal_irq_source_dcn32(
 		return DC_IRQ_SOURCE_VBLANK5;
 	case DCN_1_0__SRCID__DC_D6_OTG_VSTARTUP:
 		return DC_IRQ_SOURCE_VBLANK6;
+	case DCN_1_0__SRCID__OTG1_VERTICAL_INTERRUPT0_CONTROL:
+		return DC_IRQ_SOURCE_DC1_VLINE0;
+	case DCN_1_0__SRCID__OTG2_VERTICAL_INTERRUPT0_CONTROL:
+		return DC_IRQ_SOURCE_DC2_VLINE0;
+	case DCN_1_0__SRCID__OTG3_VERTICAL_INTERRUPT0_CONTROL:
+		return DC_IRQ_SOURCE_DC3_VLINE0;
+	case DCN_1_0__SRCID__OTG4_VERTICAL_INTERRUPT0_CONTROL:
+		return DC_IRQ_SOURCE_DC4_VLINE0;
+	case DCN_1_0__SRCID__OTG5_VERTICAL_INTERRUPT0_CONTROL:
+		return DC_IRQ_SOURCE_DC5_VLINE0;
+	case DCN_1_0__SRCID__OTG6_VERTICAL_INTERRUPT0_CONTROL:
+		return DC_IRQ_SOURCE_DC6_VLINE0;
 	case DCN_1_0__SRCID__HUBP0_FLIP_INTERRUPT:
 		return DC_IRQ_SOURCE_PFLIP1;
 	case DCN_1_0__SRCID__HUBP1_FLIP_INTERRUPT:
@@ -78,7 +90,8 @@ enum dc_irq_source to_dal_irq_source_dcn32(
 		return DC_IRQ_SOURCE_VUPDATE5;
 	case DCN_1_0__SRCID__OTG5_IHC_V_UPDATE_NO_LOCK_INTERRUPT:
 		return DC_IRQ_SOURCE_VUPDATE6;
-
+	case DCN_1_0__SRCID__DMCUB_OUTBOX_LOW_PRIORITY_READY_INT:
+		return DC_IRQ_SOURCE_DMCUB_OUTBOX;
 	case DCN_1_0__SRCID__DC_HPD1_INT:
 		/* generic src_id for all HPD and HPDRX interrupts */
 		switch (ext_id) {
@@ -168,6 +181,16 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
 	.ack = NULL
 };
 
+static const struct irq_source_info_funcs outbox_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+static const struct irq_source_info_funcs vline0_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
 #undef BASE_INNER
 #define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
 
@@ -179,6 +202,10 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
 	BASE(reg ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
 			reg ## block ## id ## _ ## reg_name
 
+#define SRI_DMUB(reg_name)\
+	BASE(reg ## reg_name ## _BASE_IDX) + \
+			reg ## reg_name
+
 #define IRQ_REG_ENTRY(block, reg_num, reg1, mask1, reg2, mask2)\
 	.enable_reg = SRI(reg1, block, reg_num),\
 	.enable_mask = \
@@ -193,6 +220,20 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
 	.ack_value = \
 		block ## reg_num ## _ ## reg2 ## __ ## mask2 ## _MASK \
 
+#define IRQ_REG_ENTRY_DMUB(reg1, mask1, reg2, mask2)\
+	.enable_reg = SRI_DMUB(reg1),\
+	.enable_mask = \
+		reg1 ## __ ## mask1 ## _MASK,\
+	.enable_value = {\
+		reg1 ## __ ## mask1 ## _MASK,\
+		~reg1 ## __ ## mask1 ## _MASK \
+	},\
+	.ack_reg = SRI_DMUB(reg2),\
+	.ack_mask = \
+		reg2 ## __ ## mask2 ## _MASK,\
+	.ack_value = \
+		reg2 ## __ ## mask2 ## _MASK \
+
 #define hpd_int_entry(reg_num)\
 	[DC_IRQ_SOURCE_HPD1 + reg_num] = {\
 		IRQ_REG_ENTRY(HPD, reg_num,\
@@ -237,6 +278,21 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = {
 		.funcs = &vblank_irq_info_funcs\
 }
 
+#define vline0_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_DC1_VLINE0 + reg_num] = {\
+		IRQ_REG_ENTRY(OTG, reg_num,\
+			OTG_VERTICAL_INTERRUPT0_CONTROL, OTG_VERTICAL_INTERRUPT0_INT_ENABLE,\
+			OTG_VERTICAL_INTERRUPT0_CONTROL, OTG_VERTICAL_INTERRUPT0_CLEAR),\
+		.funcs = &vline0_irq_info_funcs\
+	}
+#define dmub_outbox_int_entry()\
+	[DC_IRQ_SOURCE_DMCUB_OUTBOX] = {\
+		IRQ_REG_ENTRY_DMUB(\
+			DMCUB_INTERRUPT_ENABLE, DMCUB_OUTBOX1_READY_INT_EN,\
+			DMCUB_INTERRUPT_ACK, DMCUB_OUTBOX1_READY_INT_ACK),\
+		.funcs = &outbox_irq_info_funcs\
+	}
+
 #define dummy_irq_entry() \
 	{\
 		.funcs = &dummy_irq_info_funcs\
@@ -339,6 +395,13 @@ irq_source_info_dcn32[DAL_IRQ_SOURCES_NUMBER] = {
 	vblank_int_entry(1),
 	vblank_int_entry(2),
 	vblank_int_entry(3),
+	vline0_int_entry(0),
+	vline0_int_entry(1),
+	vline0_int_entry(2),
+	vline0_int_entry(3),
+	[DC_IRQ_SOURCE_DC5_VLINE1] = dummy_irq_entry(),
+	[DC_IRQ_SOURCE_DC6_VLINE1] = dummy_irq_entry(),
+	dmub_outbox_int_entry(),
 };
 
 static const struct irq_service_funcs irq_service_funcs_dcn32 = {
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 22/43] drm/amd/display: disable idle optimizations
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (19 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 21/43] drm/amd/display: add missing interrupt handlers " Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 23/43] drm/amd/display: Select correct DTO source Alex Deucher
                   ` (20 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jerry Zuo, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Disable idle optimizations until SMU can handle them, to prevent a DMUB
timeout and a subsequent system freeze.

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Acked-by: Jerry Zuo <jerry.zuo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index 8a10a7a4c3e1..64d1a6bf0683 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -976,6 +976,7 @@ static const struct dc_debug_options debug_defaults_drv = {
 	.timing_trace = false,
 	.clock_trace = true,
 	.disable_pplib_clock_request = false,
+	.disable_idle_power_optimizations = true,
 	.pipe_split_policy = MPC_SPLIT_DYNAMIC,
 	.force_single_disp_pipe_split = false,
 	.disable_dcc = DCC_ENABLE,
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 23/43] drm/amd/display: Select correct DTO source
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (20 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 22/43] drm/amd/display: disable idle optimizations Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 24/43] drm/amd/display: use updated clock source init routine Alex Deucher
                   ` (19 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Dillon Varone, Aurabindo Pillai

From: Dillon Varone <dillon.varone@amd.com>

[WHY&HOW]
Change the criteria for setting the DTO source value, and always set it
regardless of the signal type.

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Acked-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../drm/amd/display/dc/dce/dce_clock_source.c | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
index 845aa8a1027d..4b57657b5322 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
@@ -992,7 +992,18 @@ static bool dcn31_program_pix_clk(
 			REG_WRITE(PHASE[inst], pll_settings->actual_pix_clk_100hz * 100);
 			REG_WRITE(MODULO[inst], dp_dto_ref_khz * 1000);
 		}
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+		/* Enable DTO */
+		if (clk_src->cs_mask->PIPE0_DTO_SRC_SEL)
+			REG_UPDATE_2(PIXEL_RATE_CNTL[inst],
+					DP_DTO0_ENABLE, 1,
+					PIPE0_DTO_SRC_SEL, 1);
+		else
+			REG_UPDATE(PIXEL_RATE_CNTL[inst],
+					DP_DTO0_ENABLE, 1);
+#else
 		REG_UPDATE(PIXEL_RATE_CNTL[inst], DP_DTO0_ENABLE, 1);
+#endif
 	} else {
 		if (IS_FPGA_MAXIMUS_DC(clock_source->ctx->dce_environment)) {
 			unsigned int inst = pix_clk_params->controller_id - CONTROLLER_ID_D0;
@@ -1004,10 +1015,26 @@ static bool dcn31_program_pix_clk(
 			REG_WRITE(MODULO[inst], dp_dto_ref_100hz);
 
 			/* Enable DTO */
+	#if defined(CONFIG_DRM_AMD_DC_DCN)
+			if (clk_src->cs_mask->PIPE0_DTO_SRC_SEL)
+				REG_UPDATE_2(PIXEL_RATE_CNTL[inst],
+						DP_DTO0_ENABLE, 1,
+						PIPE0_DTO_SRC_SEL, 1);
+			else
+				REG_UPDATE(PIXEL_RATE_CNTL[inst],
+						DP_DTO0_ENABLE, 1);
+	#else
 			REG_UPDATE(PIXEL_RATE_CNTL[inst], DP_DTO0_ENABLE, 1);
+	#endif
 			return true;
 		}
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+		if (clk_src->cs_mask->PIPE0_DTO_SRC_SEL)
+			REG_UPDATE(PIXEL_RATE_CNTL[inst],
+					PIPE0_DTO_SRC_SEL, 0);
+#endif
+
 		/*ATOMBIOS expects pixel rate adjusted by deep color ratio)*/
 		bp_pc_params.controller_id = pix_clk_params->controller_id;
 		bp_pc_params.pll_id = clock_source->id;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 24/43] drm/amd/display: use updated clock source init routine
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (21 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 23/43] drm/amd/display: Select correct DTO source Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 25/43] drm/amd/display: Ensure that DMCUB fw in use is loaded by DC and not VBIOS Alex Deucher
                   ` (18 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Charlene Liu, Aurabindo Pillai

From: Charlene Liu <Charlene.Liu@amd.com>

[why]
Use the correct clock source initialization routine for DCN32/321.

Signed-off-by: Charlene Liu <Charlene.Liu@amd.com>
Acked-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c   | 2 +-
 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index 64d1a6bf0683..7772beadd000 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -1090,7 +1090,7 @@ static struct clock_source *dcn32_clock_source_create(
 	if (!clk_src)
 		return NULL;
 
-	if (dcn3_clk_src_construct(clk_src, ctx, bios, id,
+	if (dcn31_clk_src_construct(clk_src, ctx, bios, id,
 			regs, &cs_shift, &cs_mask)) {
 		clk_src->base.dp_clk_src = dp_clk_src;
 		return &clk_src->base;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 28e4d7904d54..0b420466b6dd 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -1088,7 +1088,7 @@ static struct clock_source *dcn321_clock_source_create(
 	if (!clk_src)
 		return NULL;
 
-	if (dcn3_clk_src_construct(clk_src, ctx, bios, id,
+	if (dcn31_clk_src_construct(clk_src, ctx, bios, id,
 			regs, &cs_shift, &cs_mask)) {
 		clk_src->base.dp_clk_src = dp_clk_src;
 		return &clk_src->base;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 25/43] drm/amd/display: Ensure that DMCUB fw in use is loaded by DC and not VBIOS
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (22 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 24/43] drm/amd/display: use updated clock source init routine Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 26/43] drm/amd/display: Add additional guard for FCLK pstate message for DCN321 Alex Deucher
                   ` (17 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Fangzhi Zuo, Dillon Varone, Aurabindo Pillai

From: Dillon Varone <dillon.varone@amd.com>

[Why?]
On wake from S3/S4, the driver checks whether DMUB is initialized. On resume
from S4 the VBIOS loads DMUB, so the driver does not reload it because it
appears to be initialized already.

[How?]
Add a check for the DAL_FW bit to ensure that the loaded FW came from the
driver and not the VBIOS.

Signed-off-by: Fangzhi Zuo <Jerry.Zuo@amd.com>
Reviewed-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
index 0a498082ccc6..d298f6016e0b 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
@@ -299,11 +299,13 @@ void dmub_dcn32_set_outbox1_rptr(struct dmub_srv *dmub, uint32_t rptr_offset)
 
 bool dmub_dcn32_is_hw_init(struct dmub_srv *dmub)
 {
+	union dmub_fw_boot_status status;
 	uint32_t is_hw_init;
 
+	status.all = REG_READ(DMCUB_SCRATCH0);
 	REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_hw_init);
 
-	return is_hw_init != 0;
+	return is_hw_init != 0 && status.bits.dal_fw;
 }
 
 bool dmub_dcn32_is_supported(struct dmub_srv *dmub)
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 26/43] drm/amd/display: Add additional guard for FCLK pstate message for DCN321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (23 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 25/43] drm/amd/display: Ensure that DMCUB fw in use is loaded by DC and not VBIOS Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 27/43] drm/amd/display: Halve DTB Clock Value for DCN32 Alex Deucher
                   ` (16 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Fangzhi Zuo, Dillon Varone, Aurabindo Pillai

From: Dillon Varone <dillon.varone@amd.com>

Signed-off-by: Fangzhi Zuo <Jerry.Zuo@amd.com>
Acked-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c   | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 93fbecbc8065..9d2d2cda5543 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -346,8 +346,8 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
 	}
 
-	if (should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support) &&
-			clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21) {
+	if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 &&
+			should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support)) {
 		clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
 
 		/* To disable FCLK P-state switching, send FCLK_PSTATE_NOTSUPPORTED message to PMFW */
@@ -368,7 +368,8 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 			(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
 		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
 
-	if (clk_mgr_base->clks.fclk_p_state_change_support &&
+	if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 &&
+			clk_mgr_base->clks.fclk_p_state_change_support &&
 			(update_uclk || !clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
 		/* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
 		dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 27/43] drm/amd/display: Halve DTB Clock Value for DCN32
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (24 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 26/43] drm/amd/display: Add additional guard for FCLK pstate message for DCN321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 28/43] drm/amd/display: set dram speed for all states Alex Deucher
                   ` (15 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Fangzhi Zuo, Aurabindo Pillai

From: Fangzhi Zuo <Jerry.Zuo@amd.com>

The VBIOS default clock value was halved, so the hardcoded DTB value should
be halved as well.

The DTB clock should eventually come from SMU, but DTB clock switching is
not yet fully supported in SMU.

Halve the hardcoded DTB value for now so that UHBR10 lights up, and rely on
SMU for DTB clock switching once it becomes available. The workaround is for
DCN32 only; DCN321 should keep the original value.

Signed-off-by: Fangzhi Zuo <Jerry.Zuo@amd.com>
Acked-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 9d2d2cda5543..774de29fa532 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -599,7 +599,7 @@ void dcn32_clk_mgr_construct(
 	clk_mgr->dfs_ref_freq_khz = 100000;
 
 	clk_mgr->base.dprefclk_khz = 717000; /* Changed as per DCN3.2_clock_frequency doc */
-	clk_mgr->dccg->ref_dtbclk_khz = 477800;
+	clk_mgr->dccg->ref_dtbclk_khz = 268750;
 
 	/* integer part is now VCO frequency in kHz */
 	clk_mgr->base.dentist_vco_freq_khz = 4300000;//dcn32_get_vco_frequency_from_reg(clk_mgr);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 28/43] drm/amd/display: set dram speed for all states
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (25 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 27/43] drm/amd/display: Halve DTB Clock Value for DCN32 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 29/43] drm/amd/display: Disable DTB Ref Clock Switching in dcn32 Alex Deucher
                   ` (14 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Dillon Varone

From: Dillon Varone <dillon.varone@amd.com>

[WHY?]
If higher states have a memory speed of 0 MT/s, they currently do not get
set to the highest populated value, which can cause validation failures.

[HOW?]
Fill unpopulated higher states with the previous (highest populated) state's
value.
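
For illustration (hypothetical numbers, not taken from the patch): if the
derived table is dram_speed_mts = {16000, 18000, 0, 0}, states 2 and 3
inherit 18000 MT/s from state 1 after this change.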

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c   | 6 +++++-
 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 6 +++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index 7772beadd000..3f93b1f2d872 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -3549,7 +3549,6 @@ static void dcn32_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw
 			dcn3_2_soc.clock_limits[i].state = i;
 			dcn3_2_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
 			dcn3_2_soc.clock_limits[i].fabricclk_mhz = dcfclk_mhz[i];
-			dcn3_2_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
 
 			/* Fill all states with max values of all these clocks */
 			dcn3_2_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
@@ -3568,6 +3567,11 @@ static void dcn32_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw
 			else
 				dcn3_2_soc.clock_limits[i].socclk_mhz = bw_params->clk_table.entries[i].socclk_mhz;
 
+			if (!dram_speed_mts[i] && i > 0)
+				dcn3_2_soc.clock_limits[i].dram_speed_mts = dcn3_2_soc.clock_limits[i-1].dram_speed_mts;
+			else
+				dcn3_2_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+
 			/* These clocks cannot come from bw_params, always fill from dcn3_2_soc[0] */
 			/* PHYCLK_D18, PHYCLK_D32 */
 			dcn3_2_soc.clock_limits[i].phyclk_d18_mhz = dcn3_2_soc.clock_limits[0].phyclk_d18_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 0b420466b6dd..27d3aa7d751f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -1899,7 +1899,6 @@ static void dcn321_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *b
 			dcn3_21_soc.clock_limits[i].state = i;
 			dcn3_21_soc.clock_limits[i].dcfclk_mhz = dcfclk_mhz[i];
 			dcn3_21_soc.clock_limits[i].fabricclk_mhz = dcfclk_mhz[i];
-			dcn3_21_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
 
 			/* Fill all states with max values of all these clocks */
 			dcn3_21_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz;
@@ -1918,6 +1917,11 @@ static void dcn321_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *b
 			else
 				dcn3_21_soc.clock_limits[i].socclk_mhz = bw_params->clk_table.entries[i].socclk_mhz;
 
+			if (!dram_speed_mts[i] && i > 0)
+				dcn3_21_soc.clock_limits[i].dram_speed_mts = dcn3_21_soc.clock_limits[i-1].dram_speed_mts;
+			else
+				dcn3_21_soc.clock_limits[i].dram_speed_mts = dram_speed_mts[i];
+
 			/* These clocks cannot come from bw_params, always fill from dcn3_21_soc[0] */
 			/* PHYCLK_D18, PHYCLK_D32 */
 			dcn3_21_soc.clock_limits[i].phyclk_d18_mhz = dcn3_21_soc.clock_limits[0].phyclk_d18_mhz;
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 29/43] drm/amd/display: Disable DTB Ref Clock Switching in dcn32
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (26 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 28/43] drm/amd/display: set dram speed for all states Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 30/43] drm/amd/display: change dsc image width cap for dcn32 and dcn321 Alex Deucher
                   ` (13 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Dillon Varone

From: Dillon Varone <dillon.varone@amd.com>

[How & Why]
To be enabled once PMFW supports it.

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 4 ++++
 drivers/gpu/drm/amd/display/dc/dc.h                          | 1 -
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 774de29fa532..f147c65137c6 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -607,6 +607,10 @@ void dcn32_clk_mgr_construct(
 	if (clk_mgr->base.dentist_vco_freq_khz == 0)
 		clk_mgr->base.dentist_vco_freq_khz = 4300000; /* Updated as per HW docs */
 
+	if (clk_mgr->dccg->ref_dtbclk_khz != clk_mgr->base.boot_snapshot.dtbclk) {
+		clk_mgr->dccg->ref_dtbclk_khz = clk_mgr->base.boot_snapshot.dtbclk;
+	}
+
 	if (clk_mgr->base.boot_snapshot.dprefclk != 0) {
 		//ASSERT(clk_mgr->base.dprefclk_khz == clk_mgr->base.boot_snapshot.dprefclk);
 		//clk_mgr->base.dprefclk_khz = clk_mgr->base.boot_snapshot.dprefclk;
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index b6decdf820fa..e25e91bfe763 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -430,7 +430,6 @@ struct dc_clocks {
 	bool p_state_change_support;
 	enum dcn_zstate_support_state zstate_support;
 	bool dtbclk_en;
-	int dtbclk_khz;
 	bool fclk_p_state_change_support;
 	enum dcn_pwr_state pwr_state;
 	/*
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 30/43] drm/amd/display: change dsc image width cap for dcn32 and dcn321
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (27 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 29/43] drm/amd/display: Disable DTB Ref Clock Switching in dcn32 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 31/43] drm/amd/display: do not override CURSOR_REQ_MODE when SubVP is not enabled Alex Deucher
                   ` (12 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Dillon Varone

From: Dillon Varone <dillon.varone@amd.com>

Cap the DSC maximum image width at 6016 pixels for DCN3.2.x.

Signed-off-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c   | 3 +++
 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index 3f93b1f2d872..f2850071478c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -1695,6 +1695,9 @@ static struct display_stream_compressor *dcn32_dsc_create(
 	}
 
 	dsc2_construct(dsc, ctx, inst, &dsc_regs[inst], &dsc_shift, &dsc_mask);
+
+	dsc->max_image_width = 6016;
+
 	return &dsc->base;
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 27d3aa7d751f..376dd586b643 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -1679,6 +1679,9 @@ static struct display_stream_compressor *dcn321_dsc_create(
 	}
 
 	dsc2_construct(dsc, ctx, inst, &dsc_regs[inst], &dsc_shift, &dsc_mask);
+
+	dsc->max_image_width = 6016;
+
 	return &dsc->base;
 }
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 31/43] drm/amd/display: do not override CURSOR_REQ_MODE when SubVP is not enabled
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (28 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 30/43] drm/amd/display: change dsc image width cap for dcn32 and dcn321 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 32/43] drm/amd/display: Remove W/A for ODM memory pins Alex Deucher
                   ` (11 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Samson Tam

From: Samson Tam <Samson.Tam@amd.com>

[Why]
HUBP_UNBOUNDED_REQ_MODE and CURSOR_REQ_MODE are normally set together.
In the hubp32_prepare_subvp_buffering() call, CURSOR_REQ_MODE is set based
on whether SubVP is enabled or not.  In the non-MPO case, both REQ_MODE
registers are set to 1, but because SubVP is not enabled, CURSOR_REQ_MODE
is then set back to 0, overriding the previous value.

[How]
Do not set CURSOR_REQ_MODE to 0 when SubVP is not enabled.  This allows
CURSOR_REQ_MODE to stay at 1 in the non-MPO case.
Add a note to follow up on the single-pipe MPO + SubVP case, where both
REQ_MODE registers would be set to 0 but SubVP being enabled would override
CURSOR_REQ_MODE to 1.

Signed-off-by: Samson Tam <Samson.Tam@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
index 40afd33ffec6..8eed05aad54c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -555,10 +555,13 @@ void dcn32_program_mall_pipe_config(struct dc *dc, struct dc_state *context)
 		struct hubp *hubp = pipe->plane_res.hubp;
 
 		if (pipe->stream && hubp && hubp->funcs->hubp_prepare_subvp_buffering) {
+			/* TODO - remove setting CURSOR_REQ_MODE to 0 for legacy cases
+			 *      - need to investigate single pipe MPO + SubVP case to
+			 *        see if CURSOR_REQ_MODE will be back to 1 for SubVP
+			 *        when it should be 0 for MPO
+			 */
 			if (pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
 				hubp->funcs->hubp_prepare_subvp_buffering(hubp, true);
-			} else {
-				hubp->funcs->hubp_prepare_subvp_buffering(hubp, false);
 			}
 		}
 	}
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 32/43] drm/amd/display: Remove W/A for ODM memory pins
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (29 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 31/43] drm/amd/display: do not override CURSOR_REQ_MODE when SubVP is not enabled Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 33/43] drm/amd/display: add new pixel rate programming Alex Deucher
                   ` (10 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Alvin Lee

From: Alvin Lee <Alvin.Lee2@amd.com>

[Description]
By default we can now set
ODM_MEM_VBLANK_PWR_MODE=1

Signed-off-by: Alvin Lee <Alvin.Lee2@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c   | 2 +-
 drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index f2850071478c..b492eb424d19 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -997,7 +997,7 @@ static const struct dc_debug_options debug_defaults_drv = {
 			.dscl = false,
 			.cm = false,
 			.mpc = false,
-			.optc = false,
+			.optc = true,
 		}
 	},
 	.use_max_lb = true,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 376dd586b643..780409eb0e98 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -994,7 +994,7 @@ static const struct dc_debug_options debug_defaults_drv = {
 			.dscl = false,
 			.cm = false,
 			.mpc = false,
-			.optc = false,
+			.optc = true,
 		}
 	},
 	.use_max_lb = true,
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 33/43] drm/amd/display: add new pixel rate programming
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (30 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 32/43] drm/amd/display: Remove W/A for ODM memory pins Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 34/43] drm/amd/display: set link fec status during init for DCN32 Alex Deucher
                   ` (9 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jun Lei, Rodrigo Siqueira

From: Jun Lei <Jun.Lei@amd.com>

[why]
New dividers in DCCG need to be programmed depending on the encoder/stream
type, since the number of pixels per clock in OTG/DIO differs.

DIO also needs additional programming depending on the number of pixels per
clock.
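
For reference, a rough summary of the divider selection implemented below
in dcn32_calculate_dccg_k1_k2_values() (divider values per enum
pixel_rate_div):

  DP 128b/132b:                 k2 = /1
  HDMI TMDS / DVI:              k1 = /1, k2 = /2 (YCbCr 4:2:0) or /4 (else)
  DP 8b/10b, YCbCr 4:2:0:       k1 = /1, k2 = /2
  DP 8b/10b, YCbCr 4:2:2:       k1 = /2, k2 = /2
  DP 8b/10b, other encodings:   k2 = /4 (no ODM) or /2 (2:1 ODM combine)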

Signed-off-by: Jun Lei <Jun.Lei@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
---
 .../display/dc/dcn10/dcn10_stream_encoder.h   |  1 +
 .../drm/amd/display/dc/dcn20/dcn20_hwseq.c    | 14 ++++
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c |  1 +
 .../dc/dcn32/dcn32_dio_stream_encoder.h       |  3 +-
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.c    | 72 +++++++++++++++++--
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.h    |  2 +
 .../gpu/drm/amd/display/dc/dcn32/dcn32_init.c |  1 +
 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h  |  3 +-
 .../amd/display/dc/inc/hw/stream_encoder.h    |  3 +
 .../amd/display/dc/inc/hw_sequencer_private.h |  3 +
 10 files changed, 95 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
index f659aaab792e..034849e37860 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_stream_encoder.h
@@ -571,6 +571,7 @@ struct dcn10_stream_enc_registers {
 	type DP_SEC_GSP11_LINE_NUM
 
 #define SE_REG_FIELD_LIST_DCN3_2(type) \
+	type DIG_FIFO_OUTPUT_PIXEL_MODE;\
 	type DIG_SYMCLK_FE_ON;\
 	type DIG_FIFO_READ_START_LEVEL;\
 	type DIG_FIFO_ENABLE;\
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index d00a27893ab0..aed8ab06b41d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -661,7 +661,17 @@ enum dc_status dcn20_enable_stream_timing(
 	struct mpc_dwb_flow_control flow_control;
 	struct mpc *mpc = dc->res_pool->mpc;
 	bool rate_control_2x_pclk = (interlace || optc2_is_two_pixels_per_containter(&stream->timing));
+	unsigned int k1_div = PIXEL_RATE_DIV_NA;
+	unsigned int k2_div = PIXEL_RATE_DIV_NA;
 
+	if (hws->funcs.calculate_dccg_k1_k2_values && dc->res_pool->dccg->funcs->set_pixel_rate_div) {
+		hws->funcs.calculate_dccg_k1_k2_values(pipe_ctx, &k1_div, &k2_div);
+
+		dc->res_pool->dccg->funcs->set_pixel_rate_div(
+			dc->res_pool->dccg,
+			pipe_ctx->stream_res.tg->inst,
+			k1_div, k2_div);
+	}
 	/* by upper caller loop, pipe0 is parent pipe and be called first.
 	 * back end is set up by for pipe0. Other children pipe share back end
 	 * with pipe 0. No program is needed.
@@ -2485,6 +2495,10 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx)
 
 	tg->funcs->set_early_control(tg, early_control);
 
+	if (pipe_ctx->stream_res.stream_enc->funcs->set_input_mode)
+		pipe_ctx->stream_res.stream_enc->funcs->set_input_mode(pipe_ctx->stream_res.stream_enc,
+			timing->pixel_encoding == PIXEL_ENCODING_YCBCR420 ? 2 : 1);
+
 	/* enable audio only within mode set */
 	if (pipe_ctx->stream_res.audio != NULL) {
 		if (is_dp_128b_132b_signal(pipe_ctx))
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
index a3e0b95fdc84..c7d07203d21a 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
@@ -274,6 +274,7 @@ static const struct dccg_funcs dccg32_funcs = {
 	.set_audio_dtbclk_dto = dccg31_set_audio_dtbclk_dto,
 	.otg_add_pixel = dccg32_otg_add_pixel,
 	.otg_drop_pixel = dccg32_otg_drop_pixel,
+	.set_pixel_rate_div = dccg32_set_pixel_rate_div,
 };
 
 struct dccg *dccg32_create(
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
index 77da0a13525b..042bc9aca944 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dio_stream_encoder.h
@@ -240,7 +240,8 @@
 	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_READ_START_LEVEL, mask_sh),\
 	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, mask_sh),\
 	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_RESET, mask_sh),\
-	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, mask_sh)
+	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, mask_sh),\
+	SE_SF(DIG0_DIG_FIFO_CTRL0, DIG_FIFO_OUTPUT_PIXEL_MODE, mask_sh)
 
 #if defined(CONFIG_DRM_AMD_DC_HDCP)
 #define SE_COMMON_MASK_SH_LIST_DCN32(mask_sh)\
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
index 8eed05aad54c..5994edc3dd0c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -47,6 +47,7 @@
 #include "clk_mgr.h"
 #include "dsc.h"
 #include "dcn20/dcn20_optc.h"
+#include "dc_link_dp.h"
 
 #define DC_LOGGER_INIT(logger)
 
@@ -836,20 +837,44 @@ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
 	}
 }
 
+/*
+* Given any pipe_ctx, return the total ODM combine factor, and optionally return
+* the OPPids which are used
+* */
+static unsigned int get_odm_config(struct pipe_ctx *pipe_ctx, unsigned int *opp_instances)
+{
+	unsigned int opp_count = 1;
+	struct pipe_ctx *odm_pipe;
+
+	/* First get to the top pipe */
+	for (odm_pipe = pipe_ctx; odm_pipe->prev_odm_pipe; odm_pipe = odm_pipe->prev_odm_pipe)
+		;
+
+	/* First pipe is always used */
+	if (opp_instances)
+		opp_instances[0] = odm_pipe->stream_res.opp->inst;
+
+	/* Find and count odm pipes, if any */
+	for (odm_pipe = odm_pipe->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+		if (opp_instances)
+			opp_instances[opp_count] = odm_pipe->stream_res.opp->inst;
+		opp_count++;
+	}
+
+	return opp_count;
+}
+
 void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *pipe_ctx)
 {
 	struct pipe_ctx *odm_pipe;
-	int opp_cnt = 1;
-	int opp_inst[MAX_PIPES] = { pipe_ctx->stream_res.opp->inst };
+	int opp_cnt = 0;
+	int opp_inst[MAX_PIPES] = {0};
 	bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing));
 	struct mpc_dwb_flow_control flow_control;
 	struct mpc *mpc = dc->res_pool->mpc;
 	int i;
 
-	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
-		opp_inst[opp_cnt] = odm_pipe->stream_res.opp->inst;
-		opp_cnt++;
-	}
+	opp_cnt = get_odm_config(pipe_ctx, opp_inst);
 
 	if (opp_cnt > 1)
 		pipe_ctx->stream_res.tg->funcs->set_odm_combine(
@@ -892,3 +917,38 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
 		update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC);
 }
 
+unsigned int dcn32_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsigned int *k1_div, unsigned int *k2_div)
+{
+	struct dc_stream_state *stream = pipe_ctx->stream;
+	unsigned int odm_combine_factor = 0;
+
+	odm_combine_factor = get_odm_config(pipe_ctx, NULL);
+
+	if (is_dp_128b_132b_signal(pipe_ctx)) {
+		*k2_div = PIXEL_RATE_DIV_BY_1;
+	} else if (dc_is_hdmi_tmds_signal(pipe_ctx->stream->signal) || dc_is_dvi_signal(pipe_ctx->stream->signal)) {
+		*k1_div = PIXEL_RATE_DIV_BY_1;
+		if (stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR420)
+			*k2_div = PIXEL_RATE_DIV_BY_2;
+		else
+			*k2_div = PIXEL_RATE_DIV_BY_4;
+	} else if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
+		if (stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR420) {
+			*k1_div = PIXEL_RATE_DIV_BY_1;
+			*k2_div = PIXEL_RATE_DIV_BY_2;
+		} else if (stream->timing.pixel_encoding == PIXEL_ENCODING_YCBCR422) {
+			*k1_div = PIXEL_RATE_DIV_BY_2;
+			*k2_div = PIXEL_RATE_DIV_BY_2;
+		} else {
+			if (odm_combine_factor == 1)
+				*k2_div = PIXEL_RATE_DIV_BY_4;
+			else if (odm_combine_factor == 2)
+				*k2_div = PIXEL_RATE_DIV_BY_2;
+		}
+	}
+
+	if ((*k1_div == PIXEL_RATE_DIV_NA) && (*k2_div == PIXEL_RATE_DIV_NA))
+		ASSERT(false);
+
+	return odm_combine_factor;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
index 737a7fac5cf9..2a5bdcf58bc6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.h
@@ -61,4 +61,6 @@ void dcn32_subvp_update_force_pstate(struct dc *dc, struct dc_state *context);
 
 void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *pipe_ctx);
 
+unsigned int dcn32_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsigned int *k1_div, unsigned int *k2_div);
+
 #endif /* __DC_HWSS_DCN32_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
index d1e3db1bc488..7f492734f881 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
@@ -141,6 +141,7 @@ static const struct hwseq_private_funcs dcn32_private_funcs = {
 	.program_mall_pipe_config = dcn32_program_mall_pipe_config,
 	.subvp_update_force_pstate = dcn32_subvp_update_force_pstate,
 	.update_mall_sel = dcn32_update_mall_sel,
+	.calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
 };
 
 void dcn32_hw_sequencer_init_functions(struct dc *dc)
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
index 2fbd65d88b61..a06c8daed5ed 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
@@ -59,7 +59,8 @@ enum dentist_dispclk_change_mode {
 enum pixel_rate_div {
    PIXEL_RATE_DIV_BY_1 = 0,
    PIXEL_RATE_DIV_BY_2 = 1,
-   PIXEL_RATE_DIV_BY_4 = 3
+   PIXEL_RATE_DIV_BY_4 = 3,
+   PIXEL_RATE_DIV_NA = 0xF
 };
 
 struct dccg {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h b/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
index 36ec56524afd..e5fe0f6adc86 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/stream_encoder.h
@@ -247,6 +247,9 @@ struct stream_encoder_funcs {
 
 	uint32_t (*get_fifo_cal_average_level)(
 		struct stream_encoder *enc);
+
+	void (*set_input_mode)(
+		struct stream_encoder *enc, unsigned int pix_per_container);
 };
 
 struct hpo_dp_stream_encoder_state {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
index 5a0648784872..60d89e863d1e 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
@@ -147,6 +147,9 @@ struct hwseq_private_funcs {
 	void (*program_mall_pipe_config)(struct dc *dc, struct dc_state *context);
 	void (*subvp_update_force_pstate)(struct dc *dc, struct dc_state *context);
 	void (*update_mall_sel)(struct dc *dc, struct dc_state *context);
+	unsigned int (*calculate_dccg_k1_k2_values)(struct pipe_ctx *pipe_ctx,
+			unsigned int *k1_div,
+			unsigned int *k2_div);
 #endif
 };
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 34/43] drm/amd/display: set link fec status during init for DCN32
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (31 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 33/43] drm/amd/display: add new pixel rate programming Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 35/43] drm/amd/display: update disp pattern generator routine for DCN30 Alex Deucher
                   ` (8 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jingwen Zhu

From: Jingwen Zhu <Jingwen.Zhu@github.amd.com>

Record the link FEC status during hardware init: if a DIG is enabled and FEC
is already active on its link encoder, mark the link's fec_state as enabled.

Signed-off-by: Jingwen Zhu <Jingwen.Zhu@github.amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
index 5994edc3dd0c..04360553a43f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -635,8 +635,12 @@ void dcn32_init_hw(struct dc *dc)
 
 		/* Check for enabled DIG to identify enabled display */
 		if (link->link_enc->funcs->is_dig_enabled &&
-			link->link_enc->funcs->is_dig_enabled(link->link_enc))
+			link->link_enc->funcs->is_dig_enabled(link->link_enc)) {
 			link->link_status.link_active = true;
+			if (link->link_enc->funcs->fec_is_active &&
+					link->link_enc->funcs->fec_is_active(link->link_enc))
+				link->fec_state = dc_link_fec_enabled;
+		}
 	}
 
 	/* Power gate DSCs */
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 35/43] drm/amd/display: update disp pattern generator routine for DCN30
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (32 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 34/43] drm/amd/display: set link fec status during init for DCN32 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 36/43] drm/amd/display: cleaning up smu_if to add future flexibility Alex Deucher
                   ` (7 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Aurabindo Pillai

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../drm/amd/display/dc/dcn30/dcn30_hwseq.c    | 33 ++-----------------
 1 file changed, 2 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
index 782b8db451b4..ecdc7c781217 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
@@ -959,35 +959,6 @@ void dcn30_set_disp_pattern_generator(const struct dc *dc,
 		const struct tg_color *solid_color,
 		int width, int height, int offset)
 {
-	struct stream_resource *stream_res = &pipe_ctx->stream_res;
-	struct pipe_ctx *mpcc_pipe;
-
-	if (test_pattern != CONTROLLER_DP_TEST_PATTERN_VIDEOMODE) {
-		pipe_ctx->vtp_locked = false;
-		/* turning on DPG */
-		stream_res->opp->funcs->opp_set_disp_pattern_generator(stream_res->opp, test_pattern, color_space,
-				color_depth, solid_color, width, height, offset);
-
-		/* Defer hubp blank if tg is locked */
-		if (stream_res->tg->funcs->is_tg_enabled(stream_res->tg)) {
-			if (stream_res->tg->funcs->is_locked(stream_res->tg))
-				pipe_ctx->vtp_locked = true;
-			else {
-				/* Blank HUBP to allow p-state during blank on all timings */
-				pipe_ctx->plane_res.hubp->funcs->set_blank(pipe_ctx->plane_res.hubp, true);
-
-				for (mpcc_pipe = pipe_ctx->bottom_pipe; mpcc_pipe; mpcc_pipe = mpcc_pipe->bottom_pipe)
-					mpcc_pipe->plane_res.hubp->funcs->set_blank(mpcc_pipe->plane_res.hubp, true);
-			}
-		}
-	} else {
-		/* turning off DPG */
-		pipe_ctx->plane_res.hubp->funcs->set_blank(pipe_ctx->plane_res.hubp, false);
-		for (mpcc_pipe = pipe_ctx->bottom_pipe; mpcc_pipe; mpcc_pipe = mpcc_pipe->bottom_pipe)
-			if (mpcc_pipe->plane_res.hubp)
-				mpcc_pipe->plane_res.hubp->funcs->set_blank(mpcc_pipe->plane_res.hubp, false);
-
-		stream_res->opp->funcs->opp_set_disp_pattern_generator(stream_res->opp, test_pattern, color_space,
-				color_depth, solid_color, width, height, offset);
-	}
+	pipe_ctx->stream_res.opp->funcs->opp_set_disp_pattern_generator(pipe_ctx->stream_res.opp, test_pattern,
+			color_space, color_depth, solid_color, width, height, offset);
 }
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 36/43] drm/amd/display: cleaning up smu_if to add future flexibility
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (33 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 35/43] drm/amd/display: update disp pattern generator routine for DCN30 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 37/43] drm/amd/display: Match dprefclk with clk registers Alex Deucher
                   ` (6 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Martin Leung

From: Martin Leung <Martin.Leung@amd.com>

Clean up code that uses old variables and add SMU interfaces for future
flexibility: move the dalsmc.h and smu13_driver_if.h includes, reorder the
DCN32 SMU message helpers, add a stripped-down dcn32_smu13_driver_if.h with
the DAL-relevant clock, watermark and table definitions, and include the
DMCUB internal registers in the DMUB DCN32 register list.

Signed-off-by: Martin Leung <Martin.Leung@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c  | 19 +++---
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h  |  7 +--
 .../dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h  | 63 +++++++++++++++++++
 .../gpu/drm/amd/display/dmub/src/dmub_dcn32.c |  5 +-
 4 files changed, 78 insertions(+), 16 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
index 35e8afe6db93..95ab268e9179 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
@@ -27,6 +27,8 @@
 
 #include "clk_mgr_internal.h"
 #include "reg_helper.h"
+#include "dalsmc.h"
+#include "smu13_driver_if.h"
 
 #define mmDAL_MSG_REG  0x1628A
 #define mmDAL_ARG_REG  0x16273
@@ -38,8 +40,6 @@
 	mm ## reg_name
 
 #include "logger_types.h"
-#include "dalsmc.h"
-#include "smu13_driver_if.h"
 
 #define smu_print(str, ...) {DC_LOG_SMU(str, ##__VA_ARGS__); }
 
@@ -100,14 +100,6 @@ void dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool e
 			DALSMC_MSG_SetFclkSwitchAllow, enable ? FCLK_PSTATE_SUPPORTED : FCLK_PSTATE_NOTSUPPORTED, NULL);
 }
 
-void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr)
-{
-       smu_print("SMU Transfer WM table DRAM 2 SMU\n");
-
-       dcn32_smu_send_msg_with_param(clk_mgr,
-                       DALSMC_MSG_TransferTableDram2Smu, TABLE_WATERMARKS, NULL);
-}
-
 void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways)
 {
 	smu_print("Numways for SubVP : %d\n", num_ways);
@@ -115,3 +107,10 @@ void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsig
 	dcn32_smu_send_msg_with_param(clk_mgr, DALSMC_MSG_SetCabForUclkPstate, num_ways, NULL);
 }
 
+void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr)
+{
+	smu_print("SMU Transfer WM table DRAM 2 SMU\n");
+
+	dcn32_smu_send_msg_with_param(clk_mgr,
+			DALSMC_MSG_TransferTableDram2Smu, TABLE_WATERMARKS, NULL);
+}
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
index 11b25de1527f..ee6259108add 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
@@ -38,12 +38,9 @@
 #define DALSMC_Result_OK				0x1
 
 void
-dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr,
-				   bool enable);
-
+dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool enable);
 void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
-
 void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways);
-
+void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
 
 #endif /* __DCN32_CLK_MGR_SMU_MSG_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h
new file mode 100644
index 000000000000..d30fbbdd1792
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_smu13_driver_if.h
@@ -0,0 +1,63 @@
+// This is a stripped-down version of the smu13_driver_if.h file for the relevant DAL interfaces.
+
+#define SMU13_DRIVER_IF_VERSION  0x18
+
+//Only Clks that have DPM descriptors are listed here
+typedef enum {
+	PPCLK_GFXCLK = 0,
+	PPCLK_SOCCLK,
+	PPCLK_UCLK,
+	PPCLK_FCLK,
+	PPCLK_DCLK_0,
+	PPCLK_VCLK_0,
+	PPCLK_DCLK_1,
+	PPCLK_VCLK_1,
+	PPCLK_DISPCLK,
+	PPCLK_DPPCLK,
+	PPCLK_DPREFCLK,
+	PPCLK_DCFCLK,
+	PPCLK_DTBCLK,
+	PPCLK_COUNT,
+} PPCLK_e;
+
+typedef struct {
+	uint8_t  WmSetting;
+	uint8_t  Flags;
+	uint8_t  Padding[2];
+
+} WatermarkRowGeneric_t;
+
+#define NUM_WM_RANGES 4
+
+typedef enum {
+	WATERMARKS_CLOCK_RANGE = 0,
+	WATERMARKS_DUMMY_PSTATE,
+	WATERMARKS_MALL,
+	WATERMARKS_COUNT,
+} WATERMARKS_FLAGS_e;
+
+typedef struct {
+	// Watermarks
+	WatermarkRowGeneric_t WatermarkRow[NUM_WM_RANGES];
+} Watermarks_t;
+
+typedef struct {
+	Watermarks_t Watermarks;
+	uint32_t  Spare[16];
+
+	uint32_t     MmHubPadding[8]; // SMU internal use
+} WatermarksExternal_t;
+
+// Table types
+#define TABLE_PMFW_PPTABLE            0
+#define TABLE_COMBO_PPTABLE           1
+#define TABLE_WATERMARKS              2
+#define TABLE_AVFS_PSM_DEBUG          3
+#define TABLE_PMSTATUSLOG             4
+#define TABLE_SMU_METRICS             5
+#define TABLE_DRIVER_SMU_CONFIG       6
+#define TABLE_ACTIVITY_MONITOR_COEFF  7
+#define TABLE_OVERDRIVE               8
+#define TABLE_I2C_COMMANDS            9
+#define TABLE_DRIVER_INFO             10
+#define TABLE_COUNT                   11
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
index d298f6016e0b..a76da0131add 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn32.c
@@ -39,7 +39,10 @@
 
 const struct dmub_srv_dcn32_regs dmub_srv_dcn32_regs = {
 #define DMUB_SR(reg) REG_OFFSET_EXP(reg),
-		{ DMUB_DCN32_REGS() },
+	{
+		DMUB_DCN32_REGS()
+		DMCUB_INTERNAL_REGS()
+	},
 #undef DMUB_SR
 
 #define DMUB_SF(reg, field) FD_MASK(reg, field),
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 37/43] drm/amd/display: Match dprefclk with clk registers
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (34 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 36/43] drm/amd/display: cleaning up smu_if to add future flexibility Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 38/43] drm/amd/display: Introduce new update_clocks logic Alex Deucher
                   ` (5 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Samson Tam

From: Samson Tam <Samson.Tam@amd.com>

Update base.dprefclk_khz to match the result from dcn32_dump_clk_registers().
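
As a worked check (assuming the register dump resolves to a divide-by-24 DID
divider, which is not stated in the patch): dprefclk = 4 *
dentist_vco_freq_khz / divider = 4 * 4,300,000 kHz / 24 ~= 716,666 kHz,
which matches the new value below.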

Signed-off-by: Samson Tam <Samson.Tam@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index f147c65137c6..bd2352e61040 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -598,7 +598,11 @@ void dcn32_clk_mgr_construct(
 	clk_mgr->ss_on_dprefclk = false;
 	clk_mgr->dfs_ref_freq_khz = 100000;
 
-	clk_mgr->base.dprefclk_khz = 717000; /* Changed as per DCN3.2_clock_frequency doc */
+	/* Changed from DCN3.2_clock_frequency doc to match
+	 * dcn32_dump_clk_registers from 4 * dentist_vco_freq_khz /
+	 * dprefclk DID divider
+	 */
+	clk_mgr->base.dprefclk_khz = 716666;
 	clk_mgr->dccg->ref_dtbclk_khz = 268750;
 
 	/* integer part is now VCO frequency in kHz */
@@ -612,8 +616,7 @@ void dcn32_clk_mgr_construct(
 	}
 
 	if (clk_mgr->base.boot_snapshot.dprefclk != 0) {
-		//ASSERT(clk_mgr->base.dprefclk_khz == clk_mgr->base.boot_snapshot.dprefclk);
-		//clk_mgr->base.dprefclk_khz = clk_mgr->base.boot_snapshot.dprefclk;
+		clk_mgr->base.dprefclk_khz = clk_mgr->base.boot_snapshot.dprefclk;
 	}
 	dcn32_clock_read_ss_info(clk_mgr);
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 38/43] drm/amd/display: Introduce new update_clocks logic
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (35 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 37/43] drm/amd/display: Match dprefclk with clk registers Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 39/43] drm/amd/display: FCLK P-state support updates Alex Deucher
                   ` (4 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jun Lei

From: Jun Lei <Jun.Lei@amd.com>

[why]
DCN has sidebands to control some clocks, so it is useful for clk_mgr to
always update the clocks it explicitly controls rather than skip them;
this enables more configurations to work without SMU.

[how]
Only skip handling of clocks whose frequency is managed by SMU.  For clocks
with a DENTIST sideband (DISP/DPP), only skip the voltage request when SMU
is not available, but otherwise proceed normally.
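
A minimal sketch of the resulting structure in dcn32_update_clocks()
(simplified from the diff below):

	if (clk_mgr->smu_present) {
		/* SMU-owned clocks and messages: DCFCLK, deep-sleep DCFCLK,
		 * UCLK, FCLK, CAB ways, P-state support notifications
		 */
	}

	/* DENTIST-driven DPPCLK/DISPCLK are handled unconditionally; only
	 * the SMU hard-min (voltage) request stays behind the guard
	 */
	if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz,
			     clk_mgr_base->clks.dppclk_khz)) {
		clk_mgr_base->clks.dppclk_khz = new_clocks->dppclk_khz;
		if (clk_mgr->smu_present)
			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK,
				khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
	}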

Signed-off-by: Jun Lei <Jun.Lei@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c  | 125 ++++++++++--------
 1 file changed, 73 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index bd2352e61040..390432537305 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -310,69 +310,84 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 	if (display_count == 0)
 		enter_display_off = true;
 
-	if (enter_display_off == safe_to_lower)
-		dcn30_smu_set_num_of_displays(clk_mgr, display_count);
+	if (clk_mgr->smu_present) {
+		if (enter_display_off == safe_to_lower)
+			dcn30_smu_set_num_of_displays(clk_mgr, display_count);
 
-	if (dc->debug.force_min_dcfclk_mhz > 0)
-		new_clocks->dcfclk_khz = (new_clocks->dcfclk_khz > (dc->debug.force_min_dcfclk_mhz * 1000)) ?
-				new_clocks->dcfclk_khz : (dc->debug.force_min_dcfclk_mhz * 1000);
+		if (dc->debug.force_min_dcfclk_mhz > 0)
+			new_clocks->dcfclk_khz = (new_clocks->dcfclk_khz > (dc->debug.force_min_dcfclk_mhz * 1000)) ?
+					new_clocks->dcfclk_khz : (dc->debug.force_min_dcfclk_mhz * 1000);
 
-	if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
-		clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
-		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
-	}
+		if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
+			clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
+		}
 
-	if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz)) {
-		clk_mgr_base->clks.dcfclk_deep_sleep_khz = new_clocks->dcfclk_deep_sleep_khz;
-		dcn30_smu_set_min_deep_sleep_dcef_clk(clk_mgr, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_deep_sleep_khz));
-	}
+		if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz)) {
+			clk_mgr_base->clks.dcfclk_deep_sleep_khz = new_clocks->dcfclk_deep_sleep_khz;
+			dcn30_smu_set_min_deep_sleep_dcef_clk(clk_mgr, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_deep_sleep_khz));
+		}
 
-	if (should_set_clock(safe_to_lower, new_clocks->socclk_khz, clk_mgr_base->clks.socclk_khz))
-		/* We don't actually care about socclk, don't notify SMU of hard min */
-		clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
+		if (should_set_clock(safe_to_lower, new_clocks->socclk_khz, clk_mgr_base->clks.socclk_khz))
+			/* We don't actually care about socclk, don't notify SMU of hard min */
+			clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz;
 
-	clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
-	clk_mgr_base->clks.fclk_prev_p_state_change_support = clk_mgr_base->clks.fclk_p_state_change_support;
+		clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support;
+		clk_mgr_base->clks.fclk_prev_p_state_change_support = clk_mgr_base->clks.fclk_p_state_change_support;
+		clk_mgr_base->clks.prev_num_ways = clk_mgr_base->clks.num_ways;
 
-	total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
-	p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
-	fclk_p_state_change_support = new_clocks->fclk_p_state_change_support || (total_plane_count == 0);
-	if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
-		clk_mgr_base->clks.p_state_change_support = p_state_change_support;
+		if (clk_mgr_base->clks.num_ways != new_clocks->num_ways &&
+				clk_mgr_base->clks.num_ways < new_clocks->num_ways) {
+			clk_mgr_base->clks.num_ways = new_clocks->num_ways;
+			dcn32_smu_send_cab_for_uclk_message(clk_mgr, clk_mgr_base->clks.num_ways);
+		}
 
-		/* to disable P-State switching, set UCLK min = max */
-		if (!clk_mgr_base->clks.p_state_change_support)
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
-					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
-	}
+		total_plane_count = clk_mgr_helper_get_active_plane_cnt(dc, context);
+		p_state_change_support = new_clocks->p_state_change_support || (total_plane_count == 0);
+		fclk_p_state_change_support = new_clocks->fclk_p_state_change_support || (total_plane_count == 0);
+		if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) {
+			clk_mgr_base->clks.p_state_change_support = p_state_change_support;
+
+			/* to disable P-State switching, set UCLK min = max */
+			if (!clk_mgr_base->clks.p_state_change_support)
+				dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+						clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+		}
+
+		if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 &&
+				should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support)) {
+			clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
 
-	if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 &&
-			should_update_pstate_support(safe_to_lower, fclk_p_state_change_support, clk_mgr_base->clks.fclk_p_state_change_support)) {
-		clk_mgr_base->clks.fclk_p_state_change_support = fclk_p_state_change_support;
+			/* To disable FCLK P-state switching, send FCLK_PSTATE_NOTSUPPORTED message to PMFW */
+			if (!clk_mgr_base->clks.fclk_p_state_change_support) {
+				/* Handle code for sending a message to PMFW that FCLK P-state change is not supported */
+				dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_NOTSUPPORTED);
+			}
+		}
 
-		/* To disable FCLK P-state switching, send FCLK_PSTATE_NOTSUPPORTED message to PMFW */
-		if (!clk_mgr_base->clks.fclk_p_state_change_support) {
-			/* Handle code for sending a message to PMFW that FCLK P-state change is not supported */
-			dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_NOTSUPPORTED);
+		/* Always update saved value, even if new value not set due to P-State switching unsupported */
+		if (should_set_clock(safe_to_lower, new_clocks->dramclk_khz, clk_mgr_base->clks.dramclk_khz)) {
+			clk_mgr_base->clks.dramclk_khz = new_clocks->dramclk_khz;
+			update_uclk = true;
 		}
-	}
 
-	/* Always update saved value, even if new value not set due to P-State switching unsupported */
-	if (should_set_clock(safe_to_lower, new_clocks->dramclk_khz, clk_mgr_base->clks.dramclk_khz)) {
-		clk_mgr_base->clks.dramclk_khz = new_clocks->dramclk_khz;
-		update_uclk = true;
-	}
+		/* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
+		if (clk_mgr_base->clks.p_state_change_support &&
+				(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+
+		if (clk_mgr_base->clks.fclk_p_state_change_support &&
+				(update_uclk || !clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
 
-	/* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
-	if (clk_mgr_base->clks.p_state_change_support &&
-			(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
-		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+			/* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
+			dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
+		}
 
-	if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 &&
-			clk_mgr_base->clks.fclk_p_state_change_support &&
-			(update_uclk || !clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
-		/* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
-		dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
+		if (clk_mgr_base->clks.num_ways != new_clocks->num_ways &&
+				clk_mgr_base->clks.num_ways > new_clocks->num_ways) {
+			clk_mgr_base->clks.num_ways = new_clocks->num_ways;
+			dcn32_smu_send_cab_for_uclk_message(clk_mgr, clk_mgr_base->clks.num_ways);
+		}
 	}
 
 	if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr_base->clks.dppclk_khz)) {
@@ -380,13 +395,19 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 			dpp_clock_lowered = true;
 
 		clk_mgr_base->clks.dppclk_khz = new_clocks->dppclk_khz;
-		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+
+		if (clk_mgr->smu_present)
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+
 		update_dppclk = true;
 	}
 
 	if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
 		clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
-		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
+
+		if (clk_mgr->smu_present)
+			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
+
 		update_dispclk = true;
 	}
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 39/43] drm/amd/display: FCLK P-state support updates
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (36 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 38/43] drm/amd/display: Introduce new update_clocks logic Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 40/43] drm/amd/display: Updates for OTG and DCCG clocks Alex Deucher
                   ` (3 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Chaitanya Dhere

From: Chaitanya Dhere <chaitanya.dhere@amd.com>

[Why]
Previously, FCLK P-state enable messages were sent on every call to
update_clocks based on DML output. This increased the number of message
transactions between DC and PMFW.

[How]
Update the code to check the safe_to_lower status and send the message,
based on the DML input, only when the FCLK P-state support state actually
changes. This reduces message transactions. Also drop the local
DALSMC_MSG_SetFclkSwitchAllow define, which is no longer needed.

Signed-off-by: Chaitanya Dhere <chaitanya.dhere@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c  | 11 +++++++----
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h  |  1 -
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 390432537305..6a1d7c86e6b7 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -284,7 +284,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 	bool dpp_clock_lowered = false;
 	struct dmcu *dmcu = clk_mgr_base->ctx->dc->res_pool->dmcu;
 	bool force_reset = false;
-	bool update_uclk = false;
+	bool update_uclk = false, update_fclk = false;
 	bool p_state_change_support;
 	bool fclk_p_state_change_support;
 	int total_plane_count;
@@ -371,14 +371,17 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 			update_uclk = true;
 		}
 
+		/* Always update saved value, even if new value not set due to P-State switching unsupported. Also check safe_to_lower for FCLK */
+		if (safe_to_lower && (clk_mgr_base->clks.fclk_p_state_change_support != clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
+			update_fclk = true;
+		}
+
 		/* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
 		if (clk_mgr_base->clks.p_state_change_support &&
 				(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
 			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
 
-		if (clk_mgr_base->clks.fclk_p_state_change_support &&
-				(update_uclk || !clk_mgr_base->clks.fclk_prev_p_state_change_support)) {
-
+		if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 && clk_mgr_base->clks.fclk_p_state_change_support && update_fclk) {
 			/* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
 			dcn32_smu_send_fclk_pstate_message(clk_mgr, FCLK_PSTATE_SUPPORTED);
 		}
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
index ee6259108add..5f69cdcb9885 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
@@ -33,7 +33,6 @@
 #define FCLK_PSTATE_SUPPORTED          0x01
 
 /* TODO Remove this MSG ID define after it becomes available in dalsmc */
-#define DALSMC_MSG_SetFclkSwitchAllow	0x11
 #define DALSMC_MSG_SetCabForUclkPstate	0x12
 #define DALSMC_Result_OK				0x1
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 40/43] drm/amd/display: Updates for OTG and DCCG clocks
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (37 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 39/43] drm/amd/display: FCLK P-state support updates Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 41/43] drm/amd/display: Implement DTBCLK ref switching on dcn32 Alex Deucher
                   ` (2 subsequent siblings)
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Samson Tam

From: Samson Tam <Samson.Tam@amd.com>

Use DTBCLK for valid pixel clock generation
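
A note on the DTO programming change below (the relation itself comes from
the existing comment "phase / modulo = dtbclk / dtbclk ref"): with
modulo = ref_dtbclk_khz * 1000 and phase = req_dtbclk_khz * 1000, the
programmed ratio is simply req_dtbclk_khz / ref_dtbclk_khz, so the DTO
output tracks the requested DTBCLK directly instead of relying on the
previous 0xffffffff-modulo rounding.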

Signed-off-by: Samson Tam <Samson.Tam@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h | 5 ++++-
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c | 4 ++--
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h | 4 ++++
 drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h | 1 +
 4 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
index df155cc2bfea..3fe5882ed018 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
@@ -514,7 +514,6 @@ struct dcn_optc_registers {
 	type DIG_UPDATE_POSITION_X;\
 	type DIG_UPDATE_POSITION_Y;\
 	type OTG_H_TIMING_DIV_MODE;\
-	type OTG_H_TIMING_DIV_MODE_MANUAL;\
 	type OTG_DRR_TIMING_DBUF_UPDATE_MODE;\
 	type OTG_CRC_DSC_MODE;\
 	type OTG_CRC_DATA_STREAM_COMBINE_MODE;\
@@ -522,13 +521,17 @@ struct dcn_optc_registers {
 	type OTG_CRC_DATA_FORMAT;\
 	type OTG_V_TOTAL_LAST_USED_BY_DRR;
 
+#define TG_REG_FIELD_LIST_DCN3_2(type) \
+	type OTG_H_TIMING_DIV_MODE_MANUAL;
 
 struct dcn_optc_shift {
 	TG_REG_FIELD_LIST(uint8_t)
+	TG_REG_FIELD_LIST_DCN3_2(uint8_t)
 };
 
 struct dcn_optc_mask {
 	TG_REG_FIELD_LIST(uint32_t)
+	TG_REG_FIELD_LIST_DCN3_2(uint32_t)
 };
 
 struct optc {
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
index c7d07203d21a..828a72c0720c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
@@ -148,8 +148,8 @@ void dccg32_set_dtbclk_dto(
 		uint32_t modulo, phase;
 
 		// phase / modulo = dtbclk / dtbclk ref
-		modulo = 0xffffffff;
-		phase = (((unsigned long long)modulo * req_dtbclk_khz) + dccg->ref_dtbclk_khz - 1) / dccg->ref_dtbclk_khz;
+		modulo = dccg->ref_dtbclk_khz * 1000;
+		phase = req_dtbclk_khz * 1000;
 
 		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], modulo);
 		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], phase);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
index 0e54c0a105a1..1c46fad0977b 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.h
@@ -45,6 +45,7 @@
 	SR(PHYDSYMCLK_CLOCK_CNTL),\
 	SR(PHYESYMCLK_CLOCK_CNTL),\
 	SR(DPSTREAMCLK_CNTL),\
+	SR(HDMISTREAMCLK_CNTL),\
 	SR(SYMCLK32_SE_CNTL),\
 	SR(SYMCLK32_LE_CNTL),\
 	DCCG_SRII(PIXEL_RATE_CNTL, OTG, 0),\
@@ -98,6 +99,8 @@
 	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK2_SRC_SEL, mask_sh),\
 	DCCG_SF(DPSTREAMCLK_CNTL, DPSTREAMCLK3_SRC_SEL, mask_sh),\
 	DCCG_SF(HDMISTREAMCLK_CNTL, HDMISTREAMCLK0_EN, mask_sh),\
+	DCCG_SF(HDMISTREAMCLK_CNTL, HDMISTREAMCLK0_DTO_FORCE_DIS, mask_sh),\
+	DCCG_SF(HDMISTREAMCLK_CNTL, HDMISTREAMCLK0_SRC_SEL, mask_sh),\
 	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE0_SRC_SEL, mask_sh),\
 	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE1_SRC_SEL, mask_sh),\
 	DCCG_SF(SYMCLK32_SE_CNTL, SYMCLK32_SE2_SRC_SEL, mask_sh),\
@@ -143,6 +146,7 @@
 	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P2_EN, mask_sh),\
 	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P3_SRC_SEL, mask_sh),\
 	DCCG_SF(DTBCLK_P_CNTL, DTBCLK_P3_EN, mask_sh),\
+	DCCG_SF(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO_SEL, mask_sh),\
 	DCCG_SF(DCCG_AUDIO_DTO_SOURCE, DCCG_AUDIO_DTO0_SOURCE_SEL, mask_sh)
 
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
index e07b317ed3f4..5e57c39235fa 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_optc.h
@@ -245,6 +245,7 @@
 	SF(OTG0_OTG_DRR_TRIGGER_WINDOW, OTG_DRR_TRIGGER_WINDOW_END_X, mask_sh),\
 	SF(OTG0_OTG_DRR_V_TOTAL_CHANGE, OTG_DRR_V_TOTAL_CHANGE_LIMIT, mask_sh),\
 	SF(OTG0_OTG_H_TIMING_CNTL, OTG_H_TIMING_DIV_MODE, mask_sh),\
+	SF(OTG0_OTG_H_TIMING_CNTL, OTG_H_TIMING_DIV_MODE_MANUAL, mask_sh),\
 	SF(OTG0_OTG_DOUBLE_BUFFER_CONTROL, OTG_DRR_TIMING_DBUF_UPDATE_MODE, mask_sh),\
 	SF(OTG0_OTG_DRR_CONTROL, OTG_V_TOTAL_LAST_USED_BY_DRR, mask_sh)
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 41/43] drm/amd/display: Implement DTBCLK ref switching on dcn32
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (38 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 40/43] drm/amd/display: Updates for OTG and DCCG clocks Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 42/43] drm/amd/display: Add ODM seamless boot support Alex Deucher
  2022-05-25 16:19 ` [PATCH 43/43] drm/amd/display: Drop unnecessary guard from DC resource Alex Deucher
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Alvin Lee

From: Alvin Lee <Alvin.Lee2@amd.com>

[WHY & HOW]
Implements DTB ref clock switching with reg key default to OFF.
Refactors dccg DTBCLK logic so that redundant state information is no
longer stored in dccg. Also removes duplicated functions that should be
inherited from other dcn versions.
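
Not part of the patch -- just a minimal user-space sketch of the DTO
ratio that the dccg32_set_dtbclk_dto() hunk below programs
(phase / modulo = dtbclk / dtbclk ref, with the DTO output fixed at
1/4 of the pixel rate on dcn32):

#include <assert.h>
#include <stdint.h>

/* Illustrative only: mirrors the dcn32 DTO math in the hunks below. */
static void compute_dtb_dto(uint32_t req_dtbclk_khz, uint32_t ref_dtbclk_khz,
			    uint32_t *phase, uint32_t *modulo)
{
	*modulo = ref_dtbclk_khz * 1000;
	*phase = req_dtbclk_khz * 1000;
}

int main(void)
{
	uint32_t phase, modulo;

	/* e.g. a 594000 kHz pixel clock against the 268750 kHz default
	 * ref set in dcn32_clk_mgr_construct()
	 */
	compute_dtb_dto(594000 / 4, 268750, &phase, &modulo);

	/* resulting DTO output = ref * phase / modulo = pixclk / 4 */
	assert((uint64_t)268750 * phase / modulo == 594000 / 4);
	return 0;
}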

Signed-off-by: Alvin Lee <Alvin.Lee2@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c  |  8 +-
 .../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h  |  2 +
 .../dc/clk_mgr/dcn315/dcn315_clk_mgr.c        |  5 +-
 .../dc/clk_mgr/dcn316/dcn316_clk_mgr.c        |  3 +-
 .../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c  | 87 +++++++++++++++----
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c  | 18 ++++
 .../dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h  |  1 +
 drivers/gpu/drm/amd/display/dc/dc.h           |  3 +
 .../display/dc/dce110/dce110_hw_sequencer.c   |  6 ++
 .../gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c | 79 ++++++++---------
 .../gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h |  2 +-
 .../gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c | 52 +++++------
 .../drm/amd/display/dc/dcn32/dcn32_resource.c | 12 ++-
 .../amd/display/dc/dcn321/dcn321_resource.c   | 11 ++-
 .../gpu/drm/amd/display/dc/inc/hw/clk_mgr.h   |  1 +
 drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h  | 21 +++--
 .../amd/display/dc/link/link_hwss_hpo_dp.c    | 19 ++--
 17 files changed, 221 insertions(+), 109 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
index ceb34376decb..051322580014 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
@@ -638,8 +638,14 @@ static void dcn31_set_low_power_state(struct clk_mgr *clk_mgr_base)
 	}
 }
 
+int dcn31_get_dtb_ref_freq_khz(struct clk_mgr *clk_mgr_base)
+{
+	return clk_mgr_base->clks.ref_dtbclk_khz;
+}
+
 static struct clk_mgr_funcs dcn31_funcs = {
 	.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+	.get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
 	.update_clocks = dcn31_update_clocks,
 	.init_clocks = dcn31_init_clocks,
 	.enable_pme_wa = dcn31_enable_pme_wa,
@@ -719,7 +725,7 @@ void dcn31_clk_mgr_construct(
 	}
 
 	clk_mgr->base.base.dprefclk_khz = 600000;
-	clk_mgr->base.dccg->ref_dtbclk_khz = 600000;
+	clk_mgr->base.base.clks.ref_dtbclk_khz = 600000;
 	dce_clock_read_ss_info(&clk_mgr->base);
 	/*if bios enabled SS, driver needs to adjust dtb clock, only enable with correct bios*/
 	//clk_mgr->base.dccg->ref_dtbclk_khz = dce_adjust_dp_ref_freq_for_ss(clk_mgr_internal, clk_mgr->base.base.dprefclk_khz);
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h
index 961b10a49486..be06fdbd0c22 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.h
@@ -51,6 +51,8 @@ void dcn31_clk_mgr_construct(struct dc_context *ctx,
 		struct pp_smu_funcs *pp_smu,
 		struct dccg *dccg);
 
+int dcn31_get_dtb_ref_freq_khz(struct clk_mgr *clk_mgr_base);
+
 void dcn31_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr_int);
 
 #endif //__DCN31_CLK_MGR_H__
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
index ff8f60f38b21..fb4ae800e919 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
@@ -578,6 +578,7 @@ static void dcn315_enable_pme_wa(struct clk_mgr *clk_mgr_base)
 
 static struct clk_mgr_funcs dcn315_funcs = {
 	.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+	.get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
 	.update_clocks = dcn315_update_clocks,
 	.init_clocks = dcn31_init_clocks,
 	.enable_pme_wa = dcn315_enable_pme_wa,
@@ -654,9 +655,9 @@ void dcn315_clk_mgr_construct(
 
 	clk_mgr->base.base.dprefclk_khz = 600000;
 	clk_mgr->base.base.dprefclk_khz = dcn315_smu_get_dpref_clk(&clk_mgr->base);
-	clk_mgr->base.dccg->ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
+	clk_mgr->base.base.clks.ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
 	dce_clock_read_ss_info(&clk_mgr->base);
-	clk_mgr->base.dccg->ref_dtbclk_khz = dce_adjust_dp_ref_freq_for_ss(&clk_mgr->base, clk_mgr->base.base.dprefclk_khz);
+	clk_mgr->base.base.clks.ref_dtbclk_khz = dce_adjust_dp_ref_freq_for_ss(&clk_mgr->base, clk_mgr->base.base.dprefclk_khz);
 
 	clk_mgr->base.base.bw_params = &dcn315_bw_params;
 
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
index fc3af81ed6c6..e4bb9c6193b5 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
@@ -571,6 +571,7 @@ static void dcn316_clk_mgr_helper_populate_bw_params(
 static struct clk_mgr_funcs dcn316_funcs = {
 	.enable_pme_wa = dcn316_enable_pme_wa,
 	.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+	.get_dtb_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
 	.update_clocks = dcn316_update_clocks,
 	.init_clocks = dcn31_init_clocks,
 	.are_clock_states_equal = dcn31_are_clock_states_equal,
@@ -685,7 +686,7 @@ void dcn316_clk_mgr_construct(
 
 	clk_mgr->base.base.dprefclk_khz = 600000;
 	clk_mgr->base.base.dprefclk_khz = dcn316_smu_get_dpref_clk(&clk_mgr->base);
- 	clk_mgr->base.dccg->ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
+	clk_mgr->base.base.clks.ref_dtbclk_khz = clk_mgr->base.base.dprefclk_khz;
 	dce_clock_read_ss_info(&clk_mgr->base);
 	/*clk_mgr->base.dccg->ref_dtbclk_khz =
 	dce_adjust_dp_ref_freq_for_ss(&clk_mgr->base, clk_mgr->base.base.dprefclk_khz);*/
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 6a1d7c86e6b7..1db61029481b 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -29,9 +29,11 @@
 #include "dcn32/dcn32_clk_mgr_smu_msg.h"
 #include "dcn20/dcn20_clk_mgr.h"
 #include "dce100/dce_clk_mgr.h"
+#include "dcn31/dcn31_clk_mgr.h"
 #include "reg_helper.h"
 #include "core_types.h"
 #include "dm_helpers.h"
+#include "dc_link_dp.h"
 
 #include "atomfirmware.h"
 #include "smu13_driver_if.h"
@@ -253,9 +255,10 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
 					&clk_mgr_base->bw_params->clk_table.entries[0].socclk_mhz,
 					&num_levels);
 	/* DTBCLK */
-	dcn32_init_single_clock(clk_mgr, PPCLK_DTBCLK,
-			&clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz,
-			&num_levels);
+	if (!clk_mgr->base.ctx->dc->debug.disable_dtb_ref_clk_switch)
+		dcn32_init_single_clock(clk_mgr, PPCLK_DTBCLK,
+				&clk_mgr_base->bw_params->clk_table.entries[0].dtbclk_mhz,
+				&num_levels);
 
 	/* DISPCLK */
 	dcn32_init_single_clock(clk_mgr, PPCLK_DISPCLK,
@@ -270,6 +273,39 @@ void dcn32_init_clocks(struct clk_mgr *clk_mgr_base)
 	dcn32_build_wm_range_table(clk_mgr);
 }
 
+static void dcn32_update_clocks_update_dtb_dto(struct clk_mgr_internal *clk_mgr,
+			struct dc_state *context,
+			int ref_dtbclk_khz)
+{
+	struct dccg *dccg = clk_mgr->dccg;
+	uint32_t tg_mask = 0;
+	int i;
+
+	for (i = 0; i < clk_mgr->base.ctx->dc->res_pool->pipe_count; i++) {
+		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+		struct dtbclk_dto_params dto_params = {0};
+
+		/* use mask to program DTO once per tg */
+		if (pipe_ctx->stream_res.tg &&
+				!(tg_mask & (1 << pipe_ctx->stream_res.tg->inst))) {
+			tg_mask |= (1 << pipe_ctx->stream_res.tg->inst);
+
+			dto_params.otg_inst = pipe_ctx->stream_res.tg->inst;
+			dto_params.ref_dtbclk_khz = ref_dtbclk_khz;
+
+			if (is_dp_128b_132b_signal(pipe_ctx)) {
+				dto_params.pixclk_khz = pipe_ctx->stream->phy_pix_clk;
+
+				if (pipe_ctx->stream_res.audio != NULL)
+					dto_params.req_audio_dtbclk_khz = 24000;
+			}
+
+			dccg->funcs->set_dtbclk_dto(clk_mgr->dccg, &dto_params);
+			//dccg->funcs->set_audio_dtbclk_dto(clk_mgr->dccg, &dto_params);
+		}
+	}
+}
+
 static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 			struct dc_state *context,
 			bool safe_to_lower)
@@ -320,7 +356,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 
 		if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
 			clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
 		}
 
 		if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz)) {
@@ -350,7 +386,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 
 			/* to disable P-State switching, set UCLK min = max */
 			if (!clk_mgr_base->clks.p_state_change_support)
-				dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+				dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
 						clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
 		}
 
@@ -379,7 +415,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 		/* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
 		if (clk_mgr_base->clks.p_state_change_support &&
 				(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
 
 		if (clk_mgr_base->ctx->dce_version != DCN_VERSION_3_21 && clk_mgr_base->clks.fclk_p_state_change_support && update_fclk) {
 			/* Handle the code for sending a message to PMFW that FCLK P-state change is supported */
@@ -400,7 +436,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 		clk_mgr_base->clks.dppclk_khz = new_clocks->dppclk_khz;
 
 		if (clk_mgr->smu_present)
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
 
 		update_dppclk = true;
 	}
@@ -409,11 +445,25 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
 		clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
 
 		if (clk_mgr->smu_present)
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
 
 		update_dispclk = true;
 	}
 
+	if (!new_clocks->dtbclk_en) {
+		new_clocks->ref_dtbclk_khz = 0;
+	}
+
+	/* clock limits are received with MHz precision, divide by 1000 to prevent setting clocks at every call */
+	if (!dc->debug.disable_dtb_ref_clk_switch &&
+			should_set_clock(safe_to_lower, new_clocks->ref_dtbclk_khz / 1000, clk_mgr_base->clks.ref_dtbclk_khz / 1000)) {
+		/* DCCG requires KHz precision for DTBCLK */
+		clk_mgr_base->clks.ref_dtbclk_khz =
+				dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DTBCLK, khz_to_mhz_ceil(new_clocks->ref_dtbclk_khz));
+
+		dcn32_update_clocks_update_dtb_dto(clk_mgr, context, clk_mgr_base->clks.ref_dtbclk_khz);
+	}
+
 	if (dc->config.forced_clocks == false || (force_reset && safe_to_lower)) {
 		if (dpp_clock_lowered) {
 			/* if clock is being lowered, increase DTO before lowering refclk */
@@ -502,13 +552,13 @@ static void dcn32_set_hard_min_memclk(struct clk_mgr *clk_mgr_base, bool current
 
 	if (current_mode) {
 		if (clk_mgr_base->clks.p_state_change_support)
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
 					khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
 		else
-			dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+			dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
 					clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
 	} else {
-		dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
+		dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
 				clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
 	}
 }
@@ -584,7 +634,7 @@ static bool dcn32_is_smu_present(struct clk_mgr *clk_mgr_base)
 
 
 static struct clk_mgr_funcs dcn32_funcs = {
-		.get_dp_ref_clk_frequency = dce12_get_dp_ref_freq_khz,
+		.get_dp_ref_clk_frequency = dcn31_get_dtb_ref_freq_khz,
 		.update_clocks = dcn32_update_clocks,
 		.init_clocks = dcn32_init_clocks,
 		.notify_wm_ranges = dcn32_notify_wm_ranges,
@@ -627,7 +677,13 @@ void dcn32_clk_mgr_construct(
 	 * dprefclk DID divider
 	 */
 	clk_mgr->base.dprefclk_khz = 716666;
-	clk_mgr->dccg->ref_dtbclk_khz = 268750;
+	if (ctx->dc->debug.disable_dtb_ref_clk_switch) {
+		//initialize DTB ref clock value if DPM disabled
+		if (ctx->dce_version == DCN_VERSION_3_21)
+			clk_mgr->base.clks.ref_dtbclk_khz = 477800;
+		else
+			clk_mgr->base.clks.ref_dtbclk_khz = 268750;
+	}
 
 	/* integer part is now VCO frequency in kHz */
 	clk_mgr->base.dentist_vco_freq_khz = 4300000;//dcn32_get_vco_frequency_from_reg(clk_mgr);
@@ -635,8 +691,9 @@ void dcn32_clk_mgr_construct(
 	if (clk_mgr->base.dentist_vco_freq_khz == 0)
 		clk_mgr->base.dentist_vco_freq_khz = 4300000; /* Updated as per HW docs */
 
-	if (clk_mgr->dccg->ref_dtbclk_khz != clk_mgr->base.boot_snapshot.dtbclk) {
-		clk_mgr->dccg->ref_dtbclk_khz = clk_mgr->base.boot_snapshot.dtbclk;
+	if (ctx->dc->debug.disable_dtb_ref_clk_switch &&
+			clk_mgr->base.clks.ref_dtbclk_khz != clk_mgr->base.boot_snapshot.dtbclk) {
+		clk_mgr->base.clks.ref_dtbclk_khz = clk_mgr->base.boot_snapshot.dtbclk;
 	}
 
 	if (clk_mgr->base.boot_snapshot.dprefclk != 0) {
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
index 95ab268e9179..d7c99e9179be 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.c
@@ -114,3 +114,21 @@ void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr)
 	dcn32_smu_send_msg_with_param(clk_mgr,
 			DALSMC_MSG_TransferTableDram2Smu, TABLE_WATERMARKS, NULL);
 }
+
+/* Returns the actual frequency that was set in MHz, 0 on failure */
+unsigned int dcn32_smu_set_hard_min_by_freq(struct clk_mgr_internal *clk_mgr, uint32_t clk, uint16_t freq_mhz)
+{
+	uint32_t response = 0;
+
+	/* bits 23:16 for clock type, lower 16 bits for frequency in MHz */
+	uint32_t param = (clk << 16) | freq_mhz;
+
+	smu_print("SMU Set hard min by freq: clk = %d, freq_mhz = %d MHz\n", clk, freq_mhz);
+
+	dcn32_smu_send_msg_with_param(clk_mgr,
+			DALSMC_MSG_SetHardMinByFreq, param, &response);
+
+	smu_print("SMU Frequency set = %d KHz\n", response);
+
+	return response;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
index 5f69cdcb9885..352435edbd79 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr_smu_msg.h
@@ -41,5 +41,6 @@ dcn32_smu_send_fclk_pstate_message(struct clk_mgr_internal *clk_mgr, bool enable
 void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
 void dcn32_smu_send_cab_for_uclk_message(struct clk_mgr_internal *clk_mgr, unsigned int num_ways);
 void dcn32_smu_transfer_wm_table_dram_2_smu(struct clk_mgr_internal *clk_mgr);
+unsigned int dcn32_smu_set_hard_min_by_freq(struct clk_mgr_internal *clk_mgr, uint32_t clk, uint16_t freq_mhz);
 
 #endif /* __DCN32_CLK_MGR_SMU_MSG_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index e25e91bfe763..01e12b0b7c40 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -430,6 +430,7 @@ struct dc_clocks {
 	bool p_state_change_support;
 	enum dcn_zstate_support_state zstate_support;
 	bool dtbclk_en;
+	int ref_dtbclk_khz;
 	bool fclk_p_state_change_support;
 	enum dcn_pwr_state pwr_state;
 	/*
@@ -737,6 +738,8 @@ struct dc_debug_options {
 	bool force_disable_subvp;
 	bool force_subvp_mclk_switch;
 	bool force_usr_allow;
+	/* uses value at boot and disables switch */
+	bool disable_dtb_ref_clk_switch;
 	bool apply_vendor_specific_lttpr_wa;
 	bool extended_blank_optimization;
 	union aux_wake_wa_options aux_wake_wa;
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index 7eff7811769d..28a7be5d2370 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -2181,6 +2181,8 @@ static void dce110_setup_audio_dto(
 			build_audio_output(context, pipe_ctx, &audio_output);
 
 			if (dc->res_pool->dccg && dc->res_pool->dccg->funcs->set_audio_dtbclk_dto) {
+				struct dtbclk_dto_params dto_params = {0};
+				dto_params.ref_dtbclk_khz = dc->clk_mgr->funcs->get_dtb_ref_clk_frequency(dc->clk_mgr);
 				/* disable audio DTBCLK DTO */
 				dc->res_pool->dccg->funcs->set_audio_dtbclk_dto(
 					dc->res_pool->dccg, 0);
@@ -2190,6 +2192,10 @@ static void dce110_setup_audio_dto(
 						pipe_ctx->stream->signal,
 						&audio_output.crtc_info,
 						&audio_output.pll_info);
+
+				dc->res_pool->dccg->funcs->set_audio_dtbclk_dto(
+					dc->res_pool->dccg,
+					&dto_params);
 			} else
 				pipe_ctx->stream_res.audio->funcs->wall_dto_setup(
 					pipe_ctx->stream_res.audio,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
index 20f11fb8f5a6..8eeb3b69b5b9 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
@@ -512,13 +512,10 @@ void dccg31_set_physymclk(
 /* Controls the generation of pixel valid for OTG in (OTG -> HPO case) */
 static void dccg31_set_dtbclk_dto(
 		struct dccg *dccg,
-		int otg_inst,
-		int pixclk_khz,
-		int num_odm_segments,
-		const struct dc_crtc_timing *timing)
+		const struct dtbclk_dto_params *params)
 {
 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
-	int req_dtbclk_khz = pixclk_khz;
+	int req_dtbclk_khz = params->pixclk_khz;
 	uint32_t dtbdto_div;
 
 	/* Mode	                DTBDTO Rate       DTBCLK_DTO<x>_DIV Register
@@ -529,73 +526,69 @@ static void dccg31_set_dtbclk_dto(
 	 * DSC native 4:2:2     pixel rate/2      4
 	 * Other modes          pixel rate        8
 	 */
-	if (num_odm_segments == 4) {
+	if (params->num_odm_segments == 4) {
 		dtbdto_div = 2;
-		req_dtbclk_khz = pixclk_khz / 4;
-	} else if ((num_odm_segments == 2) ||
-			(timing->pixel_encoding == PIXEL_ENCODING_YCBCR420) ||
-			(timing->flags.DSC && timing->pixel_encoding == PIXEL_ENCODING_YCBCR422
-					&& !timing->dsc_cfg.ycbcr422_simple)) {
+		req_dtbclk_khz = params->pixclk_khz / 4;
+	} else if ((params->num_odm_segments == 2) ||
+			(params->timing->pixel_encoding == PIXEL_ENCODING_YCBCR420) ||
+			(params->timing->flags.DSC && params->timing->pixel_encoding == PIXEL_ENCODING_YCBCR422
+					&& !params->timing->dsc_cfg.ycbcr422_simple)) {
 		dtbdto_div = 4;
-		req_dtbclk_khz = pixclk_khz / 2;
+		req_dtbclk_khz = params->pixclk_khz / 2;
 	} else
 		dtbdto_div = 8;
 
-	if (dccg->ref_dtbclk_khz && req_dtbclk_khz) {
+	if (params->ref_dtbclk_khz && req_dtbclk_khz) {
 		uint32_t modulo, phase;
 
 		// phase / modulo = dtbclk / dtbclk ref
-		modulo = dccg->ref_dtbclk_khz * 1000;
-		phase = div_u64((((unsigned long long)modulo * req_dtbclk_khz) + dccg->ref_dtbclk_khz - 1),
-			dccg->ref_dtbclk_khz);
+		modulo = params->ref_dtbclk_khz * 1000;
+		phase = div_u64((((unsigned long long)modulo * req_dtbclk_khz) + params->ref_dtbclk_khz - 1),
+				params->ref_dtbclk_khz);
 
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLK_DTO_DIV[otg_inst], dtbdto_div);
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLK_DTO_DIV[params->otg_inst], dtbdto_div);
 
-		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], modulo);
-		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], phase);
+		REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], modulo);
+		REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], phase);
 
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLK_DTO_ENABLE[otg_inst], 1);
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLK_DTO_ENABLE[params->otg_inst], 1);
 
-		REG_WAIT(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLKDTO_ENABLE_STATUS[otg_inst], 1,
+		REG_WAIT(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLKDTO_ENABLE_STATUS[params->otg_inst], 1,
 				1, 100);
 
 		/* The recommended programming sequence to enable DTBCLK DTO to generate
 		 * valid pixel HPO DPSTREAM ENCODER, specifies that DTO source select should
 		 * be set only after DTO is enabled
 		 */
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
-				PIPE_DTO_SRC_SEL[otg_inst], 1);
-
-		dccg->dtbclk_khz[otg_inst] = req_dtbclk_khz;
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				PIPE_DTO_SRC_SEL[params->otg_inst], 1);
 	} else {
-		REG_UPDATE_3(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLK_DTO_ENABLE[otg_inst], 0,
-				PIPE_DTO_SRC_SEL[otg_inst], 0,
-				DTBCLK_DTO_DIV[otg_inst], dtbdto_div);
-
-		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], 0);
-		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], 0);
+		REG_UPDATE_3(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLK_DTO_ENABLE[params->otg_inst], 0,
+				PIPE_DTO_SRC_SEL[params->otg_inst], 0,
+				DTBCLK_DTO_DIV[params->otg_inst], dtbdto_div);
 
-		dccg->dtbclk_khz[otg_inst] = 0;
+		REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0);
+		REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0);
 	}
 }
 
 void dccg31_set_audio_dtbclk_dto(
 		struct dccg *dccg,
-		uint32_t req_audio_dtbclk_khz)
+		const struct dtbclk_dto_params *params)
 {
 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
 
-	if (dccg->ref_dtbclk_khz && req_audio_dtbclk_khz) {
+	if (params->ref_dtbclk_khz && params->req_audio_dtbclk_khz) {
 		uint32_t modulo, phase;
 
 		// phase / modulo = dtbclk / dtbclk ref
-		modulo = dccg->ref_dtbclk_khz * 1000;
-		phase = div_u64((((unsigned long long)modulo * req_audio_dtbclk_khz) + dccg->ref_dtbclk_khz - 1),
-			dccg->ref_dtbclk_khz);
+		modulo = params->ref_dtbclk_khz * 1000;
+		phase = div_u64((((unsigned long long)modulo * params->req_audio_dtbclk_khz) + params->ref_dtbclk_khz - 1),
+			params->ref_dtbclk_khz);
 
 
 		REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_MODULO, modulo);
@@ -606,16 +599,12 @@ void dccg31_set_audio_dtbclk_dto(
 
 		REG_UPDATE(DCCG_AUDIO_DTO_SOURCE,
 				DCCG_AUDIO_DTO_SEL, 4);  //  04 - DCCG_AUDIO_DTO_SEL_AUDIO_DTO_DTBCLK
-
-		dccg->audio_dtbclk_khz = req_audio_dtbclk_khz;
 	} else {
 		REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_PHASE, 0);
 		REG_WRITE(DCCG_AUDIO_DTBCLK_DTO_MODULO, 0);
 
 		REG_UPDATE(DCCG_AUDIO_DTO_SOURCE,
 				DCCG_AUDIO_DTO_SEL, 3);  //  03 - DCCG_AUDIO_DTO_SEL_NO_AUDIO_DTO
-
-		dccg->audio_dtbclk_khz = 0;
 	}
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
index f31cced5f441..80bd80707991 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.h
@@ -192,6 +192,6 @@ void dccg31_set_physymclk(
 
 void dccg31_set_audio_dtbclk_dto(
 		struct dccg *dccg,
-		uint32_t req_audio_dtbclk_khz);
+		const struct dtbclk_dto_params *params);
 
 #endif //__DCN31_DCCG_H__
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
index 828a72c0720c..08232d05d894 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
@@ -135,61 +135,61 @@ static void dccg32_set_dtbclk_p_src(
 /* Controls the generation of pixel valid for OTG in (OTG -> HPO case) */
 void dccg32_set_dtbclk_dto(
 		struct dccg *dccg,
-		int otg_inst,
-		int pixclk_khz,
-		int num_odm_segments,
-		const struct dc_crtc_timing *timing)
+		const struct dtbclk_dto_params *params)
 {
 	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
 	/* DTO Output Rate / Pixel Rate = 1/4 */
-	int req_dtbclk_khz = pixclk_khz / 4;
+	int req_dtbclk_khz = params->pixclk_khz / 4;
 
-	if (dccg->ref_dtbclk_khz && req_dtbclk_khz) {
+	if (params->ref_dtbclk_khz && req_dtbclk_khz) {
 		uint32_t modulo, phase;
 
 		// phase / modulo = dtbclk / dtbclk ref
-		modulo = dccg->ref_dtbclk_khz * 1000;
+		modulo = params->ref_dtbclk_khz * 1000;
 		phase = req_dtbclk_khz * 1000;
 
-		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], modulo);
-		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], phase);
+		REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], modulo);
+		REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], phase);
 
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLK_DTO_ENABLE[otg_inst], 1);
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLK_DTO_ENABLE[params->otg_inst], 1);
 
-		REG_WAIT(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLKDTO_ENABLE_STATUS[otg_inst], 1,
+		REG_WAIT(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLKDTO_ENABLE_STATUS[params->otg_inst], 1,
 				1, 100);
 
 		/* program OTG_PIXEL_RATE_DIV for DIVK1 and DIVK2 fields */
-		dccg32_set_pixel_rate_div(dccg, otg_inst, PIXEL_RATE_DIV_BY_1, PIXEL_RATE_DIV_BY_1);
+		dccg32_set_pixel_rate_div(dccg, params->otg_inst, PIXEL_RATE_DIV_BY_1, PIXEL_RATE_DIV_BY_1);
 
 		/* The recommended programming sequence to enable DTBCLK DTO to generate
 		 * valid pixel HPO DPSTREAM ENCODER, specifies that DTO source select should
 		 * be set only after DTO is enabled
 		 */
-		REG_UPDATE(OTG_PIXEL_RATE_CNTL[otg_inst],
-				PIPE_DTO_SRC_SEL[otg_inst], 2);
-
-		dccg->dtbclk_khz[otg_inst] = req_dtbclk_khz;
+		REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				PIPE_DTO_SRC_SEL[params->otg_inst], 2);
 	} else {
-		REG_UPDATE_2(OTG_PIXEL_RATE_CNTL[otg_inst],
-				DTBCLK_DTO_ENABLE[otg_inst], 0,
-				PIPE_DTO_SRC_SEL[otg_inst], 1);
-
-		REG_WRITE(DTBCLK_DTO_MODULO[otg_inst], 0);
-		REG_WRITE(DTBCLK_DTO_PHASE[otg_inst], 0);
+		REG_UPDATE_2(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+				DTBCLK_DTO_ENABLE[params->otg_inst], 0,
+				PIPE_DTO_SRC_SEL[params->otg_inst], 1);
 
-		dccg->dtbclk_khz[otg_inst] = 0;
+		REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0);
+		REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0);
 	}
 }
 
 void dccg32_set_valid_pixel_rate(
 		struct dccg *dccg,
+		int ref_dtbclk_khz,
 		int otg_inst,
 		int pixclk_khz)
 {
-	dccg32_set_dtbclk_dto(dccg, otg_inst, pixclk_khz, 0, NULL);
+	struct dtbclk_dto_params dto_params = {0};
+
+	dto_params.ref_dtbclk_khz = ref_dtbclk_khz;
+	dto_params.otg_inst = otg_inst;
+	dto_params.pixclk_khz = pixclk_khz;
+
+	dccg32_set_dtbclk_dto(dccg, &dto_params);
 }
 
 static void dccg32_get_dccg_ref_freq(struct dccg *dccg,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index b492eb424d19..1ea6d258a20d 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -3306,6 +3306,7 @@ void dcn32_calculate_dlg_params(struct dc *dc, struct dc_state *context, display
 	// context->bw_ctx.bw.dcn.clk.p_state_change_support |= context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching;
 	context->bw_ctx.bw.dcn.clk.dppclk_khz = 0;
 	context->bw_ctx.bw.dcn.clk.dtbclk_en = is_dtbclk_required(dc, context);
+	context->bw_ctx.bw.dcn.clk.ref_dtbclk_khz = context->bw_ctx.dml.vba.DTBCLKPerState[vlevel] * 1000;
 	if (context->bw_ctx.dml.vba.FCLKChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] == dm_fclock_change_unsupported)
 		context->bw_ctx.bw.dcn.clk.fclk_p_state_change_support = false;
 	else
@@ -3560,10 +3561,15 @@ static void dcn32_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw
 			dcn3_2_soc.clock_limits[i].dscclk_mhz  = max_dispclk_mhz / 3;
 
 			/* Populate from bw_params for DTBCLK, SOCCLK */
-			if (!bw_params->clk_table.entries[i].dtbclk_mhz && i > 0)
-				dcn3_2_soc.clock_limits[i].dtbclk_mhz  = dcn3_2_soc.clock_limits[i-1].dtbclk_mhz;
-			else
+			if (i > 0) {
+				if (!bw_params->clk_table.entries[i].dtbclk_mhz) {
+					dcn3_2_soc.clock_limits[i].dtbclk_mhz  = dcn3_2_soc.clock_limits[i-1].dtbclk_mhz;
+				} else {
+					dcn3_2_soc.clock_limits[i].dtbclk_mhz  = bw_params->clk_table.entries[i].dtbclk_mhz;
+				}
+			} else if (bw_params->clk_table.entries[i].dtbclk_mhz) {
 				dcn3_2_soc.clock_limits[i].dtbclk_mhz  = bw_params->clk_table.entries[i].dtbclk_mhz;
+			}
 
 			if (!bw_params->clk_table.entries[i].socclk_mhz && i > 0)
 				dcn3_2_soc.clock_limits[i].socclk_mhz = dcn3_2_soc.clock_limits[i-1].socclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index 780409eb0e98..48af91affb0c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -1910,10 +1910,15 @@ static void dcn321_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *b
 			dcn3_21_soc.clock_limits[i].dscclk_mhz  = max_dispclk_mhz / 3;
 
 			/* Populate from bw_params for DTBCLK, SOCCLK */
-			if (!bw_params->clk_table.entries[i].dtbclk_mhz && i > 0)
-				dcn3_21_soc.clock_limits[i].dtbclk_mhz  = dcn3_21_soc.clock_limits[i-1].dtbclk_mhz;
-			else
+			if (i > 0) {
+				if (!bw_params->clk_table.entries[i].dtbclk_mhz) {
+					dcn3_21_soc.clock_limits[i].dtbclk_mhz = dcn3_21_soc.clock_limits[i-1].dtbclk_mhz;
+				} else {
+					dcn3_21_soc.clock_limits[i].dtbclk_mhz = bw_params->clk_table.entries[i].dtbclk_mhz;
+				}
+			} else if (bw_params->clk_table.entries[i].dtbclk_mhz) {
 				dcn3_21_soc.clock_limits[i].dtbclk_mhz  = bw_params->clk_table.entries[i].dtbclk_mhz;
+			}
 
 			if (!bw_params->clk_table.entries[i].socclk_mhz && i > 0)
 				dcn3_21_soc.clock_limits[i].socclk_mhz = dcn3_21_soc.clock_limits[i-1].socclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
index a89d219e712c..4dd461e6c14b 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
@@ -239,6 +239,7 @@ struct clk_mgr_funcs {
 			bool safe_to_lower);
 
 	int (*get_dp_ref_clk_frequency)(struct clk_mgr *clk_mgr);
+	int (*get_dtb_ref_clk_frequency)(struct clk_mgr *clk_mgr);
 
 	void (*set_low_power_state)(struct clk_mgr *clk_mgr);
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
index a06c8daed5ed..025169457cc8 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dccg.h
@@ -68,8 +68,17 @@ struct dccg {
 	const struct dccg_funcs *funcs;
 	int pipe_dppclk_khz[MAX_PIPES];
 	int ref_dppclk;
-	int dtbclk_khz[MAX_PIPES];
-	int audio_dtbclk_khz;
+	//int dtbclk_khz[MAX_PIPES];/* TODO needs to be removed */
+	//int audio_dtbclk_khz;/* TODO needs to be removed */
+	//int ref_dtbclk_khz;/* TODO needs to be removed */
+};
+
+struct dtbclk_dto_params {
+	const struct dc_crtc_timing *timing;
+	int otg_inst;
+	int pixclk_khz;
+	int req_audio_dtbclk_khz;
+	int num_odm_segments;
 	int ref_dtbclk_khz;
 };
 
@@ -119,14 +128,11 @@ struct dccg_funcs {
 
 	void (*set_dtbclk_dto)(
 			struct dccg *dccg,
-			int otg_inst,
-			int pixclk_khz,
-			int num_odm_segments,
-			const struct dc_crtc_timing *timing);
+			const struct dtbclk_dto_params *params);
 
 	void (*set_audio_dtbclk_dto)(
 			struct dccg *dccg,
-			uint32_t req_audio_dtbclk_khz);
+			const struct dtbclk_dto_params *params);
 
 	void (*set_dispclk_change_mode)(
 			struct dccg *dccg,
@@ -148,6 +154,7 @@ void (*set_pixel_rate_div)(
 
 void (*set_valid_pixel_rate)(
         struct dccg *dccg,
+	int ref_dtbclk_khz,
         int otg_inst,
         int pixclk_khz);
 
diff --git a/drivers/gpu/drm/amd/display/dc/link/link_hwss_hpo_dp.c b/drivers/gpu/drm/amd/display/dc/link/link_hwss_hpo_dp.c
index 87972dc8443d..ea6cf8bfce30 100644
--- a/drivers/gpu/drm/amd/display/dc/link/link_hwss_hpo_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/link/link_hwss_hpo_dp.c
@@ -27,6 +27,7 @@
 #include "core_types.h"
 #include "dccg.h"
 #include "dc_link_dp.h"
+#include "clk_mgr.h"
 
 static enum phyd32clk_clock_source get_phyd32clk_src(struct dc_link *link)
 {
@@ -106,14 +107,18 @@ static void setup_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
 	struct hpo_dp_link_encoder *link_enc = pipe_ctx->link_res.hpo_dp_link_enc;
 	struct dccg *dccg = dc->res_pool->dccg;
 	struct timing_generator *tg = pipe_ctx->stream_res.tg;
-	int odm_segment_count = get_odm_segment_count(pipe_ctx);
+	struct dtbclk_dto_params dto_params = {0};
 	enum phyd32clk_clock_source phyd32clk = get_phyd32clk_src(pipe_ctx->stream->link);
 
+	dto_params.otg_inst = tg->inst;
+	dto_params.pixclk_khz = pipe_ctx->stream->phy_pix_clk;
+	dto_params.num_odm_segments = get_odm_segment_count(pipe_ctx);
+	dto_params.timing = &pipe_ctx->stream->timing;
+	dto_params.ref_dtbclk_khz = dc->clk_mgr->funcs->get_dtb_ref_clk_frequency(dc->clk_mgr);
+
 	dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst);
 	dccg->funcs->enable_symclk32_se(dccg, stream_enc->inst, phyd32clk);
-	dccg->funcs->set_dtbclk_dto(dccg, tg->inst, pipe_ctx->stream->phy_pix_clk,
-			odm_segment_count,
-			&pipe_ctx->stream->timing);
+	dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
 	stream_enc->funcs->enable_stream(stream_enc);
 	stream_enc->funcs->map_stream_to_link(stream_enc, stream_enc->inst, link_enc->inst);
 }
@@ -124,9 +129,13 @@ static void reset_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
 	struct hpo_dp_stream_encoder *stream_enc = pipe_ctx->stream_res.hpo_dp_stream_enc;
 	struct dccg *dccg = dc->res_pool->dccg;
 	struct timing_generator *tg = pipe_ctx->stream_res.tg;
+	struct dtbclk_dto_params dto_params = {0};
+
+	dto_params.otg_inst = tg->inst;
+	dto_params.timing = &pipe_ctx->stream->timing;
 
 	stream_enc->funcs->disable(stream_enc);
-	dccg->funcs->set_dtbclk_dto(dccg, tg->inst, 0, 0, &pipe_ctx->stream->timing);
+	dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
 	dccg->funcs->disable_symclk32_se(dccg, stream_enc->inst);
 	dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst);
 }
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 42/43] drm/amd/display: Add ODM seamless boot support
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (39 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 41/43] drm/amd/display: Implement DTBCLK ref switching on dcn32 Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  2022-05-25 16:19 ` [PATCH 43/43] drm/amd/display: Drop unnecessary guard from DC resource Alex Deucher
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Duncan Ma

From: Duncan Ma <duncan.ma@amd.com>

Revise the validation logic used when marking for seamless boot. Init
resources accordingly when pre-OS has ODM enabled. Reset ODM when
transitioning from pre-OS ODM to post-OS non-ODM to avoid corruption.
Apply logic to set ODM accordingly upon commit.
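
A hypothetical, self-contained model of the boot-time decision described
above (none of these names exist in DC; the real changes read the OPTC
source select and add an init_odm hook, as in the hunks below):

#include <stdbool.h>
#include <stdio.h>

struct boot_state {
	bool tg_enabled;       /* pre-OS left the timing generator running */
	int  odm_segments;     /* OPTC input segments programmed by pre-OS */
	bool driver_wants_odm; /* post-OS stream keeps ODM combine */
};

/* Reset ODM when taking over a pre-OS ODM config that the post-OS
 * stream will not keep, to avoid corruption during the transition.
 */
static bool must_reset_odm(const struct boot_state *s)
{
	return s->tg_enabled && s->odm_segments > 1 && !s->driver_wants_odm;
}

int main(void)
{
	struct boot_state s = { .tg_enabled = true, .odm_segments = 2,
				.driver_wants_odm = false };

	printf("reset ODM on takeover: %s\n",
	       must_reset_odm(&s) ? "yes" : "no");
	return 0;
}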

Signed-off-by: Duncan Ma <duncan.ma@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/display/dc/core/dc_resource.c |  9 +++++++-
 .../amd/display/dc/dcn10/dcn10_hw_sequencer.c |  5 +++++
 .../gpu/drm/amd/display/dc/dcn31/dcn31_optc.c | 21 +++++++++++++++++++
 .../drm/amd/display/dc/dcn32/dcn32_hwseq.c    |  4 ++--
 .../amd/display/dc/inc/hw/timing_generator.h  |  2 ++
 5 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index b087452e4590..7072b79d1207 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -2165,7 +2165,7 @@ static int acquire_resource_from_hw_enabled_state(
 
 		if (pipe_ctx->stream_res.tg->funcs->get_optc_source)
 			pipe_ctx->stream_res.tg->funcs->get_optc_source(pipe_ctx->stream_res.tg,
-					&numPipes, &id_src[0], &id_src[1]);
+						&numPipes, &id_src[0], &id_src[1]);
 
 		if (id_src[0] == 0xf && id_src[1] == 0xf) {
 			id_src[0] = tg_inst;
@@ -2177,6 +2177,8 @@ static int acquire_resource_from_hw_enabled_state(
 			if (id_src[i] == 0xf)
 				return -1;
 
+			pipe_ctx = &res_ctx->pipe_ctx[id_src[i]];
+
 			pipe_ctx->stream_res.tg = pool->timing_generators[tg_inst];
 			pipe_ctx->plane_res.mi = pool->mis[id_src[i]];
 			pipe_ctx->plane_res.hubp = pool->hubps[id_src[i]];
@@ -2190,13 +2192,17 @@ static int acquire_resource_from_hw_enabled_state(
 
 				if (pool->mpc->funcs->read_mpcc_state) {
 					struct mpcc_state s = {0};
+
 					pool->mpc->funcs->read_mpcc_state(pool->mpc, pipe_ctx->plane_res.mpcc_inst, &s);
+
 					if (s.dpp_id < MAX_MPCC)
 						pool->mpc->mpcc_array[pipe_ctx->plane_res.mpcc_inst].dpp_id =
 								s.dpp_id;
+
 					if (s.bot_mpcc_id < MAX_MPCC)
 						pool->mpc->mpcc_array[pipe_ctx->plane_res.mpcc_inst].mpcc_bot =
 								&pool->mpc->mpcc_array[s.bot_mpcc_id];
+
 					if (s.opp_id < MAX_OPP)
 						pipe_ctx->stream_res.opp->mpc_tree_params.opp_id = s.opp_id;
 				}
@@ -2205,6 +2211,7 @@ static int acquire_resource_from_hw_enabled_state(
 
 			if (id_src[i] >= pool->timing_generator_count) {
 				id_src[i] = pool->timing_generator_count - 1;
+
 				pipe_ctx->stream_res.tg = pool->timing_generators[id_src[i]];
 				pipe_ctx->stream_res.opp = pool->opps[id_src[i]];
 			}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
index 1cb206dc8352..e3a62873c0e7 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
@@ -1375,6 +1375,11 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
 		pipe_ctx->stream_res.tg = NULL;
 		pipe_ctx->plane_res.hubp = NULL;
 
+		if (tg->funcs->is_tg_enabled(tg)) {
+			if (tg->funcs->init_odm)
+				tg->funcs->init_odm(tg);
+		}
+
 		tg->funcs->tg_init(tg);
 	}
 
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
index 96fac715a77b..c4304f25ce95 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_optc.c
@@ -213,6 +213,26 @@ void optc31_set_drr(
 	}
 }
 
+void optc3_init_odm(struct timing_generator *optc)
+{
+	struct optc *optc1 = DCN10TG_FROM_TG(optc);
+
+	REG_SET_5(OPTC_DATA_SOURCE_SELECT, 0,
+			OPTC_NUM_OF_INPUT_SEGMENT, 0,
+			OPTC_SEG0_SRC_SEL, optc->inst,
+			OPTC_SEG1_SRC_SEL, 0xf,
+			OPTC_SEG2_SRC_SEL, 0xf,
+			OPTC_SEG3_SRC_SEL, 0xf
+			);
+
+	REG_SET(OTG_H_TIMING_CNTL, 0,
+			OTG_H_TIMING_DIV_MODE, 0);
+
+	REG_SET(OPTC_MEMORY_CONFIG, 0,
+			OPTC_MEM_SEL, 0);
+	optc1->opp_count = 1;
+}
+
 static struct timing_generator_funcs dcn31_tg_funcs = {
 		.validate_timing = optc1_validate_timing,
 		.program_timing = optc1_program_timing,
@@ -273,6 +293,7 @@ static struct timing_generator_funcs dcn31_tg_funcs = {
 		.program_manual_trigger = optc2_program_manual_trigger,
 		.setup_manual_trigger = optc2_setup_manual_trigger,
 		.get_hw_timing = optc1_get_hw_timing,
+		.init_odm = optc3_init_odm,
 };
 
 void dcn31_timing_generator_init(struct optc *optc1)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
index 04360553a43f..a10ec5919194 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -657,7 +657,7 @@ void dcn32_init_hw(struct dc *dc)
 	 * Otherwise, if taking control is not possible, we need to power
 	 * everything down.
 	 */
-	if (dcb->funcs->is_accelerated_mode(dcb) || dc->config.seamless_boot_edp_requested) {
+	if (dcb->funcs->is_accelerated_mode(dcb) || !dc->config.seamless_boot_edp_requested) {
 		hws->funcs.init_pipes(dc, dc->current_state);
 		if (dc->res_pool->hubbub->funcs->allow_self_refresh_control)
 			dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
@@ -669,7 +669,7 @@ void dcn32_init_hw(struct dc *dc)
 	 * To avoid this, power down hardware on boot
 	 * if DIG is turned on and seamless boot not enabled
 	 */
-	if (dc->config.seamless_boot_edp_requested) {
+	if (!dc->config.seamless_boot_edp_requested) {
 		struct dc_link *edp_links[MAX_NUM_EDP];
 		struct dc_link *edp_link;
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
index a89b2230cd2c..62d4683f17a2 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
@@ -318,6 +318,8 @@ struct timing_generator_funcs {
 			int vmin, int vmax);
 	bool (*validate_vtotal_change_limit)(struct timing_generator *optc,
 			uint32_t vtotal_change_limit);
+
+	void (*init_odm)(struct timing_generator *tg);
 };
 
 #endif
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 43/43] drm/amd/display: Drop unnecessary guard from DC resource
  2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
                   ` (40 preceding siblings ...)
  2022-05-25 16:19 ` [PATCH 42/43] drm/amd/display: Add ODM seamless boot support Alex Deucher
@ 2022-05-25 16:19 ` Alex Deucher
  41 siblings, 0 replies; 44+ messages in thread
From: Alex Deucher @ 2022-05-25 16:19 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Rodrigo Siqueira

From: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>

Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 7072b79d1207..24a7c481d73d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -1998,7 +1998,6 @@ enum dc_status dc_remove_stream_from_ctx(
 	if (dc->res_pool->funcs->link_enc_unassign)
 		dc->res_pool->funcs->link_enc_unassign(new_ctx, del_pipe->stream);
 
-#if defined(CONFIG_DRM_AMD_DC_DCN)
 	if (is_dp_128b_132b_signal(del_pipe)) {
 		update_hpo_dp_stream_engine_usage(
 			&new_ctx->res_ctx, dc->res_pool,
@@ -2006,7 +2005,6 @@ enum dc_status dc_remove_stream_from_ctx(
 			false);
 		remove_hpo_dp_link_enc_from_ctx(&new_ctx->res_ctx, del_pipe, del_pipe->stream);
 	}
-#endif
 
 	if (del_pipe->stream_res.audio)
 		update_audio_usage(
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH 14/43] drm/amd: Add GFX11 modifiers support to AMDGPU
  2022-05-25 16:19 ` [PATCH 14/43] drm/amd: Add GFX11 modifiers support to AMDGPU Alex Deucher
@ 2022-05-25 19:27   ` Marek Olšák
  0 siblings, 0 replies; 44+ messages in thread
From: Marek Olšák @ 2022-05-25 19:27 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Aurabindo Pillai, amd-gfx mailing list

[-- Attachment #1: Type: text/plain, Size: 13535 bytes --]

On Wed, May 25, 2022 at 12:20 PM Alex Deucher <alexander.deucher@amd.com>
wrote:

> From: Aurabindo Pillai <aurabindo.pillai@amd.com>
>
> GFX11 IP introduces new tiling mode. Various combinations of DCC
> settings are possible and the most preferred settings must be exposed
> for optimal use of the hardware.
>
> add_gfx11_modifiers() is based on recommendation from Marek for the
> preferred tiling modifier that are most efficient for the hardware.
>
> Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
> Acked-by: Alex Deucher <alexander.deucher@amd.com>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   | 40 ++++++++--
>  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 74 ++++++++++++++++++-
>  include/uapi/drm/drm_fourcc.h                 |  2 +
>  3 files changed, 108 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> index ec395fe427f2..a54081a89282 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
> @@ -30,6 +30,9 @@
>  #include "atom.h"
>  #include "amdgpu_connectors.h"
>  #include "amdgpu_display.h"
> +#include "soc15_common.h"
> +#include "gc/gc_11_0_0_offset.h"
> +#include "gc/gc_11_0_0_sh_mask.h"
>  #include <asm/div64.h>
>
>  #include <linux/pci.h>
> @@ -662,6 +665,11 @@ static int convert_tiling_flags_to_modifier(struct
> amdgpu_framebuffer *afb)
>  {
>         struct amdgpu_device *adev = drm_to_adev(afb->base.dev);
>         uint64_t modifier = 0;
> +       int num_pipes = 0;
> +       int num_pkrs = 0;
> +
> +       num_pkrs = adev->gfx.config.gb_addr_config_fields.num_pkrs;
> +       num_pipes = adev->gfx.config.gb_addr_config_fields.num_pipes;
>
>         if (!afb->tiling_flags || !AMDGPU_TILING_GET(afb->tiling_flags,
> SWIZZLE_MODE)) {
>                 modifier = DRM_FORMAT_MOD_LINEAR;
> @@ -674,7 +682,7 @@ static int convert_tiling_flags_to_modifier(struct
> amdgpu_framebuffer *afb)
>                 int bank_xor_bits = 0;
>                 int packers = 0;
>                 int rb = 0;
> -               int pipes =
> ilog2(adev->gfx.config.gb_addr_config_fields.num_pipes);
> +               int pipes = ilog2(num_pipes);
>                 uint32_t dcc_offset = AMDGPU_TILING_GET(afb->tiling_flags,
> DCC_OFFSET_256B);
>
>                 switch (swizzle >> 2) {
> @@ -690,12 +698,17 @@ static int convert_tiling_flags_to_modifier(struct
> amdgpu_framebuffer *afb)
>                 case 6: /* 64 KiB _X */
>                         block_size_bits = 16;
>                         break;
> +               case 7: /* 256 KiB */
> +                       block_size_bits = 18;
> +                       break;
>                 default:
>                         /* RESERVED or VAR */
>                         return -EINVAL;
>                 }
>
> -               if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
> +               if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 0))
> +                       version = AMD_FMT_MOD_TILE_VER_GFX11;
> +               else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10,
> 3, 0))
>                         version = AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS;
>                 else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10,
> 0, 0))
>                         version = AMD_FMT_MOD_TILE_VER_GFX10;
> @@ -718,7 +731,17 @@ static int convert_tiling_flags_to_modifier(struct
> amdgpu_framebuffer *afb)
>                 }
>

The switch statement right above this (not shown in this patch), which
changes "version", should be skipped on >= gfx11. Under no circumstances
should the version be changed on gfx11.

Marek
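
A rough sketch of the guard being described, for illustration only (the
switch that currently rewrites "version" is not shown in this patch, so
its contents are placeholders; the tile-version value matches the
drm_fourcc.h define further down):

#define AMD_FMT_MOD_TILE_VER_GFX11 4	/* value from drm_fourcc.h below */

static int canonicalize_tile_version(int version, int swizzle)
{
	/* Never change the version once GFX11 has been selected. */
	if (version >= AMD_FMT_MOD_TILE_VER_GFX11)
		return version;

	switch (swizzle) {
	/* ... the existing cases that may rewrite "version" ... */
	default:
		return version;
	}
}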


>
>                 if (has_xor) {
> +                       if (num_pipes == num_pkrs && num_pkrs == 0) {
> +                               DRM_ERROR("invalid number of pipes and
> packers\n");
> +                               return -EINVAL;
> +                       }
> +
>                         switch (version) {
> +                       case AMD_FMT_MOD_TILE_VER_GFX11:
> +                               pipe_xor_bits = min(block_size_bits - 8,
> pipes);
> +                               packers = min(block_size_bits - 8 -
> pipe_xor_bits,
> +
>  ilog2(adev->gfx.config.gb_addr_config_fields.num_pkrs));
> +                               break;
>                         case AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS:
>                                 pipe_xor_bits = min(block_size_bits - 8,
> pipes);
>                                 packers = min(block_size_bits - 8 -
> pipe_xor_bits,
> @@ -752,9 +775,10 @@ static int convert_tiling_flags_to_modifier(struct
> amdgpu_framebuffer *afb)
>                         u64 render_dcc_offset;
>
>                         /* Enable constant encode on RAVEN2 and later. */
> -                       bool dcc_constant_encode = adev->asic_type >
> CHIP_RAVEN ||
> +                       bool dcc_constant_encode = (adev->asic_type >
> CHIP_RAVEN ||
>                                                    (adev->asic_type ==
> CHIP_RAVEN &&
> -                                                   adev->external_rev_id
> >= 0x81);
> +                                                   adev->external_rev_id
> >= 0x81)) &&
> +
>  adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0);
>
>                         int max_cblock_size = dcc_i64b ?
> AMD_FMT_MOD_DCC_BLOCK_64B :
>                                               dcc_i128b ?
> AMD_FMT_MOD_DCC_BLOCK_128B :
> @@ -869,10 +893,11 @@ static unsigned int get_dcc_block_size(uint64_t
> modifier, bool rb_aligned,
>                 return max(10 + (rb_aligned ? (int)AMD_FMT_MOD_GET(RB,
> modifier) : 0), 12);
>         }
>         case AMD_FMT_MOD_TILE_VER_GFX10:
> -       case AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS: {
> +       case AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS:
> +       case AMD_FMT_MOD_TILE_VER_GFX11: {
>                 int pipes_log2 = AMD_FMT_MOD_GET(PIPE_XOR_BITS, modifier);
>
> -               if (ver == AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS && pipes_log2
> > 1 &&
> +               if (ver >= AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS && pipes_log2 > 1 &&
>                     AMD_FMT_MOD_GET(PACKERS, modifier) == pipes_log2)
>                         ++pipes_log2;
>
> @@ -965,6 +990,9 @@ static int amdgpu_display_verify_sizes(struct amdgpu_framebuffer *rfb)
>                         case DC_SW_64KB_S_X:
>                                 block_size_log2 = 16;
>                                 break;
> +                       case DC_SW_VAR_S_X:
> +                               block_size_log2 = 18;
> +                               break;
>                         default:
>                                 drm_dbg_kms(rfb->base.dev,
>                                             "Swizzle mode with unknown block size: %d\n", swizzle);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index a93affc37c53..badd136f5686 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -88,10 +88,14 @@
>  #include "dcn/dcn_1_0_offset.h"
>  #include "dcn/dcn_1_0_sh_mask.h"
>  #include "soc15_hw_ip.h"
> +#include "soc15_common.h"
>  #include "vega10_ip_offset.h"
>
>  #include "soc15_common.h"
>
> +#include "gc/gc_11_0_0_offset.h"
> +#include "gc/gc_11_0_0_sh_mask.h"
> +
>  #include "modules/inc/mod_freesync.h"
>  #include "modules/power/power_helpers.h"
>  #include "modules/inc/mod_info_packet.h"
> @@ -4885,7 +4889,9 @@ fill_gfx9_tiling_info_from_modifier(const struct amdgpu_device *adev,
>         unsigned int mod_bank_xor_bits = AMD_FMT_MOD_GET(BANK_XOR_BITS, modifier);
>         unsigned int mod_pipe_xor_bits = AMD_FMT_MOD_GET(PIPE_XOR_BITS, modifier);
>         unsigned int pkrs_log2 = AMD_FMT_MOD_GET(PACKERS, modifier);
> -       unsigned int pipes_log2 = min(4u, mod_pipe_xor_bits);
> +       unsigned int pipes_log2;
> +
> +       pipes_log2 = min(5u, mod_pipe_xor_bits);
>
>         fill_gfx9_tiling_info_from_device(adev, tiling_info);
>
> @@ -5221,6 +5227,67 @@ add_gfx10_3_modifiers(const struct amdgpu_device *adev,
>                     AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX9));
>  }
>
> +static void
> +add_gfx11_modifiers(const struct amdgpu_device *adev,
> +                     uint64_t **mods, uint64_t *size, uint64_t *capacity)
> +{
> +       int num_pipes = 0;
> +       int pipe_xor_bits = 0;
> +       int num_pkrs = 0;
> +       int pkrs = 0;
> +       u32 gb_addr_config;
> +       unsigned swizzle_r_x;
> +       uint64_t modifier_r_x;
> +       uint64_t modifier_dcc_best;
> +       uint64_t modifier_dcc_4k;
> +
> +       /* TODO: GFX11 IP HW init hasnt finish and we get zero if we read from
> +        * adev->gfx.config.gb_addr_config_fields.num_{pkrs,pipes} */
> +       gb_addr_config = RREG32_SOC15(GC, 0, regGB_ADDR_CONFIG);
> +       ASSERT(gb_addr_config != 0);
> +
> +       num_pkrs = 1 << REG_GET_FIELD(gb_addr_config, GB_ADDR_CONFIG, NUM_PKRS);
> +       pkrs = ilog2(num_pkrs);
> +       num_pipes = 1 << REG_GET_FIELD(gb_addr_config, GB_ADDR_CONFIG, NUM_PIPES);
> +       pipe_xor_bits = ilog2(num_pipes);
> +
> +    /* R_X swizzle modes are the best for rendering and DCC requires them. */
> +    swizzle_r_x = num_pipes > 16 ? AMD_FMT_MOD_TILE_GFX11_256K_R_X :
> +                                   AMD_FMT_MOD_TILE_GFX9_64K_R_X;
> +
> +       modifier_r_x = AMD_FMT_MOD |
> +                            AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX11) |
> +                            AMD_FMT_MOD_SET(TILE, swizzle_r_x) |
> +                            AMD_FMT_MOD_SET(PIPE_XOR_BITS, pipe_xor_bits) |
> +                            AMD_FMT_MOD_SET(PACKERS, pkrs);
> +
> +    /* DCC_CONSTANT_ENCODE is not set because it can't vary with gfx11 (it's implied to be 1). */
> +       modifier_dcc_best = modifier_r_x |
> +                            AMD_FMT_MOD_SET(DCC, 1) |
> +                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_64B, 0) |
> +                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_128B, 1) |
> +                            AMD_FMT_MOD_SET(DCC_MAX_COMPRESSED_BLOCK, AMD_FMT_MOD_DCC_BLOCK_128B);
> +
> +    /* DCC settings for 4K and greater resolutions. (required by display hw) */
> +       modifier_dcc_4k = modifier_r_x |
> +                            AMD_FMT_MOD_SET(DCC, 1) |
> +                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_64B, 1) |
> +                            AMD_FMT_MOD_SET(DCC_INDEPENDENT_128B, 1) |
> +                            AMD_FMT_MOD_SET(DCC_MAX_COMPRESSED_BLOCK, AMD_FMT_MOD_DCC_BLOCK_64B);
> +
> +       add_modifier(mods, size, capacity, modifier_dcc_best);
> +       add_modifier(mods, size, capacity, modifier_dcc_4k);
> +
> +       add_modifier(mods, size, capacity, modifier_dcc_best | AMD_FMT_MOD_SET(DCC_RETILE, 1));
> +       add_modifier(mods, size, capacity, modifier_dcc_4k | AMD_FMT_MOD_SET(DCC_RETILE, 1));
> +
> +       add_modifier(mods, size, capacity, modifier_r_x);
> +
> +       add_modifier(mods, size, capacity, AMD_FMT_MOD |
> +             AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX11) |
> +                        AMD_FMT_MOD_SET(TILE, AMD_FMT_MOD_TILE_GFX9_64K_D));
> +}
> +
>  static int
>  get_plane_modifiers(const struct amdgpu_device *adev, unsigned int plane_type, uint64_t **mods)
>  {
> @@ -5254,6 +5321,9 @@ get_plane_modifiers(const struct amdgpu_device *adev, unsigned int plane_type, u
>                 else
>                         add_gfx10_1_modifiers(adev, mods, &size, &capacity);
>                 break;
> +       case AMDGPU_FAMILY_GC_11_0_0:
> +               add_gfx11_modifiers(adev, mods, &size, &capacity);
> +               break;
>         }
>
>         add_modifier(mods, &size, &capacity, DRM_FORMAT_MOD_LINEAR);
> @@ -5292,7 +5362,7 @@ fill_gfx9_plane_attributes_from_modifiers(struct amdgpu_device *adev,
>                 dcc->enable = 1;
>                 dcc->meta_pitch = afb->base.pitches[1];
>                 dcc->independent_64b_blks = independent_64b_blks;
> -               if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) == AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS) {
> +               if (AMD_FMT_MOD_GET(TILE_VERSION, modifier) >= AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS) {
>                         if (independent_64b_blks && independent_128b_blks)
>                                 dcc->dcc_ind_blk = hubp_ind_block_64b_no_128bcl;
>                         else if (independent_128b_blks)
> diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
> index fc0c1454d275..14cb2dafb0fa 100644
> --- a/include/uapi/drm/drm_fourcc.h
> +++ b/include/uapi/drm/drm_fourcc.h
> @@ -1294,6 +1294,7 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
>  #define AMD_FMT_MOD_TILE_VER_GFX9 1
>  #define AMD_FMT_MOD_TILE_VER_GFX10 2
>  #define AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS 3
> +#define AMD_FMT_MOD_TILE_VER_GFX11 4
>
>  /*
>   * 64K_S is the same for GFX9/GFX10/GFX10_RBPLUS and hence has GFX9 as canonical
> @@ -1309,6 +1310,7 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
>  #define AMD_FMT_MOD_TILE_GFX9_64K_S_X 25
>  #define AMD_FMT_MOD_TILE_GFX9_64K_D_X 26
>  #define AMD_FMT_MOD_TILE_GFX9_64K_R_X 27
> +#define AMD_FMT_MOD_TILE_GFX11_256K_R_X 31
>
>  #define AMD_FMT_MOD_DCC_BLOCK_64B 0
>  #define AMD_FMT_MOD_DCC_BLOCK_128B 1
> --
> 2.35.3
>
>
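(Not part of the patch: a minimal userspace sketch of how these modifier bits compose and decode with the AMD_FMT_MOD / AMD_FMT_MOD_SET / AMD_FMT_MOD_GET helpers from drm_fourcc.h, assuming a header that already carries the GFX11 values added above. The pipe and packer counts are made-up placeholders rather than values read from GB_ADDR_CONFIG, and the include path depends on whether you build against the kernel uapi headers or a libdrm copy.)

/*
 * Illustrative only: build a GFX11 R_X + DCC modifier the same way
 * add_gfx11_modifiers() does above, then read the fields back.
 */
#include <stdio.h>
#include <stdint.h>
#include <drm_fourcc.h>   /* kernel uapi header or libdrm copy with the GFX11 additions */

int main(void)
{
        unsigned int pipe_xor_bits = 3;   /* hypothetical: 8 pipes */
        unsigned int pkrs = 2;            /* hypothetical: 4 packers */

        uint64_t mod = AMD_FMT_MOD |
                       AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX11) |
                       AMD_FMT_MOD_SET(TILE, AMD_FMT_MOD_TILE_GFX9_64K_R_X) |
                       AMD_FMT_MOD_SET(PIPE_XOR_BITS, pipe_xor_bits) |
                       AMD_FMT_MOD_SET(PACKERS, pkrs) |
                       AMD_FMT_MOD_SET(DCC, 1) |
                       AMD_FMT_MOD_SET(DCC_INDEPENDENT_128B, 1) |
                       AMD_FMT_MOD_SET(DCC_MAX_COMPRESSED_BLOCK, AMD_FMT_MOD_DCC_BLOCK_128B);

        /* Decoding mirrors the fields fill_gfx9_tiling_info_from_modifier() consumes. */
        printf("tile version %u, swizzle %u, pipe_xor_bits %u, packers %u, dcc %u\n",
               (unsigned int)AMD_FMT_MOD_GET(TILE_VERSION, mod),
               (unsigned int)AMD_FMT_MOD_GET(TILE, mod),
               (unsigned int)AMD_FMT_MOD_GET(PIPE_XOR_BITS, mod),
               (unsigned int)AMD_FMT_MOD_GET(PACKERS, mod),
               (unsigned int)AMD_FMT_MOD_GET(DCC, mod));
        return 0;
}

With more than 16 pipes the TILE field would instead carry the new AMD_FMT_MOD_TILE_GFX11_256K_R_X value (31), which is the only swizzle-mode addition the uapi header needs for GFX11.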


^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2022-05-25 19:28 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-25 16:18 [PATCH 00/43] Add support for DCN 3.2 Alex Deucher
2022-05-25 16:19 ` [PATCH 01/43] drm/amd/display: remove stale config guards Alex Deucher
2022-05-25 16:19 ` [PATCH 02/43] drm/amd: Add atomfirmware.h definitions needed for DCN32/321 Alex Deucher
2022-05-25 16:19 ` [PATCH 03/43] drm/amd/display: Add DCN32/321 version identifiers Alex Deucher
2022-05-25 16:19 ` [PATCH 05/43] drm/amd/display: Add DMCUB source files and changes for DCN32/321 Alex Deucher
2022-05-25 16:19 ` [PATCH 06/43] drm/amd/display: add dcn32 IRQ changes Alex Deucher
2022-05-25 16:19 ` [PATCH 07/43] drm/amd/display: add GPIO changes for DCN32/321 Alex Deucher
2022-05-25 16:19 ` [PATCH 08/43] drm/amd/display: DML " Alex Deucher
2022-05-25 16:19 ` [PATCH 09/43] drm/amd/display: add CLKMGR " Alex Deucher
2022-05-25 16:19 ` [PATCH 10/43] drm/amd/display: add DCN32/321 specific files for Display Core Alex Deucher
2022-05-25 16:19 ` [PATCH 11/43] drm/amd/display: Add dependant changes for DCN32/321 Alex Deucher
2022-05-25 16:19 ` [PATCH 12/43] drm/amd/display: Add DM support for DCN32/DCN321 Alex Deucher
2022-05-25 16:19 ` [PATCH 13/43] drm/amd/display: add DCN32 to IP discovery table Alex Deucher
2022-05-25 16:19 ` [PATCH 14/43] drm/amd: Add GFX11 modifiers support to AMDGPU Alex Deucher
2022-05-25 19:27   ` Marek Olšák
2022-05-25 16:19 ` [PATCH 15/43] drm/amd/display: Fix USBC link creation Alex Deucher
2022-05-25 16:19 ` [PATCH 16/43] drm/amd/display: Add missing instance for clock source register Alex Deucher
2022-05-25 16:19 ` [PATCH 17/43] drm/amd/display: Use DTBCLK for valid pixel clock Alex Deucher
2022-05-25 16:19 ` [PATCH 18/43] drm/amd/display: Add guard for FCLK pstate message to PMFW for DCN321 Alex Deucher
2022-05-25 16:19 ` [PATCH 19/43] drm/amd/display: Various DML fixes to enable higher timings Alex Deucher
2022-05-25 16:19 ` [PATCH 20/43] drm/amd/display: Implement WM table transfer for DCN32/DCN321 Alex Deucher
2022-05-25 16:19 ` [PATCH 21/43] drm/amd/display: add missing interrupt handlers " Alex Deucher
2022-05-25 16:19 ` [PATCH 22/43] drm/amd/display: disable idle optimizations Alex Deucher
2022-05-25 16:19 ` [PATCH 23/43] drm/amd/display: Select correct DTO source Alex Deucher
2022-05-25 16:19 ` [PATCH 24/43] drm/amd/display: use updated clock source init routine Alex Deucher
2022-05-25 16:19 ` [PATCH 25/43] drm/amd/display: Ensure that DMCUB fw in use is loaded by DC and not VBIOS Alex Deucher
2022-05-25 16:19 ` [PATCH 26/43] drm/amd/display: Add additional guard for FCLK pstate message for DCN321 Alex Deucher
2022-05-25 16:19 ` [PATCH 27/43] drm/amd/display: Halve DTB Clock Value for DCN32 Alex Deucher
2022-05-25 16:19 ` [PATCH 28/43] drm/amd/display: set dram speed for all states Alex Deucher
2022-05-25 16:19 ` [PATCH 29/43] drm/amd/display: Disable DTB Ref Clock Switching in dcn32 Alex Deucher
2022-05-25 16:19 ` [PATCH 30/43] drm/amd/display: change dsc image width cap for dcn32 and dcn321 Alex Deucher
2022-05-25 16:19 ` [PATCH 31/43] drm/amd/display: do not override CURSOR_REQ_MODE when SubVP is not enabled Alex Deucher
2022-05-25 16:19 ` [PATCH 32/43] drm/amd/display: Remove W/A for ODM memory pins Alex Deucher
2022-05-25 16:19 ` [PATCH 33/43] drm/amd/display: add new pixel rate programming Alex Deucher
2022-05-25 16:19 ` [PATCH 34/43] drm/amd/display: set link fec status during init for DCN32 Alex Deucher
2022-05-25 16:19 ` [PATCH 35/43] drm/amd/display: update disp pattern generator routine for DCN30 Alex Deucher
2022-05-25 16:19 ` [PATCH 36/43] drm/amd/display: cleaning up smu_if to add future flexibility Alex Deucher
2022-05-25 16:19 ` [PATCH 37/43] drm/amd/display: Match dprefclk with clk registers Alex Deucher
2022-05-25 16:19 ` [PATCH 38/43] drm/amd/display: Introduce new update_clocks logic Alex Deucher
2022-05-25 16:19 ` [PATCH 39/43] drm/amd/display: FCLK P-state support updates Alex Deucher
2022-05-25 16:19 ` [PATCH 40/43] drm/amd/display: Updates for OTG and DCCG clocks Alex Deucher
2022-05-25 16:19 ` [PATCH 41/43] drm/amd/display: Implement DTBCLK ref switching on dcn32 Alex Deucher
2022-05-25 16:19 ` [PATCH 42/43] drm/amd/display: Add ODM seamless boot support Alex Deucher
2022-05-25 16:19 ` [PATCH 43/43] drm/amd/display: Drop unnecessary guard from DC resource Alex Deucher
